Author: Manahel Thabet

25 May 2019
Economist urges Government to manage disruptive technologies

If Government is to bring down the country’s soaring debt through growth, an American economist is strongly advising the Mia Mottley administration to better handle disruptive technologies.

But while many pundits advocate a complete embrace of this technology, University of Michigan economics professor Dr Linda Tesar is warning Government to expect significant short-term pain in order to gain potential benefits in the long run.

Disruptive technology has significantly altered the way businesses or entire industries have traditionally functioned.

In the most recent cases, much of it driven by e-commerce, businesses have been forced to change the way they approach their operations for fear of losing market share or becoming irrelevant.

Amazon has up-ended bricks-and-mortar retailers, Uber's ride-sharing has changed the face of public transport, and Airbnb's online hospitality marketplace has disrupted the hotel and tourism trade.

Tesar, who spoke as the Central Bank of Barbados' sixth Distinguished Visiting Fellow, said: "The thing about disruptive technology such as Airbnb and Uber, it is great, but it is disruptive for a reason because it disrupts what is already there. This means that the only way to take advantage of disruptive technology is to be willing to upset existing businesses.

“If you bring in Uber, the taxi drivers aren’t going to be very happy. For example, when driverless truck technology comes on stream, it is going to mean layoffs for many drivers.

“What are these drivers going to do? Re-training and re-tooling are all easy things to say but not so easy in practice.”

Dr Tesar explained that because disruptive technologies are not contained by conventional rules, it is difficult to plan for them adequately in a growth strategy.

She said: “We don’t know what it is because if we knew, we would put a label on it. So, it is very hard to create the conditions for it to grow.

“I think it is tempting to say that growth is the way out, but I think it is dangerous to say that one is going to grow themselves out of debt. While it is probably good in the long run, in the short term it is very painful.”

The economist suggested that with a high debt-to-GDP ratio, Barbados has only three options if Government is to attain the goal of bringing debt down to 60 per cent of GDP by 2033: taxation, cuts in spending and growth.

She said that of the three, growth is the most desirable option, but noted that in the quest for quick growth, unmanaged disruptive technologies become a concern.

Dr Tesar said: “In bringing down the debt there is only three things to do, spend less, tax more or growth. All three of those things are going to contribute to primary surplus, so that you can have a sustainable level of debt.

“Out of those, if one had to pick one, growth would be the one that they choose but getting growth is never that simple.

“How do we grow our way out of debt? One way is to create a climate where businesses can say this is a place where we want to invest.  Another is increasing efficiency by getting more out of what you are currently doing.

“Finally, there is innovation, which sometimes shows up as a technology factor in the production function. It ends up being the residual that we can’t explain.”
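Two pieces of arithmetic sit behind these remarks: the standard debt-dynamics identity, under which the debt ratio falls when growth and the primary surplus together outweigh interest costs, and growth accounting, where the "technology factor" shows up as the unexplained residual. The sketch below is purely illustrative; every figure in it is hypothetical rather than Barbados data.

```python
# 1) Debt dynamics: d_next = d * (1 + r) / (1 + g) - primary_surplus
#    (all as shares of GDP). Every figure below is hypothetical, not Barbados data.
debt_ratio = 1.20          # hypothetical starting debt-to-GDP ratio (120%)
interest_rate = 0.05       # hypothetical effective interest rate on the debt
growth_rate = 0.03         # hypothetical nominal GDP growth
primary_surplus = 0.05     # hypothetical primary surplus, 5% of GDP

for year in range(2019, 2034):
    debt_ratio = debt_ratio * (1 + interest_rate) / (1 + growth_rate) - primary_surplus
print(f"Illustrative debt-to-GDP ratio after 15 years: {debt_ratio:.0%}")

# 2) Growth accounting: the "technology factor" is the residual left over after
#    capital and labour are accounted for (again, purely illustrative numbers).
capital_share = 0.35
gdp_growth, capital_growth, labour_growth = 0.030, 0.020, 0.010
residual = gdp_growth - capital_share * capital_growth - (1 - capital_share) * labour_growth
print(f"Unexplained 'technology' contribution to growth: {residual:.3%}")
```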

Source:
https://barbadostoday.bb/2019/05/23/economist-urges-government-to-manage-disruptive-technologies/

23 May 2019
The future of AI is collaborative

AI is becoming increasingly widespread, affecting all facets of society — even Sonic drive-ins are planning to implement artificial intelligence to provide better customer service.

Of course, every time a new innovation appears in the realm of AI, fears arise regarding its potential to replace human jobs. While this is a reality of adapting to a more tech-driven society, these fears tend to ignore the collaborative and job-creating attributes that AI will have in the future.

The future’s most successful businesses will be those that learn to combine the best attributes of machines and human workers to achieve new levels of efficiency and innovation. In reality, the future of AI will be largely dependent on collaboration with living, breathing human beings.

AI augmenting human performance

In most business settings, AI does not have the ability to make crucial decisions. However, it does have the power to provide greater insights and support to ensure that you make the right decisions faster.

Simply put, there are many tasks that AI can perform faster and more efficiently than humans. It is estimated that we produce 2.5 quintillion bytes of data per day. While individual businesses only produce a tiny fraction of that total, there is no denying that trying to analyze data points drawn from diverse areas such as logistics, marketing and corporate software programs is becoming increasingly difficult.

This is where AI enters the picture. Machine learning allows AI to analyze data points at much greater speed than a person ever could, while also eliminating the risk of data entry errors that so often occur during manual work.

Such systems present data in comprehensive formats that make it far easier to identify trends, opportunities and risks to improve business practices. This trend is already having a significant impact in the business world. A 2016 survey revealed that “61 percent of those who have an innovation strategy say they are using AI to identify opportunities in data that would otherwise be missed.”

While AI may not be granted decision-making capabilities for crucial business tasks, its ability to provide reliable, error-free data is already leading to vital insights that completely transform business operations.

AI’s automation capabilities mean it is increasingly being used to streamline mundane tasks and give workers more time for high-level activities. This can make companies more efficient by lowering operating costs and improving productivity. In other words, as AI continues to advance, it will help us do our own jobs even better.

However, the biggest potential for AI comes from machine learning.

As AI learns from new data inputs, it becomes increasingly powerful and better able to assist with more complex tasks and algorithms, further expanding opportunities for collaboration and increased efficiency. Machine learning is helping AI applications better understand a wider range of instructions, and even the context in which a request is made.

This will lead to even faster and more efficient results, and help to overcome common problems we see today, such as automated customer service systems being unable to resolve complaints or requests. Even as these systems grow more advanced, however, there will still be many instances where human interaction is needed to achieve the desired resolution.

People will help machines, too

The future doesn’t merely entail AI streamlining everyday tasks or helping us do our jobs better. AI is only possible thanks to human ingenuity, and that trend isn’t going away anytime soon. Future innovations and improvements will be largely dependent on what people are able to produce.

As Russell Glenister explains in an interview with Business News Daily, “Driverless cars are only a reality because of access to training data and fast GPUs, which are both key enablers. To train driverless cars, an enormous amount of accurate data is required, and speed is key to undertake the training. Five years ago, the processors were too slow, but the introduction of GPUs made it all possible.”

Improving GPUs aren’t the only way developers will continue to play a vital role in helping AI advance to new heights. Human guidance will also be necessary to help AI “learn” how to perform desired tasks — particularly for applications where real-time human interaction will be required.

This is especially apparent in virtual assistants such as Alexa or Siri. Alexa’s recent introduction of speech normalization AI has been found to reduce errors by 81 percent, but these results were only achieved after researchers provided training using a public data set containing 500,000 samples. Similar processes have also been used to give these virtual assistants their own distinct personalities.

As AI applications become more complex and more ingrained in day-to-day life, there will also be an increased need for individuals who can explain the findings and decisions generated by a machine.

Supervision of AI applications will also be necessary to ensure that unwanted outcomes — such as discrimination and even racism — are detected and eliminated to prevent harm. No matter how smart AI becomes, it will continue to require human guidance to find new solutions and better fulfill its intended function.

Though AI offers boundless opportunities for innovation and improvement, it won’t be able to achieve its full potential on its own. A collaborative future will see programmers, engineers and everyday consumers and workers more fully integrating AI into their daily lives.

When people and AI work together, the possibilities will be truly limitless.

Source:
https://thenextweb.com/podium/2019/05/21/the-future-of-ai-is-collaborative/

22 May 2019
In the 'post-digital' era, disruptive technologies are must-haves for survival

Call it the post-digital era. That’s the era that many organizations have now entered – the era in which advanced digital technologies are must-haves in order to stay competitive in their markets. At least, that is the way that Accenture sees it.

The research and consulting firm has recently published its annual Technology Vision report, looking at where organizations have been putting their technology investments, and which tools and trends they see as top priorities in the next year.

Information Management spoke with Michael Biltz, who heads the research for the annual study, about what this year’s study revealed and what lessons it holds for software professionals and data scientists.


Information Management: In your recent Technology Vision report, what are the most significant findings of interest to data scientists and data analysts?

Michael Biltz: The overarching takeaway from the 2019 Accenture Technology Vision report is that we’re entering a “post-digital” era in which digital technologies are no longer a competitive advantage – they’re the price of admission. This is supported by our research, which found that over 90 percent of companies invested in digital transformation in 2018, with collective spend reaching approximately $1.1 trillion. This indicates that we’re now at a point where practically every company is driving its business with digital capabilities – and at a faster rate than most people have anticipated.

This new environment necessitates new rules for business; what got your company to where it is today will not be enough to succeed in the new post-digital era.

For example, technology is creating a world of intensely customized and on-demand experiences, where every moment will become a potential market – what we call “momentary markets.” Already, 85 percent of those surveyed believe that customer demands are moving their organizations toward individualized and on-demand delivery models, and that the integration of these two capabilities represents the next big wave of competitive advantage.

In other words, success will be judged by companies’ ability to combine a deep understanding of their customers with individualized services delivered at just the right moment.


IM: How do those findings compare with the results of similar previous studies by Accenture?

Biltz: One of the great things about publishing this report annually is that we can observe how the trends evolve year-over-year, with the latest report drawing on insights from earlier editions. The 2019 report builds on last year’s theme of ‘Intelligent Enterprise Unleashed: Redefine Your Company Based on the Company You Keep,’ which focused on how rapid advancements in technologies—including artificial intelligence (AI), advanced analytics and the cloud—are accelerating the creation of intelligent enterprises and enabling companies to integrate themselves into people’s lives, changing the way people work and live.

Expanding on last year’s theme, the 2019 report discusses how the digital enterprise is at a turning point, with businesses progressing on their digital journeys. But digital is no longer a differentiating advantage—it’s now the price of admission. In this emerging “post-digital” world, in which new technologies are rapidly evolving people’s expectations and behavior, success will be based on an organization’s ability to deliver personalized “realities” for customers, employees and business partners. This will require understanding people at a holistic level and recognizing that their outlooks and needs change at a moment’s notice.

IM: Were you surprised by any of these findings, and why so or why not?

Biltz: We were surprised to see that on the one hand, companies are investing time and money into transforming their services and job functions, yet commitment to transforming and reskilling their workforces largely hasn’t kept pace. And when new roles and capabilities are created, many organizations still try to apply traditional (but increasingly outdated) tools, organization structures, training and incentives to support them. This is creating what we call a "Digital Divide" between companies and their employees.


IM: What are the data-related themes and technologies that will be of greatest interest to organizations over the next three years?

Biltz: While all of the themes described in my response to the first question are relevant to forward-looking organizations, the first trend, ‘DARQ Power: Understanding the DNA of DARQ,’ highlights four emerging technologies that companies should explore in order to remain competitive. These are distributed ledger technology (DLT), artificial intelligence (AI), extended reality (XR) and quantum computing (Q).

While these technologies are at various phases of maturity and application, each represents opportunities for businesses to stay ahead of the curve, differentiate themselves and vastly improve products and services.


IM: Did your study shed any light on how prepared organizations are to adopt these technologies and get expected value from them?

Biltz: Yes, our research into the development and application of DARQ technologies was quite revealing. Our research found that 89 percent of businesses are already experimenting with one or more of these technologies, expecting them to be key differentiators. However, the rate of adoption varies between the four technologies, as they’re currently at varying stages of maturity.

Here are a few specifics for each DARQ capability:

  • Distributed Ledger Technologies: 65 percent of executives surveyed reported that their companies are currently piloting or have adopted distributed ledger technologies into their business; 23 percent are planning to pilot this kind of technology in the future.
  • Artificial Intelligence: When asked to rank which of the DARQ technologies will have the greatest impact on their organization over the next three years, 41 percent listed AI as number one. Already, 67 percent are piloting AI solutions, or have already adopted AI across one or more business units.
  • Extended Reality: 62 percent are leveraging XR technologies, and this percentage is set to increase, with 24 percent evaluating how to use XR in the future.
  • Quantum Computing: Although quantum computing is the furthest of the DARQ technologies from full maturity, we’re seeing rapid advances in this area. Consider this: it took 19 years to get from a chip with two qubits (the quantum equivalent of a traditional computer’s bit) to one with 17; within two years of that, Google unveiled a 72-qubit chip. And the technology is becoming more readily available, with software companies releasing platforms that allow organizations without their own quantum computers to access quantum hardware via the cloud.


IM: What is your advice on how IT leaders can best educate the C-suite on which so-called disruptive technologies are worth investing in and which aren’t a good match?

Biltz: It’s important to first focus on your long-term vision and strategy for the company, asking questions such as, “Who do we as a company want to be in 5-10 years? What markets will we target? And what role do we want to play in the emerging digital ecosystems?”

Once you understand the answers to these questions, not only do the specific technologies fall into place, they also tend to drive a level of innovation that is usually only expected from the likes of the technology giants.


IM: What industries or types of organizations are the leaders with cutting-edge and disruptive technologies that other organizations can learn from?

Biltz: I think organizations can best learn from companies – regardless of industry – that are exploring leveraging more than one DARQ capability to unlock value. This is where true disruption lies: those exploring how to integrate these seemingly standalone technologies together will be better positioned to reimagine their organizations and set new standards for differentiation within their industries.

Volkswagen is an excellent example. The company is using quantum computing to test traffic flow optimization, as well as to simulate the chemical structure of batteries to accelerate development. To further bolster the results from quantum computing, the company is teaming up with Nvidia to add AI capabilities to future models.

Volkswagen is also testing distributed ledgers to protect cars from hackers, facilitate automatic payments at gas stations, create tamper-proof odometers, and more. And the company is using augmented reality to provide step-by-step instructions to help its employees service cars.

Read more:
https://www.dig-in.com/news/in-the-post-digital-era-disruptive-technologies-are-must-haves-for-survival

21 May 2019
Artificial intelligence better than humans at spotting lung cancer

Researchers have used a deep-learning algorithm to detect lung cancer accurately from computed tomography scans. The results of the study indicate that artificial intelligence can outperform human evaluation of these scans.

New research suggests that a computer algorithm may be better than radiologists at detecting lung cancer.

Lung cancer causes almost 160,000 deaths in the United States, according to the most recent estimates. The condition is the leading cause of cancer-related death in the U.S., and early detection is crucial for both stopping the spread of tumors and improving patient outcomes.

As an alternative to chest X-rays, healthcare professionals have recently been using computed tomography (CT) scans to screen for lung cancer.

In fact, some scientists argue that CT scans are superior to X-rays for lung cancer detection, and research has shown that low-dose CT (LDCT) in particular has reduced lung cancer deaths by 20%.

However, the LDCT procedure is still plagued by a high rate of false positives and false negatives. These errors typically delay the diagnosis of lung cancer until the disease has reached an advanced stage, when it becomes much harder to treat.

New research may safeguard against these errors. A group of scientists has used artificial intelligence (AI) techniques to detect lung tumors in LDCT scans.

Daniel Tse, from the Google Health Research group in Mountain View, CA, is the corresponding author of the study, the findings of which appear in the journal Nature Medicine.

‘Model outperformed all six radiologists’

Tse and colleagues applied a form of AI called deep learning to 42,290 LDCT scans, which they accessed from the Northwestern Electronic Data Warehouse and other data sources belonging to the Northwestern Medicine hospitals in Chicago, IL.

The deep-learning algorithm enables computers to learn by example. In this case, the researchers trained the system using a primary LDCT scan together with an earlier LDCT scan, if it was available.

Prior LDCT scans are useful because they can reveal an abnormal growth rate of lung nodules, thus indicating malignancy.
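The study's architecture is not reproduced in this excerpt, but as a purely illustrative sketch of the two-input idea (a current scan and a prior scan feeding a single classifier), here is a toy PyTorch-style model. The layer sizes, shapes and names are assumptions for illustration, not the model described in the Nature Medicine paper.

```python
import torch
import torch.nn as nn

class TwoScanClassifier(nn.Module):
    """Toy two-input 3D CNN: a current LDCT volume plus a prior volume.
    Purely illustrative; not the architecture from the Nature Medicine paper."""
    def __init__(self):
        super().__init__()
        # One small 3D-convolutional encoder shared by both scans
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # -> (batch, 16, 1, 1, 1)
            nn.Flatten(),
        )
        self.head = nn.Linear(16 * 2, 1)  # combine both encodings -> malignancy score

    def forward(self, current_scan, prior_scan):
        features = torch.cat([self.encoder(current_scan), self.encoder(prior_scan)], dim=1)
        return torch.sigmoid(self.head(features))

# Tiny random volumes stand in for real CT data: (batch, channels, depth, height, width)
model = TwoScanClassifier()
current = torch.randn(1, 1, 32, 64, 64)
prior = torch.randn(1, 1, 32, 64, 64)
print(model(current, prior))   # probability-like malignancy score
```

The point of the second input is that a model can compare the two volumes and pick up on change over time, which is exactly the growth-rate signal the researchers describe as useful for indicating malignancy.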

Read more:
https://www.medicalnewstoday.com/articles/325223.php

20 May 2019

New Photonic Microchip Mimics Basic Brain Function

Although the bleeding edge of artificial intelligence has provided us with powerful tools that can outmatch us in specific tasks and best us in even our most challenging games, they all operate as isolated algorithms with none of the incredible associative power of the human brain. Our current computational architecture can’t match the efficiency of our own minds, but a joint team of researchers from the Universities of Münster, Oxford, and Exeter discovered a way to begin narrowing that gap by creating a small artificial neurosynaptic network powered by light.

Today’s computers store memory separately from the processor. The human brain, on the other hand, stores memory in the synapses that serve as the connective tissue between neurons. Rather than transferring memory for processing, the brain has concurrent access to that memory when connected neurons fire. While neuroscience generally accepts the theory of synaptic memory storage, we still lack a definitive understanding of how it works.

Even so, we understand that the brain handles multiple processes simultaneously. The human brain may not match the numeric computational efficiency of a basic calculator by default, but it takes a supercomputer drawing enough electricity to power 10,000 homes to outpace it. The human brain sets a high benchmark for its artificial counterpart in processing power alone, and its significant architectural differences make its capabilities so much more dynamic that such comparisons become almost pointless.

The multinational research team built its artificial neurosynaptic network from four neurons and 60 synapses, arranged with 15 synapses per neuron. The human cerebrum, by contrast, contains billions of neurons and an even larger number of synapses (a ratio of roughly 10,000:1). Given that difference in scale, it is easy to see how this major accomplishment is only an initial step in a long journey.

But that doesn’t make the accomplishment any less impressive. The way the chip functions defies simple explanation, but in essence, it uses a series of resonators to guide incoming laser light pulses so they reach the artificial neurons as intended. Without proper wave guidance, the artificial neurons would fail to receive consistent input and would have no practical computational function. Made of phase-change material, the artificial neurons change their properties in response to a focused laser, and this process successfully imitates one tiny piece of the human brain.
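As a functional analogy only, ignoring the photonic hardware entirely, the reported scale of the network (four neurons with 15 synapses each, 60 synapses in total) can be sketched as a simple weighted-sum-and-threshold model; the weights, inputs and threshold below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Functional analogy of the reported network: 4 neurons, 15 synapses each (60 total).
# This ignores the photonic hardware entirely; weights and threshold are made up.
n_neurons, synapses_per_neuron = 4, 15
weights = rng.uniform(0, 1, size=(n_neurons, synapses_per_neuron))  # synaptic strengths
threshold = 4.0

inputs = rng.integers(0, 2, size=synapses_per_neuron)  # incoming pulses (0 or 1)
activations = weights @ inputs                          # weighted sum per neuron
fires = activations > threshold                         # neuron "fires" above threshold
print(fires)
```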

Read more:
https://www.extremetech.com/computing/291627-new-photonic-microchip-mimics-basic-brain-function

19 May 2019

Scientists: We Need to Protect the Solar System from Space Mining

Save The Solar System

A group of scientists want to declare much of the solar system to be official “space wilderness” in order to protect it from space mining. As The Guardian reports, the proposal calls for more than 85 percent of the solar system to be protected from human development.

“If we don’t think about this now, we will go ahead as we always have, and in a few hundred years we will face an extreme crisis, much worse than we have on Earth now,” Martin Elvis, senior astrophysicist at the Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, and lead author of the study, told The Guardian. “Once you’ve exploited the solar system, there’s nowhere left to go.”

Iron Horse

The research will be published in an upcoming issue of the journal Acta Astronautica.

It suggests that we could use up as much as an eighth of the solar system’s supply of iron — the researchers’ proposed “tripwire” threshold, beyond which we would risk exhausting the solar system’s usable resources — in just 400 years.
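The logic behind a one-eighth "tripwire" is straightforward exponential arithmetic: if cumulative consumption keeps doubling, then once an eighth of a finite supply has been used, only three more doublings remain before it is exhausted. A minimal sketch follows; the doubling time is an assumption for illustration, not a figure from the study.

```python
# Why a one-eighth "tripwire" matters: with exponential growth, once 1/8 of a finite
# supply has been used, only three more doublings remain before it is gone.
# The doubling time below is an assumption for illustration, not a figure from the study.

fraction_used = 1 / 8
doubling_time_years = 20   # assumed doubling time of cumulative consumption

doublings_left = 0
while fraction_used < 1:
    fraction_used *= 2
    doublings_left += 1

print(f"Doublings from tripwire to exhaustion: {doublings_left}")                    # 3
print(f"Years from tripwire to exhaustion: {doublings_left * doubling_time_years}")  # 60
```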

Asteroid Farmer

Numerous private companies have suggested that space mining could further human advancements in space — while turning a huge profit. For instance, U.S.-based mining company Planetary Resources is already planning to look for “critical water resources necessary for human expansion in space,” according to its website.

And the next generation of human explorers is bound to be swept up by the great promise of space resources as well. The Colorado School of Mines has started offering a PhD program that focuses on the “exploration, extraction, and use of [space] resources.”

Gold Rush

What’s less clear is whether humankind has learned its lesson here on Earth.

“If everything goes right, we could be sending our first mining missions into space within 10 years,” Elvis told The Guardian. “Once it starts and somebody makes an enormous profit, there will be the equivalent of a gold rush. We need to take it seriously.”

Source:
https://futurism.com/scientists-stop-space-mining-solar-system

18 May 2019

The First Steps To Digital Transformation? Get Your Data In Order

Recently, Gartner announced its top 10 strategic technology trends for 2019. It is a nice list, touching on digital transformation trends that range from empowered edge computing to artificial intelligence-driven autonomous things. But while Gartner’s trends sound great in annual reports and Forbes articles, operationally, most enterprises aren’t properly (or digitally) prepared to adopt these trends. The reason why? Today’s pace of business and the disorderly data that’s needed to make sense of it all.

In the past, IT environments were simpler and more accessible for humans. But with the advent of cloud, containers, multi-modal delivery and other new technologies resulting in inordinately massive and complex environments, IT is being forced to move at machine speed, rendering manual processes too slow and inefficient.

To keep up with the rapid pace and scale of today’s digital environments, enterprises are turning to AIOps, which is powered by machine learning (ML) and artificial intelligence (AI). Unfortunately, ML-based algorithms and AI-based automation, key elements of unlocking digital transformation, are easier said than done. The underlying reason is that ML-based algorithms, by themselves, aren’t sophisticated enough to deal with today’s ephemeral, containerized, cloud-based world. ML needs to evolve into AI, and to do that, it needs cleaner actionable data to automate processes.

But attaining high-quality data presents its own unique challenges, and enterprises that do not have the right strategy in place will encounter cascading problems when trying to implement digital transformation initiatives in the future.

How To Build A High-Quality Data Strategy — Two Types Of Data

Imagine cooking a meal from scratch only to realize you forgot to chop an onion. You might be able to add it in later, but it won’t add the same texture and flavor. Too often, enterprises embark on an AI/ML transformation only to realize mid-development that they are missing key performance indicator (KPI) data that they did not foresee needing. Such mid-process realizations can have deleterious effects on a digital transformation initiative, stalling or even crippling its progress. Simply put, AI/ML doesn’t function without the right data.

The first step to building a high-quality data strategy is realizing that you need two separate data strategies: one for historical data and the other for real-time data or continuous learning.

Historical data is crucial for AI/ML strategies and serves as the fundamental building block for any effective anomaly detection, predictor or pattern analysis implementation. However, getting the right historical training data is much more difficult than many might assume.

There are several key questions to consider:

• What do your end goals and use cases for automation look like?

• What data do those use cases demand?

• How much of that data do you need?

• At what fidelity do you need that data?

Next, realize that training AI/ML on historical data is not enough. It needs to ingest real-time data to respond to and automate processes. Real-time data is the fuel that allows the ML algorithms to learn and adapt to new situations and environments. Unfortunately, real-time data presents its own set of challenges, too. The volume, velocity, variety and veracity of data can be overwhelming and expensive to manage.

Finally, enterprises must ensure the ML algorithms don’t acquire bad habits as a consequence of using poor data. And as with bad human habits, it is hard to get an AI to unlearn a bad habit once it has formed. Specifically, these could be outliers that are erroneously deemed normal, or data gaps that skew newly learned behavior. Fundamentally, an AI/ML platform that learns from bad data can ultimately generate extraneous false alerts and have a negative impact on IT operations. There are multiple ways to avoid going down this path, but they all boil down to one important thing: data quality.
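As an illustration of the kind of data hygiene being described here, the sketch below screens a single operational metric for gaps and outliers before it would be handed to a learning system; the metric name, values and thresholds are all assumptions.

```python
import pandas as pd

# Hypothetical time series of an operational metric, with a gap and an outlier.
series = pd.Series(
    [100, 102, 99, None, 101, 650, 98, 103],
    index=pd.date_range("2019-05-01", periods=8, freq="H"),
    name="requests_per_minute",
)

gaps = series.isna()                                   # missing samples that could skew learning
z_scores = (series - series.mean()) / series.std()     # crude outlier screen
outliers = z_scores.abs() > 2                          # threshold is an arbitrary choice

print("Gaps:\n", series[gaps])
print("Suspected outliers:\n", series[outliers])

clean = series[~gaps & ~outliers]                      # only vetted data goes to training
```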

The Two Most Important Ingredients For Data Quality

Historic and real-time training data are foundational to AI, ML and automation. However, data quality remains a major sore point for enterprises that underestimate the complexity of that challenge. Fortunately, data quality issues don’t have to be a terminal problem if approached strategically.

The most important step is to have full visibility both horizontally across operational silos and vertically, deep into infrastructure layers. You won’t know what KPIs are going to be important, so an ideal solution is one that allows you to ingest as much data as possible from as many places as possible right from the start.

It is also crucial that data be stored and normalized in a way that connects it to other data. Data that rests in silos will never be able to power automation; it has to have context. An ideal solution is one that can ingest data and contextualize it simultaneously, because stitching data together, normalizing it and correlating it after ingestion is time-consuming and difficult.

Read more:
https://www.forbes.com/sites/forbestechcouncil/2019/05/13/the-first-steps-to-digital-transformation-get-your-data-in-order/#432cb38b3c61

16 May 2019
How a 99-Year-Old Company Pivoted with a Digital Transformation

When you hear Pitney Bowes, the first thing that probably comes to mind is postage scales. But over the past five years, this near-centenarian company has undergone a dramatic transformation. In a pivot toward shipping and e-commerce technology, Pitney Bowes has implemented a digital transformation strategy that’s arguably setting the new industry standard. (Just ask PwC, which has said: "Doing digital right doesn’t mean you need to become the next Amazon, Netflix, or Google — or even the next Pitney Bowes.")

Of course, no digital transformation can be called truly successful if it’s not driving business results. After years of stalled growth, in 2018 Pitney Bowes saw its second consecutive year of revenue growth, marking its best revenue growth in a decade.

So how does a 99-year-old mailing solutions company become a leading technology company? I recently sat down with its chief marketing officer, Bill Borrelle, to find out how Pitney Bowes has successfully reinvented itself for the modern world.

1. Change people’s perceptions.

“As a marketer,” says Bill, “we needed to recraft the narrative of the company, leaning on the proof points that already existed, and laying the runway for where we would go.”

In order to start changing perceptions and help both employees and consumers think differently about the company, Bill’s team launched a branding effort that reframed Pitney Bowes as “the craftsmen of commerce.” With this new lens, they’re placing the company at the crossroads of two seemingly counterintuitive ideas–the legacy of its history and the modernity of technology–and linking their past and their future together.

As just one proof point of their digital transformation, these “craftsmen of commerce” created the SendPro, a first-of-its-kind sending device for the modern mailer that combines hardware, software, and Internet-of-Things capabilities. The product is the embodiment of the tension between heritage and modernity–leveraging the precision the company is known for with the technology that’s shaping the future of commerce.

Already, the industry has started to see Pitney Bowes in a new light, with the company winning design awards typically won by cutting-edge technology companies. Last year, for example, its SendPro C-Series was recognized by the International Design Awards.

2. Transform the customer experience.

At the heart of its digital transformation strategy, Pitney Bowes aims to reimagine the customer experience. Says Bill, “Our goal was to be completely relevant to our clients in today’s changing world of commerce.” In order to deliver on that, the company needed to create products that would seamlessly merge the physical and the digital worlds.

This meant that all mailing devices became smart–suddenly, those iconic postage meters were connected to the internet, delivering real-time information. With a product called Relay, customers can now easily choose whether to deliver a message through physical mail or through email. All of these solutions are underpinned by the Pitney Bowes Commerce Cloud, a SaaS common data platform built on AWS.

As another milestone in its digital transformation journey, Pitney Bowes created a digital ecosystem that would enhance the customer experience regardless of how customers interact with the company. Today, 600,000 of the roughly one million Pitney Bowes customers engage with its streamlined online experience–where they can do everything from buying supplies and viewing postage usage to learning USPS rates and seeking technical support.

The result? Customer satisfaction ratings have increased significantly, and Pitney Bowes has more than doubled online sales thanks to its e-commerce experience.

3. Rethink the fundamentals of marketing.

Today’s technology can’t be marketed with outdated thinking. So while Pitney Bowes employs many of the hallmarks of modern marketing–data that enables unprecedented personalization and a tech stack with 70 marketing technologies–Bill also made a point of modernizing the foundational principles. Revitalizing the “4 P’s” from the 1960s, he introduced the mantra of the modern marketer: precision, pace, profit, and people.

In doing so, Bill was delivering a larger message about staying on the cutting edge of marketing just as they push the limits of technology. As he says, “I wanted the 250 marketers at Pitney Bowes to be modern marketers and raise the bar for ourselves.”

Toward the end of our conversation, Bill revealed something I thought was really fascinating: He told me that 80 percent of consumers still prefer to receive physical statements and invoices. It was a reminder that for all the strides companies like Pitney Bowes are making, there’s still a broader digital transformation happening today.

Or, as Bill says: “We’re a different company, but we’re still going. We have more to do.”

What does this mean for YOU? No matter what your size or industry, how can you take the lessons learned from Pitney Bowes’ digital transformations and apply them to your own business? With clever marketing and a focus on customer experience, you too can transform your business and prepare it for 2020 and beyond.

Source:
https://www.inc.com/dave-kerpen/how-a-99-year-old-company-pivoted-with-a-digital-transformation.html

15 May 2019

How to Assess Digital Transformation Efforts

Not all organizations are succeeding with their digital transformation efforts. For one thing, the focus of their success metrics may be too narrow.

The operative word in digital transformation is “transformation,” not digital, which at least partially explains the concept’s successes and failures. If your company emphasizes digital at the expense of transformation, it may be overly focused on technology. If your company emphasizes transformation, it is more likely to address the cultural and technological aspects of digital transformation.

The different approaches use different sets of success metrics. Specifically, while one company may narrowly focus on metrics related to cloud migration or DevOps, the other measures success based on business objectives.

“Digital transformation is almost a bit of a bad name because it encourages people to buy digital products and tools as opposed to reconstructing their businesses,” said Mimi Brooks, CEO of Logical Design Solutions (LDS), a consulting firm that designs digital solutions for global enterprises. “There are still a lot of [organizations] that think becoming a digital business is about buying the digital platforms and tools that everybody else has, so I think we’ve got a bit of an idea problem there.”

Assessment should be continuous

One thing that differentiates today’s digital businesses from traditional companies is time. In the digital world, everything happens at an accelerated rate and to keep pace, businesses must evolve from periodic processes and mindsets to continuous processes and mindsets. For the past couple of decades, software development teams have been moving along a continuum of Agile, DevOps, and continuous integration and continuous delivery in a constant quest to deliver value to customers at an ever-accelerating rate. The problem with digital transformation efforts is that the continuous process mindset has not yet bubbled up and across non-digital native companies in many cases.

"The business side of the house has to be as aligned around [Agile and DevOps] because they’re going to have to create roadmaps and requirements that can be turned into user stories that keep an engineering team delivering high velocity," said Mike Cowden, president of digital transformation and software development enablement consulting company Slalom Build. "It’s a complete organizational mindset that has to take place in order for digital transformation to even happen."

While digital natives have had the luxury of starting with a clean slate, traditional companies have to overcome the mentality of assessment, planning, execution, and evaluation as events versus continuous processes.

Read more:
https://www.informationweek.com/strategic-cio/how-to-assess-digital-transformation-efforts/a/d-id/1334693

14 May 2019

Five Things To Know About AI

Artificial intelligence (AI) holds a lot of promise when it comes to almost every facet of how businesses are run. Global spending on AI is rising with no signs of slowing down — IDC estimates that organizations will invest $35.8 billion in AI systems this year. That’s an increase of 44% from 2018. With all the fanfare, it’s easy to get lost in the noise and excitement — and with all of the vendors out there touting their various AI-based solutions, it’s also easy to get confused about which is which and what does what.

So, how do you muddle through the noise and make sure you really understand AI? Here are five things I believe you should be aware of when it comes to providing an AI solution or evaluating one for your business.

1. AI-Washing

Because AI is a trending technology that many believe holds great potential, vendors will sometimes claim they have AI-enabled capabilities when they really don’t. There’s no ruling body that defines what “AI” means — vendors are free to use it however they want. The same thing happened when the cloud entered the market, which caused the term “cloud washing” to emerge for products and services that were hyped as cloud-based but weren’t actually in the cloud. The same goes for “greenwashing” where companies lead consumers to believe they follow environmental best practices but really don’t.

Today’s “AI washing” makes it harder to tell truth from fiction. A Gartner press release from 2017 warned that AI washing is creating confusion and obscuring the technology’s real benefits. Many vendors are focused on the goal of “simply marketing their products as AI-based rather than focusing first on identifying the needs, potential uses, and the business value to customers,” according to Gartner research vice president Jim Hare.

It’s important to be clear about what AI is and about how a vendor is using the term. For instance, AI isn’t the same thing as automation. Automation allows process scripts to take care of previously manual, repetitive tasks, but the system isn’t learning and evolving. It’s just doing what it’s told to do. AI’s goal is generally to mimic human behavior and learn as it goes to become better at the tasks assigned to it over time.
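One way to see the distinction: an automated script applies a fixed, hand-written rule until someone changes it, while a machine-learning model infers its rule from labelled examples and can be retrained as the data changes. A minimal sketch with toy data follows; the keyword, labels and choice of model are arbitrary.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

messages = ["win a free prize now", "meeting moved to 3pm",
            "free gift card waiting", "lunch tomorrow?"]
labels = [1, 0, 1, 0]   # 1 = spam, 0 = not spam (toy labels)

# Automation: a hand-written rule that never changes unless a person edits it.
rule_based = [1 if "free" in m else 0 for m in messages]

# Machine learning: the rule is inferred from labelled examples and can be retrained.
vectorizer = CountVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(messages), labels)
learned = model.predict(vectorizer.transform(["claim your free prize"]))

print("Rule-based:", rule_based, "Learned prediction:", learned)
```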

2. Potential For Misuse

As with anything, AI can be used for nefarious purposes. A tool is only as “good” or “bad” as the hands that hold it. There are those who seek to use AI to control their citizenry via a nationwide network of facial recognition cameras (paywall) or build autonomous weapons, which I would consider bad applications. Fortunately, many hands have already found beneficial uses for AI, including accurate medical diagnoses, new cancer treatment approaches and language translation.

Another positive sign is that governments are working toward regulation and accountability. France and Canada announced plans to start the International Panel on AI to explore “AI issues and best practices,” and the U.S. Pentagon asked the Defense Innovation Board to create an ethical framework for using AI in warfare.

Ultimately, I believe AI is the best hope for overcoming the potential misuse of AI. For instance, much has been made of the inherent bias that keeps showing up in AI systems. IBM, for example, recently announced its automated bias-detection solutions. Since humanity can’t put the AI genie back in the bottle, we can devise good AI systems to help countermand its potential negative applications.

3. The Idea That AI Will Take People’s Jobs

Yes, it will eliminate some jobs — typically low-level and repetitive work — but it will likely create jobs, too. Gartner forecasted that AI will create more jobs than it eliminates by 2020, with a net increase of over two million jobs in 2025.

I believe AI also will take on tasks which the human brain is simply incapable of handling. AI can be trained to analyze vast data sets to gain insights that could elude the human mind. This could be particularly helpful in the creation of new drugs, saving time, effort and millions of dollars on development and clinical trials. I also believe AI could be useful for finding unique biological markers that enable individual-specific treatment. That said, this doesn’t mean that human oversight and involvement isn’t required.

4. The Idea That AI Will Change The Way People Think

AI probably won’t cause humans to rely on machines to do their jobs and make their decisions. AI, however well-developed it gets, can never replace the complexities of the human brain. That makes it even less reliable than our brains — meaning that AI complements, rather than replaces, humans.

It’s unlikely that AI will yield flawless results. For instance, AI-powered speech transcriptions still serve up hilarious errors. Facial recognition programs still misidentify people. We can think of AI as an assistant to final human judgment, but a human must still be in the loop.

5. Lack Of Education

Here’s what I think is the biggest problem with AI in today’s world: We just don’t have enough people who are educated on how it works and how to leverage it. I think we’re staring right into the face of a looming skills gap.

For instance, an O’Reilly report on AI (via Information Age) found that over half of respondents felt their organizations needed machine learning experts and data scientists (although O’Reilly is an e-learning provider). And according to Deloitte, “Since nearly every major company is actively looking for data science talent, the demand has rapidly outpaced the supply of people with required skills.” In the United States alone, McKinsey projected (via Deloitte) that there will be a shortfall of 250,000 data scientists by 2024.

Students need to be learning about AI starting as early as middle school. Our children need to be equipped to handle the inevitable future that AI will bring. Otherwise, the shortage of workers who can actually leverage these technologies will expand. And that’s not good for anyone.

Act With Intelligence

Between the extremes of marketing hype and visions of Armageddon lies the truth of AI. Yes, there’s potential for misuse, but the majority of applications are and will be beneficial. You can’t ignore AI; organizations that find appropriate use cases for AI may get started sooner and find success sooner than their laggard competitors.

Source:
https://www.forbes.com/sites/forbesbusinessdevelopmentcouncil/2019/05/14/five-things-to-know-about-ai/#6106c7649b71