
Category: Disruptive Technology

26 May 2020

Reality Check: The Benefits of Artificial Intelligence

Gartner believes Artificial Intelligence (AI) security will be a top strategic technology trend in 2020, and that enterprises must gain awareness of AI's impact on the security space. However, many enterprise IT leaders still lack a comprehensive understanding of the technology and what it can realistically achieve today. It is important for leaders to question exaggerated marketing claims and over-hyped promises associated with AI so that there is no confusion as to the technology's defining capabilities.

IT leaders should take a step back and consider whether their company and team are at a high enough level of security maturity to adopt advanced technology such as AI successfully. The organization's business goals and current focus areas should align with the capabilities that AI can provide.

A study conducted by Widmeyer revealed that IT executives in the U.S. believe that AI will significantly change security over the next several years, enabling IT teams to evolve their capabilities as quickly as their adversaries.

Of course, AI can enhance cybersecurity and increase effectiveness, but it cannot solve every threat and cannot replace live security analysts yet. Today, security teams use modern Machine Learning (ML) in conjunction with automation, to minimize false positives and increase productivity.

As adoption of AI in security continues to increase, it is critical that enterprise IT leaders face the current realities and misconceptions of AI, such as:

Artificial Intelligence as a Silver Bullet
AI is not a solution; it is an enhancement. Many IT decision-makers mistakenly consider AI a silver bullet that can solve all their current IT security challenges without fully understanding how to use the technology and what its limitations are. We have seen AI reduce the complexity of the security analyst's job by enabling automation, triggering the delivery of cyber incident context, and prioritizing fixes. Yet, security vendors continue to tout further, exaggerated AI-enabled capabilities of their solutions without being able to point to AI's specific outcomes.

If Artificial Intelligence is identified as the key, standalone method for protecting an organization from cyberthreats, the overpromise of AI, coupled with the inability to clearly identify its accomplishments, can have a very negative impact on the strength of an organization's security program and on the reputation of the security leader. In this situation, Chief Information Security Officers (CISOs) will, unfortunately, realize that AI has limitations and that the technology alone is unable to deliver the desired results.

This is especially concerning given that 48% of enterprises say their budgets for AI in cybersecurity will increase by 29% this year, according to Capgemini.

Automation Versus Artificial Intelligence
We have seen progress surrounding AI in the security industry, such as the enhanced use of ML technology to recognize behaviors and find security anomalies. In most cases, security technology can now correlate the irregular behavior with threat intelligence and contextual data from other systems. It can also use automated investigative actions to provide an analyst with a strong picture of something being bad or not with minimal human intervention.
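As a rough illustration of the pattern described above, here is a minimal sketch of unsupervised anomaly detection over login events with scikit-learn, followed by a hypothetical threat-intelligence lookup. The feature set, contamination rate, and intel table are assumptions for illustration, not any vendor's actual pipeline.

```python
# Minimal sketch: flag anomalous login events, then enrich them with (hypothetical) threat intel.
# The features, contamination rate, and intel table are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative features per event: [hour_of_day, megabytes_transferred, failed_logins]
events = np.array([
    [9, 1.2, 0], [10, 0.8, 1], [11, 2.0, 0], [14, 1.5, 0], [16, 0.9, 0],
    [3, 250.0, 7],   # odd hour, large transfer, many failed logins
])

model = IsolationForest(contamination=0.2, random_state=0).fit(events)
scores = model.decision_function(events)   # lower score = more anomalous
flags = model.predict(events)              # -1 = anomaly, 1 = normal

threat_intel = {5: "destination IP seen in a recent phishing campaign"}  # assumed lookup table
for i, flag in enumerate(flags):
    if flag == -1:
        print(f"event {i}: score={scores[i]:.3f}, context: {threat_intel.get(i, 'no matching intel')}")
```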

A security leader should consider the types of ML models in use, the biases of those models, the capabilities possible through automation, and if their solution is intelligent enough to build integrations or collect necessary data from non-AI assets.

AI can handle the bulk of a security analyst's work, but not all of it. As a society, we still do not have enough trust in AI to take it to the next level, which would mean fully trusting AI to take corrective action on the anomalies it identifies. Those actions still require human intervention and judgment.

Biased Decisions and Human Error
It is important to consider that AI can make bad or wrong decisions. Because humans create and train the models behind AI, those models can make biased decisions based on the information they receive.

Models can be steered toward an outcome an attacker desires, and security teams should prepare for malicious insiders trying to exploit AI biases. Deliberate attempts to influence an AI's bias can prove extremely damaging, especially in the legal sector.

By feeding AI false information, bad actors can trick it into implicating someone in a crime. As an example, just last year a judge ordered Amazon to turn over Echo recordings in a double murder case. In instances such as these, a hacker has the potential to wrongfully influence ML models and manipulate AI to put an innocent person in prison. As we make AI more human, the likelihood of mistakes will increase.

What’s more, IT decision-makers must take into consideration that attackers are utilizing AI and ML as an offensive capability. AI has become an important tool for attackers, and according to Forrester’s Using AI for Evil report, mainstream AI-powered hacking is just a matter of time.

AI can be leveraged for good and for evil, and it is important to understand the technology’s shortcomings and adversarial potential.

The Future of AI in Cybersecurity
Though it is critical to acknowledge AI’s realistic capabilities and its current limitations, it is also important to consider how far AI can take us. Applying AI throughout the threat lifecycle will eventually automate and enhance entire categories of Security Operations Center (SOC) activity. AI has the potential to provide clear visibility into user-based threats and enable increasingly effective detection of real threats.

There are many challenges IT decision-makers face when over-estimating what Artificial Intelligence alone can realistically achieve and how it impacts their security strategies right now. Security leaders must acknowledge these challenges and truths if organizations wish to reap the benefits of AI today and for years to come.

Source: https://www.aithority.com/guest-authors/reality-check-the-benefits-of-artificial-intelligence/

28 Apr 2020

MICROSOFT WANTS TO MINE CRYPTOCURRENCY USING YOUR BRAIN WAVES

Microsoft applied for an unusual new patent that would read users’ brainwaves in exchange for cryptocurrency like Bitcoin.

The patent application, which has yet to be granted, describes a system that would scan a user’s brain activity or other biological signals to make sure they completed a task, such as watching a commercial. The system would then use those signals to mine for cryptocurrency like Bitcoin, PC Magazine reports, as a way to compensate the user.

We’ve Seen Worse
The logistics for how such a transaction would occur remain hazy: the application includes details on how such a system's software may work, but less information on how it would actually be used. But this wouldn't be the first time a tech company tried to patent absurd technology — it's actually a fairly common practice, even though many of the systems described in these patents never get built.

Biological Captcha
Based on what information is available, the system seems ideal for a service like Mechanical Turk, in which workers complete quick tasks — like helping train AI algorithms — for small sums of money.

The idea there, PC Mag reports, is to make the process of proving that someone actually did the work quick and painless — albeit intrusive — instead of taking up time they could spend on the next job.

Source: https://futurism.com/the-byte/microsoft-mine-cryptocurrency-using-your-brain-waves

23 Mar 2020

Researchers Find Captivating New Details In Image of Black Hole

Last April, the international coalition of scientists who run the Event Horizon Telescope (EHT), a network of eight telescopes from around the world, revealed the first-ever image of a black hole.

Now, a team of researchers at the Center for Astrophysics at Harvard have revealed calculations, as detailed in a paper published in the journal Science Advances today, that predict an intricate internal structure within black hole images caused by extreme gravitational light bending.

The new research, they say, could lead to much sharper images when compared to the blurry ones we’ve seen so far.

“With the current EHT image, we’ve caught just a glimpse of the full complexity that should emerge in the image of any black hole,” said Michael Johnson, a lecturer at the Center for Astrophysics, in a statement.

The EHT image was able to catch the black hole’s “photon sphere” or “photon ring,” a region around a black hole where gravity is so overpowering that it forces photons to travel in orbits.

But as it turns out, there’s even more to the image.

“The image of a black hole actually contains a nested series of rings,” Johnson said. “Each successive ring has about the same diameter but becomes increasingly sharper because its light orbited the black hole more times before reaching the observer.”

Until last year, that internal structure of black holes remained shrouded in mystery.

“As a theorist, I am delighted to finally glean real data about these objects that we’ve been abstractly thinking about for so long,” Alex Lupsasca from the Harvard Society of Fellows said in the statement.

These newly discovered substructures could allow for even sharper images in the future. “What really surprised us was that while the nested subrings are almost imperceptible to the naked eye on images — even perfect images — they are strong and clear signals for arrays of telescopes called interferometers,” Johnson added.

“While capturing black hole images normally requires many distributed telescopes, the subrings are perfect to study using only two telescopes that are very far apart,” Johnson said. “Adding one space telescope to the EHT would be enough.”

There might be other ways as well. In November, a team of Dutch astronomers suggested sending two to three satellites equipped with radio imaging technology to observe black holes at five times the sharpness of the last attempt.

Source: https://futurism.com/researchers-take-sharper-black-hole-images

07 Mar 2020

How to Leverage AI to Upskill Employees

Artificial intelligence is the answer to polishing math skills and plugging our workforce pipeline.

 

One of the largest economic revolutions of our time is unfolding around us. Technology, innovation and automation are redrawing the career paths of millions of people. Most headlines focus on the negative, i.e. machines taking our jobs. But in reality, these developments are opening up a world of opportunity for people who can make the move to a STEM career or upskill in their current job. There’s also another part to this story: How AI can help boost the economy by improving how we learn.

In 2018, 2.4 million STEM jobs in the U.S. went unfilled. That’s almost equal to the entire population of Los Angeles or Chicago. It’s a gap causing problems for employers trying to recruit and retain workers, whether in startups, small businesses or major corporations. We just don’t have enough workers.

The Unspoken Barrier 

The barrier preventing new or existing employees from adding to their skill set and filling those unfilled jobs? Math. Calculus, to be specific. It has become a frustrating impediment for many people seeking a STEM career. For college students, the material is so difficult that one-third of them in the U.S. fail the related course or drop it out of frustration. For adults, learning calculus is not always compulsory for the day-to-day of every STEM job, but learning its principles can help sharpen logic and reasoning. Plus, simply understanding how calculus relates to real-world scenarios is helpful in many STEM jobs. Unfortunately, for many people, the thought of tackling any level of math is enough to scare them away from a new opportunity. We need to stop looking at math as a way to filter people out of the STEM pipeline. We need to start looking at it as a way to help more people, including professionals looking to pivot careers.

How AI Can Change How Employees Learn

How do we clear this hurdle and plug the pipeline? Artificial intelligence. We often discuss how AI can be used to help data efficiencies and process automation, but AI can also assist in personal tutoring to get people over the barriers of difficult math. The recently released Aida Calculus app uses AI to create a highly personalized learning experience and is the first of its kind to use a very complex combination of AI algorithms that provide step-by-step feedback on equations and then serve up custom content showing how calculus works in the real world.
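Aida's actual algorithms aren't described here, but as a toy sketch of what step-by-step feedback on an equation can look like, the snippet below checks a proposed derivative symbolically with SymPy and returns a hint; the feedback rule and function name are assumptions for illustration.

```python
# Toy sketch of step-level feedback on a calculus exercise (not Aida's implementation).
import sympy as sp

x = sp.symbols("x")

def check_derivative_step(expr, student_answer):
    """Compare a student's proposed derivative of expr against the symbolic answer."""
    correct = sp.diff(expr, x)
    if sp.simplify(correct - student_answer) == 0:
        return f"Correct: d/dx({expr}) = {correct}"
    # Illustrative feedback rule: nudge toward the most common slip (a missing chain rule)
    return f"Not quite: expected {correct}, got {student_answer}. Check the chain rule."

print(check_derivative_step(sp.sin(x**2), 2 * x * sp.cos(x**2)))  # correct step
print(check_derivative_step(sp.sin(x**2), sp.cos(x**2)))          # missing inner derivative
```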

While the product is important, the vision behind it is much bigger. This is a really impactful application of AI for good. It also shows that math skills can be developed in everyone and technology like AI can change the way people learn difficult subjects. The goal is to engage anyone, be it a student or working adult, who is curious about how to apply math in their daily lives. By making calculus relevant and relatable, we can begin to instill the confidence people need to take on STEM careers, even if those jobs don’t directly use calculus.

Leveraging AI Through Human Development

When people boost their complex math skills or even their general understanding of basic math concepts, there’s a world of opportunity waiting. STEM jobs outearn non-STEM jobs by up to 30 percent in some cases. A 2017 study commissioned by Qualcomm suggested that 5G will create 22 million jobs globally by 2035. The U.S. Labor Department says that IT fields will add half a million new jobs in the next eight years and that jobs in information security will grow by 30 percent. Job growth in STEM is outpacing overall U.S. job growth. At the same time, Pearson’s own Global Learners Survey said that 61 percent of Americans are likely to change careers entirely. It’s a good time for that 61 percent to consider STEM.

To equip themselves for this new economy, people will have to learn how to learn. Whether it's math or any other subject, they'll likely need to study again, and that is hard. But we can use innovation and technology to make the tough subjects a little easier and make the whole learning experience more personalized, helping a whole generation of people take advantage of the opportunity to become the engineers, data analysts and scientists we need.

Source: https://www.entrepreneur.com/article/345502

02 Feb 2020

How you can get your business ready for AI

  • 90% of executives see promise in the use of artificial intelligence.
  • AI set to add $15.7 trillion to global economy.
  • Only 4% planning major deployment of technology in 2020.

They say you have to learn to walk before you can run. It turns out the same rule applies when it comes to the rollout of artificial intelligence.

A new report on AI suggests that companies need to get the basics of the technology right before scaling up its use. In a PwC survey, 90% of executives said AI offers more opportunities than risks, but only 4% plan to deploy it enterprise-wide in 2020, compared with 20% who said they intended to do so in 2019.

Slow and steady wins the race

By 2030, AI could add $15.7 trillion to the global economy. But its manageable implementation is a global challenge. The World Economic Forum is working with industry experts and business leaders to develop an AI toolkit that will help companies understand the power of AI to advance their business and to introduce the technology in a sustainable way.

Focusing on the fundamentals first will allow organizations to lay the groundwork for a future that brings them all the rewards of AI.

Here are five things PwC’s report suggests companies can do in 2020 to prepare.

1. Embrace the humdrum to get things done

One of the key benefits that company leaders expect from investment in AI is the streamlining of in-house processes. The automation of routine tasks, such as the extrication of information from tax forms and invoices, can help companies operate more efficiently and make significant savings.
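As a small, hedged illustration of the kind of routine extraction mentioned above, the sketch below pulls fields out of invoice text with plain regular expressions; production systems typically layer OCR and ML models on top of rules like these, and the field patterns shown are assumptions.

```python
# Minimal sketch of rule-based field extraction from invoice text.
# Real deployments usually combine OCR with ML models; these patterns are illustrative only.
import re

invoice_text = """
Invoice Number: INV-2020-0042
Date: 15/01/2020
Total Due: $1,250.00
"""

patterns = {
    "invoice_number": r"Invoice Number:\s*(\S+)",
    "date": r"Date:\s*([\d/]+)",
    "total_due": r"Total Due:\s*\$([\d,.]+)",
}

fields = {}
for name, pattern in patterns.items():
    match = re.search(pattern, invoice_text)
    fields[name] = match.group(1) if match else None

print(fields)  # {'invoice_number': 'INV-2020-0042', 'date': '15/01/2020', 'total_due': '1,250.00'}
```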

AI can already be used to manage fraud and cybersecurity threats – something that 38% of executives see as a key capability of the technology. For example, AI can recognize unauthorized network entry and identify malicious behaviour in software.

2. Turn training into real-world opportunity

For companies to be ready for AI at scale, they need to do more than just offer training opportunities. Employees have to be able to use the new skills they have learned, in a way that continuously improves performance.

It’s also important to make teams ‘multilingual’, with both tech and non-tech skills integrated across the business, so that colleagues can not only collaborate on AI-related challenges, but also decide which problems AI can solve.

3. Tackle risks and act responsibly

Along with helping employees to see AI not as a threat to their jobs but as an opportunity to undertake higher-value work, companies must ensure they have the processes, tools and controls to maintain strong ethics and make AI easy to understand. In some cases, this might entail collaboration with customers, regulators, and industry peers.

As AI usage continues to grow, so do public fears about the technology in applications such as facial recognition. That means risk management is becoming more critical. Yet not all companies have centralized governance around AI, and that could increase cybersecurity threats, by making the technology harder to manage and secure.

4. AI, all the time

Developing AI models requires a ‘test and learn’ approach, in which the algorithms are continually learning and the data is being refined. That is very different from the way software is traditionally developed, and a different set of tools is needed. Machines learn through the input of data, and more – and better quality – data is key to the rollout of AI.
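A bare-bones sketch of that 'test and learn' loop might look like the following: retrain on newly arrived data, score against a fixed hold-out set, and promote the candidate model only when it improves. The synthetic dataset, metric, and promotion rule are illustrative assumptions, not a prescribed workflow.

```python
# Bare-bones 'test and learn' loop: retrain as data arrives, promote only on improvement.
# The synthetic data, accuracy metric, and promotion rule are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_holdout, y_holdout = X[:400], y[:400]                         # fixed evaluation set
batches = [(X[400 + i*400:800 + i*400], y[400 + i*400:800 + i*400]) for i in range(4)]

best_model, best_score = None, 0.0
seen_X, seen_y = np.empty((0, 10)), np.empty(0, dtype=int)
for X_new, y_new in batches:                                    # new data arriving over time
    seen_X = np.vstack([seen_X, X_new])
    seen_y = np.concatenate([seen_y, y_new])
    candidate = LogisticRegression(max_iter=1000).fit(seen_X, seen_y)
    score = accuracy_score(y_holdout, candidate.predict(X_holdout))
    if score > best_score:                                      # promote only if it improves
        best_model, best_score = candidate, score
    print(f"n_train={len(seen_y)}  holdout_acc={score:.3f}  best={best_score:.3f}")
```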

Some of AI’s most valuable uses come when it works 24/7 as part of broader operational systems, such as marketing or finance. That’s why leaders in the field are employing it across multiple functions and business units, and fully integrating it with broader automation initiatives and data analytics.

5. A business model for the future

It’s worth remembering that despite AI’s growing importance, it is still just one weapon in the business armoury. Its benefit could come through its use as part of a broader automation or business strategy.

Weaving it successfully into a new business model includes a commitment to employee training and understanding return on investment. For now, that investment could be as simple as using robotic process automation to handle customer requests.

AI’s impact may be incremental at first, but its gradual integration into business operations means that game-changing disruption and innovation are not far away.

Source: https://www.weforum.org/agenda/2020/01/artificial-intelligence-predictions-2020-pwc/

28 Jan 2020

Here’s what AI experts think will happen in 2020

But it’s time to let the past go and point our bows toward the future. It’s no longer possible to estimate how much the machine learning and AI markets are worth, because the line between what’s an AI-based technology and what isn’t has become so blurred that Apple, Microsoft, and Google are all “AI companies” that also do other stuff.

Your local electricity provider uses AI and so does the person who takes those goofy real-estate agent pictures you see on park benches. Everything is AI — an axiom that’ll become even truer in 2020.

We solicited predictions for the AI industry over the next year from a panel of experts. Here's what they had to say:

Marianna Tessel, CTO at Intuit

AI and humans will collaborate. AI will not “replace humans”; it will collaborate with humans and enhance how we do things. People will be able to provide higher-level work and service, powered by AI. At Intuit, our platform allows experts to connect with customers to provide tax advice and help small businesses with their books in a more accurate and efficient way, using AI. It helps work get done faster and helps customers make smarter financial decisions. As experts use the product, the product gets smarter, in turn making the experts more productive. This is the decade where, through this collaboration, AI will enhance human abilities and allow us to take our skills and work to a new level.

AI will eat the world in ways we can’t imagine today: AI is often talked about as though it is a Sci-Fi concept, but it is and will continue to be all around us. We can already see how software and devices have become smarter in the past few years and AI has already been incorporated into many apps. AI enriched technology will continue to change our lives, every day, in what and how we operate. Personally, I am busy thinking about how AI will transform finances – I think it will be ubiquitous. Just the same way that we can’t imagine the world before the internet or mobile devices, our day-to-day will soon become different and unimaginable without AI all around us, making our lives today seem so “obsolete” and full of “unneeded tasks.”

We will see a surge of AI-first apps: As AI becomes part of every app, how we design and write apps will fundamentally change. Instead of writing apps the way we have during this decade and then adding AI, apps will be designed from the ground up around AI and will be written differently. Just think of conversational UIs (CUI) and how they create a new navigation paradigm in your app. Soon, a user will be able to ask any question from any place in the app, moving it outside of a regular flow. New tools, languages, practices and methods will also continue to emerge over the next decade.

Jesse Mouallek, Head of Operations for North America at Deepomatic

We believe 2020 will be the year that industries not traditionally known as adopters of sophisticated technologies like AI reverse course. We expect industries like waste management, oil and gas, insurance, telecommunications and other SMBs to take on projects similar to the ones usually developed by tech giants like Amazon, Microsoft and IBM. As the enterprise benefits of AI become more well-known, industries outside of Silicon Valley will look to integrate these technologies.

If companies don’t adapt to the current trends in AI, they could see tough times in the future. Increased productivity, operational efficiency gains, market share and revenue are some of the top-line benefits that companies could either capitalize on or miss out on in 2020, depending on their implementation. We expect to see a large uptick in technology adoption and implementation from companies big and small as real-world AI applications, particularly within computer vision, become more widely available.

We don’t see 2020 as another year of shiny new technology developments. We believe it will be more about the general availability of established technologies, and that’s ok. We’d argue that, at times, true progress can be gauged by how widespread the availability of innovative technologies is, rather than the technologies themselves. With this in mind, we see technologies like neural networks, computer vision and 5G becoming more accessible as hardware continues to get smaller and more powerful, allowing edge deployment and unlocking new use cases for companies within these areas.

Hannah Barnhardt, VP of Product Strategy Marketing at Deluxe Entertainment

2020 is the year AI/ML capabilities will be truly operationalized, rather than companies merely pontificating about their abilities and potential ROI. We’ll see companies in the media and entertainment space deploy AI/ML to more effectively drive investment and priorities within the content supply chain and harness cloud technologies to expedite and streamline traditional services required for going to market with new offerings, whether that be original content or Direct to Consumer streaming experiences.

Leveraging AI toolsets to automatically garner insights from deep catalogs of content will increase efficiency for clients and partners, and help uphold the high-quality content that viewers demand. A greater number of studios and content creators will invest in and leverage AI/ML to conform and localize premium and niche content, thereby reaching more diverse audiences in their native languages.

Tristan Greene, reporter for The Next Web

I’m not an industry insider or a machine learning developer, but I covered more artificial intelligence stories this year than I can count. And I think 2019 showed us some disturbing trends that will continue in 2020. Amazon and Palantir are poised to sink their claws into the government surveillance business during what could potentially turn out to be President Donald Trump’s final year in office. This will have significant ramifications for the AI industry.

The prospect of an Elizabeth Warren or Bernie Sanders taking office shakes the Facebooks and Microsofts of the world to their core, but companies that are already deeply invested in providing law enforcement agencies with AI systems that circumvent citizen privacy stand to lose even more. These AI companies could be inflated bubbles that pop in 2021; in the meantime, they’ll look to entrench with law enforcement over the next 12 months in hopes of surviving a Democrat-led government.

Look for marketing teams to get slicker as AI-washing stops being such a big deal and AI rinsing — disguising AI as something else — becomes more common (i.e., Ring is just a doorbell that keeps your packages safe, not an AI-powered portal for police surveillance, wink-wink).

Here’s hoping your 2020 is fantastic. And, if we can venture a final prediction: stay tuned to TNW because we’re going to dive deeper into the world of artificial intelligence in 2020 than ever before. It’s going to be a great year for humans and machines.

Source: https://thenextweb.com/artificial-intelligence/2020/01/03/heres-what-ai-experts-think-will-happen-in-2020/

11 Jan 2020

Mind-reading technology lets you control tech with your brain — and it actually works

  • CES featured several products that let you control apps, games and devices with your mind.
  • The technology holds a lot of promise for gaming, entertainment and even medicine.
  • NextMind and FocusOne were two of the companies that showed off mind-control technology at CES this year.

 

LAS VEGAS — It’s not the self-driving cars, flying cars or even the dish-washing robots that stick out as the most transformative innovation at this year’s Consumer Electronics Show: It’s the wearable gadgets that can read your mind.

There’s a growing category of companies focused on the “Brain-Computer Interface.” These devices can record brain signals from sensors on the scalp (or even devices implanted within the brain) and translate them into digital signals. This industry is expected to reach $1.5 billion this year, with the technology used for everything from education and prosthetics, to gaming and smart home control.
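As a rough, hedged sketch of the pipeline described above (sensor signal in, digital command out), the snippet below estimates band power from a synthetic scalp signal and thresholds it into a command; the sampling rate, frequency bands, and decision rule are illustrative assumptions, and real headsets use far more sophisticated decoding.

```python
# Toy brain-computer-interface pipeline: signal -> spectral feature -> discrete command.
# The synthetic signal, frequency bands, and threshold are illustrative assumptions only.
import numpy as np

fs = 256                                    # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)                 # two seconds of data
# Synthetic scalp signal: a 10 Hz (alpha-band) oscillation plus noise
signal = 0.8 * np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.randn(t.size)

def band_power(x, fs, lo, hi):
    """Average spectral power of x between lo and hi Hz."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

alpha = band_power(signal, fs, 8, 12)
beta = band_power(signal, fs, 13, 30)

# Illustrative decision rule: relatively strong alpha -> relaxed, otherwise focused
command = "relax_mode" if alpha > 2 * beta else "focus_mode"
print(f"alpha={alpha:.2f}, beta={beta:.2f} -> command: {command}")
```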

 

This isn’t science fiction. I tried a couple of wearables that track brain activity at CES this week, and was surprised to find they really work. NextMind has a headset that measures activity in your visual cortex with a sensor on the back of your head. It translates the user’s decision of where to focus his or her eyes into digital commands.

“You don’t see with your eyes, your eyes are just a medium,” NextMind CEO Sid Kouider said. “Your vision is in your brain, and we analyze your vision in your brain and we can know what you want to act upon and then we can modify that to basically create a command.”

Kouider said that this is the first time there’s been a brain-computer interface outside the lab, and the first time you can theoretically control any device by focusing your thoughts on it.

Wearing a NextMind headset, I could change the color of a lamp — red, blue and green — by focusing on boxes lit up with those colors. The headset also replaced a remote control. Staring at a TV screen, I could activate a menu by focusing on a triangle in a corner of the screen. From there, I could change the channel, mute or pause video, just by focusing on a triangle next to each command.

“We have several use cases, but we are also targeting entertainment and gaming because that’s where this technology is going to have its best use,” Kouider said. “The experience of playing or applying it on VR for instance or augmented reality is going to create some new experiences of acting on a virtual world.”

 

NextMind’s technology isn’t available to consumers yet, but the company is selling a $399 developer kit in the hope that other companies will create new applications.

“I think it’s going to still take some time until we nail … the right use case,” Kouider said. “That’s the reason we are developing this technology, to have people use the platform and develop their own use cases.”

Another company focused on the brain-computer interface, BrainCo, has the FocusOne headband, with sensors on the forehead measuring the activity in your frontal cortex. The “wearable brainwave visualizer” is designed to measure focus, and its creators want it to be used in schools.

“FocusOne is detecting the subtle electrical signals that your brain is producing,” BrainCo President Max Newlon said. “When those electrical signals make their way to your scalp, our sensor picks them up, takes a look at them and determines, ‘Does it look like your brain is in a state of engagement? Or does it look like your brain is in a state of relaxation?’”

Wearing the headband, I tried a video game with a rocket ship. The harder I focused, the faster the rocket ship moved, increasing my score. I then tried to get the rocket ship to slow down by relaxing my mind. A light on the front of the headband turns red when your brain is intensely focused, yellow if you’re in a relaxed state and blue if you’re in a meditative state. The headbands are designed to help kids learn to focus their minds, and to enable teachers to understand when kids are zoning out. The headband costs $350 for schools and $500 for consumers. The headset comes with software and games to help users understand how to focus and meditate.
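A tiny sketch of the state-to-light mapping described above, assuming a normalized focus score between 0 and 1; the cut-off values are made up for illustration and are not BrainCo's actual thresholds.

```python
# Tiny sketch of mapping a focus score to the headband light colors described above.
# The 0-1 score and the cut-offs are illustrative assumptions, not BrainCo's thresholds.
def headband_color(focus_score: float) -> str:
    if focus_score >= 0.7:
        return "red"      # intensely focused
    if focus_score >= 0.4:
        return "yellow"   # relaxed
    return "blue"         # meditative

for score in (0.9, 0.5, 0.2):
    print(f"focus={score} -> light={headband_color(score)}")
```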

BrainCo also has a prosthetic arm coming to market later this year, which will cost $10,000 to $15,000, less than half the cost of an average prosthetic. BrainCo’s prosthetic detects muscle signals and feeds them through an algorithm that can help it operate better over time, Newlon said.

“The thing that sets this prosthetic apart, is after enough training, [a user] can control individual fingers and it doesn’t only rely on predetermined gestures. It’s actually like a free-play mode where the algorithm can learn from him, and he can control his hands just like we do,” Newlon said.

Source: CNBC

06 Jan 2020

Tech trends 2020: New spacecraft and bendy screens

If your ambition is to fly into space – and you’ve got plenty of spare cash – then 2020 could be an exciting year.

If space travel is not really your thing, but you would like a much bigger screen on your mobile phone, then 2020 might also have some tech for you.

But if you think there are already too many phones out there and the technology industry needs to be less wasteful, well some tech companies might catch up with your thinking.

Here’s a little taster of what might be coming in the next twelve months.

Crewed space missions

2020 is going to be a “pivotal year” for space travel, according to Guy Norris, a senior editor at Aviation Week & Space Technology.

Since Nasa retired the Space Shuttle in 2011, the US has relied on Russian spacecraft to transport astronauts to the International Space Station.

That could all change in 2020 when, if all goes to plan, two US-built spacecraft should start carrying crew.

Boeing’s CST-100 Starliner, which can carry up to seven astronauts into orbit, is due for its first test flight today before the first manned flight, likely to be in 2020.

Meanwhile the SpaceX Dragon capsule will go through some final tests in early 2020, and if they all go well then it too would be ready for a crewed mission.

Other systems, designed to reach near-Earth space, could also reach milestones in 2020. Blue Origin, owned by Amazon billionaire Jeff Bezos, could be ready to take tourists on its New Shepard suborbital rocket.

Virgin Galactic could also be ready in 2020 to take passengers into space, more than a decade later than founder Richard Branson originally hoped.

It’s reported that more than 600 people have put down deposits for a Virgin Galactic flight, with tickets costing $250,000 (£195,000).

“It’s finally delivery time for a lot of these long promised programmes and a chance for a whole range of technologies to really prove themselves for the first time,” says Mr Norris.

Technology and the environment

Protests by Extinction Rebellion have helped move climate change up the agenda for technology companies.

Among those that will be under pressure are mobile phone makers. It’s estimated there are 18 billion phones lying around unused worldwide. With around 1.3 billion phones sold in 2019, that number is growing all the time.

Mobile phone makers will be under pressure to make their production processes greener and their phones more easily repairable.

The same will go for the makers of other consumer goods including TVs, washing machines and vacuum cleaners.

Also watch the companies that provide mobile phone services. Vodafone has already promised that in the UK by 2023 its networks will all run on sustainable energy sources. Others are likely to follow suit.

Business travel is under pressure as well. Ben Wood, an analyst at CCS Insight, says it will become “socially unacceptable” to fly around the world for meetings, and firms will switch to virtual meetings.

There could be green initiatives from the cloud computing industry as well. Its facilities, which house thousands of computer servers, use huge amounts of power.

Flexible displays

The launch of Samsung’s first foldable phone in April did not go smoothly. Several reviewers broke the screens and the company had to make some rapid improvements before it went on sale in September.

Motorola had a more successful launch of its new Razr, although some reviewers complained about the price. But this is unlikely to hold the market back. Samsung is expected to launch other devices with flexible displays next year – possibly a tablet.

TCL, the second biggest maker of TVs in China, has also promised to launch its first mobile foldable device in 2020 and then other products quickly after that.

It is betting big on the market, having invested $5.5bn in developing flexible displays.

Analysts say that screens will be incorporated into all sorts of surfaces. Smart speakers might have wrap-around displays, watch-like devices will have straps with displays and fridge doors might have large screens.

Super-fast mobile

We can expect the rollout of high-speed mobile phone networks to continue. By the end of 2019 around 40 networks in 22 countries were offering 5G service.

By the end of 2020 that number will have more than doubled to around 125 operators, says Kester Mann at CCS Insight.

“There could be an interesting development in the way 5G contracts are priced. A 5G contract without a phone will cost around £30 a month and for that you’re likely to get unlimited amounts of data.”

But analysts say that next year we may see prices based on the speed of the service you want – a bit like the way home broadband is already priced.

Vodafone is already offering contracts based on speed in the UK. Also in the UK, the network Three is likely to push its 5G offering as an alternative to broadband at home, analysts say. That might appeal to people who move around a lot – students for example – and don’t want a fixed line service.

Quantum computing

Will next year be another big one for quantum computing, the technology which exploits the baffling but powerful behaviour of tiny particles such as electrons and photons?

In October Google said that its quantum computer had performed a task in 200 seconds that the fastest supercomputer would have taken 10,000 years to complete. There was some quibbling over the achievement, but experts say it was a big moment.
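For a sense of scale, that comparison implies a speedup of roughly a billion-fold; a quick sketch of the arithmetic (treating the 10,000 years as calendar years) is below.

```python
# Rough arithmetic behind the claim: 10,000 years on a supercomputer vs 200 seconds.
supercomputer_seconds = 10_000 * 365.25 * 24 * 3600   # 10,000 years expressed in seconds
quantum_seconds = 200
print(f"implied speedup: {supercomputer_seconds / quantum_seconds:.2e}x")  # ~1.6e+09x
```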

“It’s a fantastic milestone,” says Philipp Gerbert, a member of the deep tech group at consultancy firm BCG: “It’s clear they exceeded the classical computer, by what margin you can debate. They disproved some lingering doubts.”

Mr Gerbert thinks other leaders in the field – IBM, Rigetti and IonQ – could also clear that hurdle: “They all have excellent teams, one or two will reach a similar stage over the next year.”

Once the technology is proven, quantum computers could spur breakthroughs in chemistry, pharmaceuticals and engineering.

Google has also promised to make its quantum computer available for use by outsiders in 2020, but has not provided any details yet.

“Clearly people would love to get access to that,” Mr Gerbert says.

Source: BBC

05 Jan 2020

Tech Tent – tech trends for 2020

Will we start the journey to a better, kinder internet? Which countries are best placed to win the AI race? And should Ivanka Trump be speaking at a tech show? Just some of the questions we address in the first edition of Tech Tent this year.

Last month, the creator of the World Wide Web, Sir Tim Berners-Lee, told us of his plan to put it back on the right track. His Contract for the Web aims to get companies, countries and individuals to work together to combat cyber-bullying, misinformation and other online harms.

Catherine Miller of the think tank Doteveryone, which describes its mission as championing responsible technology for a fairer future, gives us her assessment of how likely it is that we will make the web a better place in 2020. She stresses that better regulation will be key, changing the economic incentives that mean the tech giants fight to keep people hooked to their platforms, and reward damaging behaviour.

When it comes to the race to build what is arguably the key technology of our times – artificial intelligence – the consensus has been that the United States is in the lead, but China is catching up fast. Now a new global AI index produced by the online news site Tortoise has come up with a more nuanced picture.

It found that, yes, the US and China were one and two in AI, with the UK in third place. But Alexandra Mousavizadeh, the data scientist who led the project for Tortoise Intelligence, tells us that China was much further behind than they had expected.

It scored well in research and development, but its 18th-place ranking for having people with the right skills held it back. “This race is going to be won in many different ways,” says Ms Mousavizadeh, stressing that the free-market, bottom-up approach of the US had proved very fruitful so far, but the top-down Chinese strategy also has its strengths.

But she says that around the world a government strategy for developing human capital – “preparing a workforce for working with and being part of AI driven growth” – will be key.

We also look less far ahead – to CES, the huge annual gadget-fest which opens in Las Vegas on Tuesday. No doubt we will see all sorts of products promising to use AI to give consumers better experiences.

But one of the keynote speakers looks likely to provide the biggest headlines from the show. On the opening day, Ivanka Trump will be discussing the future of work in a session with the Consumer Technology Association’s CEO Gary Shapiro. The invitation to the President’s daughter has sparked controversy, especially as female keynote speakers from the tech industry have been thin on the ground in previous years.

Mr Shapiro tells Tech Tent that the show is about more than gadgets. It addresses key issues such as the impact of automation on work – and he says that, as co-chair of the American Workforce Advisory Board, Ms Trump has significant things to contribute to this debate.

But back to technology. I have just been looking back at a blogpost I wrote on New Year’s Eve 2009 as I prepared to head off to the 2010 CES in Las Vegas.

I was very excited about a British firm called Plastic Logic that was going to unveil a radical new e-reader. “It could be one of the show’s stand-out products,” I wrote, “or it could end up buried under an avalanche of hype about a forthcoming rival device from a better-known firm.”

That rival device turned out to be Apple’s iPad, unveiled later that month, and Plastic Logic’s Que device did indeed end up dead and buried.

So, expect to see some startling new products emerging from Las Vegas in the next few days – we are promised a talking frying-pan and a self-driving sofa – but world-changing devices are few and far between, and are likely to be unveiled elsewhere.

Source: BBC

04 Nov 2019

We Need AI That Is Explainable, Auditable, and Transparent

Every parent worries about the influences our children are exposed to. Who are their teachers? What movies are they watching? What video games are they playing? Are they hanging out with the right crowd? We scrutinize these influences because we know they can affect, for better or worse, the decisions our children make.

Just as we concern ourselves with who’s teaching our children, we also need to pay attention to who’s teaching our algorithms. Like humans, artificial intelligence systems learn from the environments they are exposed to and make decisions based on biases they develop. And like our children, we should expect our models to be able to explain their decisions as they develop.

As Cathy O’Neil explains in Weapons of Math Destruction, algorithms often determine what college we attend, if we get hired for a job, if we qualify for a loan to buy a house, and even who goes to prison and for how long. Unlike human decisions, these mathematical models are rarely questioned. They just show up on somebody’s computer screen and fates are determined.

In some cases, the errors of algorithms are obvious, such as when Dow Jones reported that Google was buying Apple for $9 billion and the bots fell for it or when Microsoft’s Tay chatbot went berserk on Twitter — but often they are not. What’s far more insidious and pervasive are the more subtle glitches that go unnoticed, but have very real effects on people’s lives.

Once you get on the wrong side of an algorithm, your life immediately becomes more difficult. Unable to get into a good school or to get a job, you earn less money and live in a worse neighborhood. Those facts get fed into new algorithms and your situation degrades even further. Each step of your descent is documented, measured, and evaluated.

Consider the case of Sarah Wysocki, a fifth grade teacher who — despite being lauded by parents, students, and administrators alike — was fired from the D.C. school district because an algorithm judged her performance to be sub-par. Why? It’s not exactly clear, because the system was too complex to be understood by those who fired her.

Make no mistake, as we increasingly outsource decisions to algorithms, the problem has the potential to become even more Kafkaesque. It is imperative that we begin to take the problem of AI bias seriously and take steps to mitigate its effects by making our systems more transparent, explainable, and auditable.

Sources of Bias

Bias in AI systems has two major sources: the data sets on which models are trained, and the design of the models themselves. Biases in the data sets on which algorithms are trained can be subtle, such as when smartphone apps are used to monitor potholes and alert authorities to contact maintenance crews. That may be efficient, but it’s bound to undercount poorer areas where fewer people have smartphones.

In other cases, data that is not collected can affect results. Analysts suspect that’s what happened when Google Flu Trends predicted almost twice as many flu cases in 2013 as there actually were. What appears to have happened is that increased media coverage led to more searches by people who weren’t sick.

Yet another source of data bias arises when human biases carry over into AI systems. For example, biases in the judicial system affect who gets charged and sentenced for crimes. If that data is then used to predict who is likely to commit crimes, those biases will carry over. In other cases, humans are used to tag data and may introduce bias directly into the system.


This type of bias is pervasive and difficult to eliminate. In fact, Amazon was forced to scrap an AI-powered recruiting tool because it could not remove gender bias from the results. The tool unfairly favored men because the training data taught the system that most of the firm’s previously hired employees who were viewed as successful were male. Even when any specific mention of gender was eliminated, certain words that appeared more often in male resumes than in female resumes were identified by the system as proxies for gender.
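As a toy, hedged illustration of the proxy problem described above: even with the protected attribute dropped from the inputs, a strongly correlated feature lets a model reconstruct the biased pattern. The data below is synthetic and purely illustrative.

```python
# Toy illustration of proxy features: the protected attribute is excluded from the inputs,
# yet a correlated feature still lets the model reproduce the biased historical labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)                  # protected attribute (never given to the model)
proxy = gender ^ (rng.random(n) < 0.1)          # feature that agrees with gender ~90% of the time
neutral = rng.random(n)                         # genuinely uninformative feature
hired = gender                                  # biased historical labels mirror the attribute

X = np.column_stack([proxy, neutral])           # note: gender itself is excluded
model = LogisticRegression().fit(X, hired)
print("accuracy predicting the biased label without the gender column:",
      round(model.score(X, hired), 3))          # ~0.9, driven entirely by the proxy
```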

A second major source of bias results from how decision-making models are designed. For example, if a teacher’s ability is evaluated based on test scores, then other aspects of performance, such as taking on children with learning differences or emotional problems, would fail to register, or might even unfairly penalize the teacher. In other cases, models are constructed according to what data is easiest to acquire, or the model is overfit to a specific set of cases and is then applied too broadly.

Overcoming Bias

With so many diverse sources of bias, we do not think it is realistic to believe we can eliminate it entirely, or even substantially. However, what we can do is make our AI systems more explainable, auditable, and transparent. We suggest three practical steps leaders can take to mitigate the effects of bias.

First, AI systems must be subjected to vigorous human review. For example, one study cited by a White House report during the Obama administration found that while machines had a 7.5% error rate in reading radiology images and humans had a 3.5% error rate, when humans combined their work with machines the error rate dropped to 0.5%.

Second, much like banks are required by law to “know their customer,” engineers that build systems need to know their algorithms. For example, Eric Haller, head of Datalabs at Experian told us that unlike decades ago, when the models they used were fairly simple, in the AI era, his data scientists need to be much more careful. “In the past, we just needed to keep accurate records so that, if a mistake was made, we could go back, find the problem and fix it,” he told us. “Now, when so many of our models are powered by artificial intelligence, it’s not so easy. We can’t just download open-source code and run it. We need to understand, on a very deep level, every line of code that goes into our algorithms and be able to explain it to external stakeholders.”

Third, AI systems, and the data sources used to train them, need to be transparent and available for audit. Legislative frameworks like GDPR in Europe have made some promising first steps, but clearly more work needs to be done. We wouldn’t find it acceptable for humans to be making decisions without any oversight, so there’s no reason why we should accept it when machines make decisions.

Perhaps most of all, we need to shift from a culture of automation to one of augmentation. Artificial intelligence works best not as some sort of magic box you use to replace humans and cut costs, but as a force multiplier that you use to create new value. By making AI more explainable, auditable and transparent, we can not only make our systems more fair but also make them vastly more effective and more useful.

Source: https://hbr.org/2019/10/we-need-ai-that-is-explainable-auditable-and-transparent