Author: Manahel Thabet

11 Jan 2020

Mind-reading technology lets you control tech with your brain — and it actually works

  • CES featured several products that let you control apps, games and devices with your mind.
  • The technology holds a lot of promise for gaming, entertainment and even medicine.
  • NextMind and FocusOne were two of the companies that showed off mind-control technology at CES this year.

 

LAS VEGAS — It’s not the self-driving cars, flying cars or even the dish-washing robots that stick out as the most transformative innovation at this year’s Consumer Electronics Show: It’s the wearable gadgets that can read your mind.

There’s a growing category of companies focused on the “Brain-Computer Interface.” These devices can record brain signals from sensors on the scalp (or even devices implanted within the brain) and translate them into digital signals. This industry is expected to reach $1.5 billion this year, with the technology used for everything from education and prosthetics, to gaming and smart home control.

 

This isn’t science fiction. I tried a couple of wearables that track brain activity at CES this week, and was surprised to find they really work. NextMind has a headset that measures activity in your visual cortex with a sensor on the back of your head. It translates the user’s decision of where to focus his or her eyes into digital commands.

“You don’t see with your eyes, your eyes are just a medium,” NextMind CEO Sid Kouider said. “Your vision is in your brain, and we analyze your vision in your brain and we can know what you want to act upon and then we can modify that to basically create a command.”

Kouider said that this is the first time there’s been a brain-computer interface outside the lab, and the first time you can theoretically control any device by focusing your thoughts on it.

Wearing a NextMind headset, I could change the color of a lamp — red, blue and green — by focusing on boxes lit up with those colors. The headset also replaced a remote control. Staring at a TV screen, I could activate a menu by focusing on a triangle in a corner of the screen. From there, I could change the channel, mute or pause the video just by focusing on a triangle next to each command.
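That interaction pattern — act only when the user deliberately holds their attention on a command target — can be sketched in a few lines. The snippet below is a hypothetical illustration, not NextMind’s actual SDK: the target labels, confidence scores and `send_command` stub are all assumptions, and the dwell-time check is a common way such interfaces avoid firing commands from a stray glance.

```python
# Hypothetical sketch: turning a decoded "focus target" into a device command.
# The labels, confidence values and send_command() stub are illustrative
# assumptions, not NextMind's actual API.
import time

COMMANDS = {
    "triangle_channel": "CHANNEL_UP",
    "triangle_mute": "MUTE",
    "triangle_pause": "PAUSE",
}
DWELL_SECONDS = 1.5      # how long focus must be held before acting
CONFIDENCE_FLOOR = 0.7   # ignore weak, ambiguous decodes

def send_command(command: str) -> None:
    print(f"-> sending {command} to the TV")

def run(decoder):
    """`decoder` yields (label, confidence) pairs from the headset, e.g. every 100 ms."""
    current, held_since = None, None
    for label, confidence in decoder:
        if confidence < CONFIDENCE_FLOOR or label not in COMMANDS:
            current, held_since = None, None
            continue
        if label != current:
            current, held_since = label, time.monotonic()
        elif time.monotonic() - held_since >= DWELL_SECONDS:
            send_command(COMMANDS[label])
            current, held_since = None, None  # one sustained glance fires one command
```

In a real system the decoder would stream continuously from the headset’s software; here it is just a parameter so the logic stays self-contained.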

“We have several use cases, but we are also targeting entertainment and gaming because that’s where this technology is going to have its best use,” Kouider said. “The experience of playing or applying it on VR for instance or augmented reality is going to create some new experiences of acting on a virtual world.”

 

NextMind’s technology isn’t available to consumers yet, but the company is selling a $399 developer kit in the hope that other companies will create new applications.

“I think it’s going to still take some time until we nail … the right use case,” Kouider said. “That’s the reason we are developing this technology, to have people use the platform and develop their own use cases.”

Another company focused on the brain-computer interface, BrainCo, has the FocusOne headband, with sensors on the forehead measuring the activity in your frontal cortex. The “wearable brainwave visualizer” is designed to measure focus, and its creators want it to be used in schools.

“FocusOne is detecting the subtle electrical signals that your brain is producing,” BrainCo President Max Newlon said. “When those electrical signals make their way to your scalp, our sensor picks them up, takes a look at them and determines, ‘Does it look like your brain is in a state of engagement? Or does it look like your brain is in a state of relaxation?’”

Wearing the headband, I tried a video game with a rocket ship. The harder I focused, the faster the rocket ship moved, increasing my score. I then tried to get the rocket ship to slow down by relaxing my mind. A light on the front of the headband turns red when your brain is intensely focused, yellow if you’re in a relaxed state and blue if you’re in a meditative state. The headbands are designed to help kids learn to focus their minds, and to enable teachers to understand when kids are zoning out. The headband costs $350 for schools and $500 for consumers. The headset comes with software and games to help users understand how to focus and meditate.
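The feedback loop in that demo is simple to picture: a single engagement score rises and falls with measured brain activity, and the game and the light are driven off it. The sketch below is a minimal illustration of that idea under assumed numbers — the score range, thresholds and speed mapping are invented for the example, not BrainCo’s published values.

```python
# Illustrative sketch of a focus-driven feedback loop like the FocusOne demo:
# an engagement score in [0, 1] drives the rocket's speed and the headband light.
# The thresholds and scaling are assumptions, not BrainCo's actual values.

def led_color(engagement: float) -> str:
    """Map an engagement score to the three light states described above."""
    if engagement >= 0.65:
        return "red"      # intensely focused
    if engagement >= 0.35:
        return "yellow"   # relaxed
    return "blue"         # meditative

def rocket_speed(engagement: float, max_speed: float = 10.0) -> float:
    """Harder focus means a faster rocket; relaxing slows it down."""
    return max_speed * max(0.0, min(1.0, engagement))

# A few decoded samples arriving from the headband:
for sample in (0.2, 0.5, 0.9):
    print(f"score={sample:.1f} -> light={led_color(sample)}, speed={rocket_speed(sample):.1f}")
```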

BrainCo also has a prosthetic arm coming to market later this year, which will cost $10,000 to $15,000, less than half the cost of an average prosthetic. BrainCo’s prosthetic detects muscle signals and feeds them through an algorithm that can help it operate better over time, Newlon said.

“The thing that sets this prosthetic apart, is after enough training, [a user] can control individual fingers and it doesn’t only rely on predetermined gestures. It’s actually like a free-play mode where the algorithm can learn from him, and he can control his hands just like we do,” Newlon said.

Source: CNBC

09 Jan 2020

Before we augment people with tech, we’ll need proper rules

New technologies – from artificial intelligence to synthetic biology – are set to alter the world, the human condition, and our very being in ways that are hard to imagine. Discussion of these developments is, as a rule, limited to individual values. But it is also crucial to talk about the collective human values that we wish to guarantee in our intimately technological society. That puts an important political question on the table: how do we develop and implement human enhancement technologies in a socially responsible way?

During the last few decades, the human being has become an increasingly acceptable object of study and technological intervention. We are an engineering project ourselves. An important engine behind this development is the combination of nano-, bio-, information, and cognitive technology. This so-called NBIC convergence is creating a new wave of applications, consisting in large part of intimate technologies capable of monitoring, analyzing, and influencing our bodies and behavior. In essence, the NBIC convergence means a steadily more profound interaction between the natural sciences (nano and info) and the life sciences (bio and cogno). This interaction leads to two megatrends: “Biology becomes technology” and “technology becomes biology.”

In the natural sciences, a revolution has occurred in the area of materials. Where in the seventies we could research and manufacture materials only at the micro-scale, we have now learned to do so at the nanoscale. A DNA strand, for example, is almost two nanometers (or two-millionths of a millimeter) thick. Nanotechnology laid the groundwork for the computer revolution. In turn, those computers make it possible to make better materials and machines. Nanotechnology and information technology thus spur each other on. Digitization makes it possible to gather large amounts of data about the material, biological and social world, in order to analyze and apply it. Consider the self-driving car that makes use of digital maps and adds new information to those maps with every meter traveled. In this way, a cybernetic loop arises between the physical and digital worlds.

Living organisms, like the human body, are seen more and more as measurable, analyzable, and manufacturable

The above developments in the natural sciences stimulate the life sciences, such as genetics, medicine, and neuroscience. Modern equipment, from DNA chips to MRI scans, offers countless opportunities to investigate and intervene in body and brain. This leads to the statement that “biology is increasingly becoming technology.” That means that living organisms, like the human body, are seen more and more as measurable, analyzable, and manufacturable. Germline technology is a typical example of this trend. In the summer of 2017, an American research team succeeded for the first time in using CRISPR-Cas9 technology to repair a hereditary disorder in the DNA of a (viable) human embryo.

In turn, insights from the life sciences inspire the design of new types of devices: think of DNA computers and self-repairing materials. Simulating the workings of the brain in hardware and software is, for instance, an important goal of the large-scale European Human Brain Project, in which the European Commission is investing a billion euros over ten years. This leads to the statement that “technology is increasingly becoming biology.” Engineers increasingly attempt to build qualities typical of living creatures, such as self-healing, reproduction, and intelligence, into technology. Examples of this second trend are artificial intelligence and android social robots.

The trends “biology becomes technology” and “technology becomes biology,” when applied to the human being, ensure that humans and technology are increasingly merging with each other. The Rathenau Instituut therefore speaks of an intimate technological revolution.

Consider technologies external to our bodies too

The trend “biology becomes technology” drives the debate over “human enhancement.” Traditionally, this debate focuses on invasive medical technologies that work inside the human body. Consider psycho-pharmaceuticals like methylphenidate (Ritalin), which are used to suppress powerful behavioral impulses and improve the capacity of our working memory, or modafinil, which can help make us more alert and thoughtful. Neurotechnologies like deep brain stimulation and other brain implants, and biotechnologies like synthetic blood substitutes, artificial retinas, gene therapy, and germline modification, are also commonly cited examples in discussions about human enhancement.

What does it mean to be human in the 21st century? That question also pertains to the trend “technology becomes biology,” that is, technologies outside the body that have an impact on people’s physical, mental, and social achievements. One example is the Tactical Assault Light Operator Suit (TALOS), an exoskeleton developed by the US Army to make soldiers stronger and less vulnerable to bullets. Besides that, consider persuasive technology: information technology designed to influence human behavior. Think for example of smartphone apps giving people advice on what (not) to eat, on their driving, and on how they should handle social relations or money. Or a smart bracelet that monitors perspiration and heart rate and vibrates if the wearer displays aggression. The wearer has learned by means of a role-playing game that aggressive behavior doesn’t pay off. Consequently, it is expected that he or she will avoid similar behavior in the real world. Through EEG neurofeedback, people can also get insight into their brain activity and learn to influence it in order to change their behavior.

Intimate technologies offer opportunities for human enhancement, but can also lead to essential changes in human skills and the way we communicate with one another.

The above technologies, working outside the body, raise questions about autonomy and informed consent: are people in “smart” environments really able to make informed decisions? When does the concept of technological paternalism become relevant? Can persuasive technology further weaken an already weak will? Is it morally permissible to influence people’s behavior – even for the better – without their knowledge? Just like invasive technologies, non-invasive technologies raise questions about privacy, as well as bodily and mental integrity. In the case of many persuasive technologies, you have to give away a lot of your data in order to improve yourself. Do users really remain in control of their own data? Do we have the right to remain anonymous, to opt out of being measured, analyzed, and coached? And how could we, in a world full of sensors? The rise of facial and emotion recognition, in particular, makes this a pressing question.

People can voluntarily insert the above invasive and non-invasive technologies into their bodies and lives, for instance, to become stronger or more attractive. But technology can also have unintended side-effects. Through the increasingly intensive use of technology, our abilities begin to change. We develop new competencies (a phenomenon called “reskilling” or “upskilling”), such as all kinds of digital skills. Other competencies might be reduced (“deskilling”). There is, for example, a body of research appearing to indicate that our social skills, such as empathy, are crumbling through excessive computer use. Intimate technologies, then, offer opportunities for human enhancement, but can also lead to essential changes in human skills and the way we communicate with one another. Such changes in the human condition transcend the level of the individual. They touch upon collective questions and values and demand public debate and, where necessary, political consideration.

Paying attention to collective values

The current debate on human enhancement, though, largely limits itself to individual goals. Examples of classic questions are: is human enhancement an individual right? Can people decide for themselves whether they want technological enhancements? In The Techno-Human Condition, Braden Allenby and Daniel Sarewitz argue that such an approach is inadequate. They suggest that the debate over the impact of human enhancement ought to be conducted on the following three levels of complexity:

  1. The direct impact of a single technology;
  2. The way in which technology influences a socio-technological system and the social and cultural patterns affected by it;
  3. The impact of technology on a global level.

Take the car as an example. The car, in principle, gets you from A to B faster than a bike would (level 1 reasoning). But if many people drive cars, the bike can sometimes be a faster option in the city (level 2 reasoning). On a global scale, the rise of the car has led to a variety of important developments, such as the development of the oil economy, Fordism (the model of mass production and consumption), and climate change. Allenby and Sarewitz posit that the current debate over human enhancement frequently remains on the instrumental level. It revolves especially around the question of whether people have the right, on the basis of free choice, to opt into technologies designed to enhance their bodies and minds. Contrary to what transhumanists often suppose, they show that – just as the car isn’t faster than the bike under every circumstance – the use of human enhancement technology at the individual level doesn’t straightforwardly lead to a better individual quality of life, let alone to a better society. The application of human enhancement technology will frequently be driven by economic or military motives (level 2 reasoning). Such a scenario complicates the issue of individual free choice, because in that case, “The posthuman person is not a self-made man, but a person designed by others.”

The posthuman person is not a self-made man, but a person designed by others.

The mass deployment of human enhancement technology will also have effects – although hard to predict – on a global level. In Homo Deus, Harari sketches two (parallel) long-term scenarios: first, the arrival of the physically and mentally enhanced “superman” (Homo Deus) and a division between supermen and normal people (level 3 reasoning). According to Harari, in the long term, this could lead to the abandonment of the principle of equality that forms the basis of the Universal Declaration of Human Rights. In addition to this “biology becomes technology” scenario, Harari presents a “technology becomes biology” scenario. He anticipates the rise of “dataism,” in which humanity embeds itself in an Internet-of-All-Things and allows itself to be guided purely by AI-generated advice dispensed by computers. In this scenario, humanity has given up all its privacy, autonomy, individuality, and consequently democracy, which is based on personal political choices. Although such scenarios are speculative, they show us which important issues are at stake and show that it is important to look (far) beyond the individual, instrumental level.

The Dutch discussion of germline technology shows that this often does not happen. So far, collective interests have played a negligible role in that debate, despite the fact that CRISPR modifications to the DNA of an embryo are irreversible and heritable by future generations. In the current debate, the pragmatic approach we know from the medical-ethical regime still dominates. First, a lot of attention is paid to the international position of the Netherlands: the country doesn’t want to fall behind as a knowledge economy. Second, there is a special focus on the health benefits germline modification can deliver for the individual in question, with a traditional risk-benefit analysis at the center. Third, significant emphasis is placed on strengthening reproductive autonomy: the opportunity germline modification offers prospective parents with a hereditary condition to have a genetically healthy child of their own.

But germline modification also raises questions that do not fit neatly within the framework of medical-ethical principles oriented towards safety, informed consent, and reproductive autonomy. In terms of collective values and international human rights, there should also be a place in the debate for the notion that the human genome is our common heritage, and thus our collective property.

Technological citizenship

New NBIC technologies are set to alter the world, the human condition, and our very being beyond our imagination. Above, we argued that in relation to human enhancement we must consider both invasive medical technologies (the trend “biology becomes technology”) and technologies outside the body that nevertheless have an impact on people’s bodily, mental, and social performance (the trend “technology becomes biology”). Futurist thinkers from Harari to Aldous Huxley and Raymond Kurzweil show us what is potentially at stake this century: radical improvement of human capacities and choices, division between “natural” and “enhanced” humans, the abolition of the individual and, in its wake, democracy. This puts a crucial political question on the table: how can we develop and implement human enhancement technology in a societally responsible way?

Technological citizenship is the collection of rights and duties that makes it possible for citizens to profit from the blessings of technology and protects them against the attendant risks.

To give direction to that potentially radical transition, a democratic search for shared moral principles is necessary, principles that can set the fusion of human and technology off on the right track. An absolute condition for that collective search is a well-developed “technological citizenship” for all citizens. Technological citizenship is the collection of rights and duties that makes it possible for citizens to profit from the blessings of technology and protects them against the attendant risks. It means understanding how statistical results, (genetic) profiling and self-learning algorithms work, seeing how that affects us, and being prepared to defend against unwanted influences and choose (potentially non-technological) alternatives where necessary. Besides, it is important that citizens have the option of participating in the decision-making process regarding technology at every stage of development, from research to application. Technological citizenship emancipates the regular citizen in relation to the experts and developers of technology.

The role of institutions

Education plays a central role in the promotion of technological citizenship, and that begins with primary and secondary education. Here lies a clear role for the government. In April 2017, the Dutch House of Representatives approved a curriculum revision prepared by Platform Onderwijs2032 (Education2032), which adds two new fields to the curriculum: digital literacy and citizenship. In 2018, development teams set to work making those fields a reality. It would be good for the two development teams to work in close cooperation, taking into account the fact that citizenship in a technological culture only has meaning if we can engage in an informed discussion about the effect of technology on our private lives and our society.

But education is not enough. To make their citizenship a reality, people need institutions. Without suitable administrative institutions, technological citizenship is an empty shell. It must be possible for rights and duties to be democratically demanded, fixed, and implemented. Individuals, then, can only be considered true technological citizens if they know themselves to be protected by an optimally equipped system of governance. The following four components are crucial to this: 1) rights and compliance monitoring, 2) public debate, 3) political vision, and 4) socially responsible companies.

Robots should not replace human relationships but improve them, whether we are talking about care for the elderly or the upbringing of children

First, citizens must be able to appeal to fundamental human rights suitable to the time we live in. At the request of the Parliamentary Assembly of the Council of Europe, the guardian of human rights in Europe, the Rathenau Instituut researched how robotization, artificial intelligence, and virtualization could challenge our current conception of human rights. The Rathenau Instituut proposed, among other things, two new human rights. The first is the right not to be measured, analyzed or coached: people must have the right not to be surveilled or covertly influenced, and to evade continuous algorithmic analysis. The second is the right to meaningful human contact within caregiving: robots should not replace human relationships but improve them, whether we are talking about care for the elderly or the upbringing of children. Already-existing rights and duties should be put into practice in everyday life so that technological citizens can count themselves truly protected. We wonder whether the current Dutch supervisory authorities are really able to carry out their mission and whether their mandate is truly adequate. The Netherlands Institute for Human Rights pays little attention to the question of how digitalization can place human rights under pressure. The Dutch Data Protection Authority is given little scope to look at collective values other than privacy.

Second, a social debate over the impact of new technologies is necessary. While civil society is strongly organized to address environmental problems, the Netherlands still has few established social organizations willing to enter into a critical discussion about the new intimate technology revolution, except in relation to privacy and security. Meanwhile, we ought to be asking which collective human values we wish to guarantee in our intimately technological society. If we don’t debate these issues at this early stage, we effectively leave the course of technological advancement to the engineers, to the market, and to individual choice. Pessers warns of the collective effect of individual self-determination, which stealthily confronts society with a fait accompli, without any democratic debate. For example, in the case of prenatal diagnostics, the abortion of a limited number of children with Down syndrome doesn’t change society. But if that starts to happen on a mass scale, it raises the question of whether we really want a society entirely without people with Down syndrome.

If we don’t debate these issues at this early stage, we effectively leave the course of technological advancement to the engineers, to the market, and to individual choice.

Politics and government are called upon to take the lead in the debate and the administrative handling of the intimate technology revolution. Nevertheless, there is at this moment no broad political vision addressing the impact of technology on our being and the current political debate is driven largely by random incidents. For such a vision, further knowledge development is necessary. When it comes to our natural environment, the central concept is ecological sustainability. It required many years and the discovery of new knowledge to give qualitative and quantitative meaning to this concept. We think that in the debate over the relation between technology and humanity, the concept of “human sustainability” must play a central role. Human sustainability means the preservation of human individuality: what aspects of humanity and our being-human do we see as malleable, and which do we want to preserve? Think for example of the desire to keep our empathetic capacities working at a high level, or to have children born from a real mother, not an artificial womb. Concepts such as human dignity and human sustainability require much greater research and consideration.

Finally, citizens must be able to trust that user interests come first when businesses develop new technological products. The increasing fusion between people and technology forces us to keep in mind the values and norms that we design into products and computer coding. On the subject of privacy, academics have argued for years that organizations should pay attention to privacy measures and data minimization when developing information systems. Privacy by design has become a core principle of new European privacy regulations. Privacy-oriented technology is an example of the broader concept of value-sensitive design, which attempts to incorporate not only privacy but a broad range of relevant collective values, including basic human rights, into the development of technology.

This article is republished from NextNature by Ira van Keulen and Rinie van Est, Rathenau Instituut, under a Creative Commons license. Read the original article. This essay has been previously published by the Hans van Mierlo Foundation, a scientific think tank related to the Dutch democratic liberal party (D66).

Source: https://thenextweb.com/syndication/2020/01/08/before-we-augment-people-with-tech-well-need-proper-rules/

06 Jan 2020

Tech trends 2020: New spacecraft and bendy screens

If your ambition is to fly into space – and you’ve got plenty of spare cash – then 2020 could be an exciting year.

If space travel is not really your thing, but you would like a much bigger screen on your mobile phone, then 2020 might also have some tech for you.

But if you think there are already too many phones out there and the technology industry needs to be less wasteful, well some tech companies might catch up with your thinking.

Here’s a little taster of what might be coming in the next twelve months.

Crewed space missions

2020 is going to be a “pivotal year” for space travel, according to Guy Norris, a senior editor at Aviation Week & Space Technology.

Since Nasa retired the Space Shuttle in 2011, the US has relied on Russian spacecraft to transport astronauts to the International Space Station.

That could all change in 2020 when, if all goes to plan, two US-built spacecraft should start carrying crew.

Boeing’s CST-100 Starliner, which can carry up to seven astronauts into orbit, is due for its first test flight before the first crewed flight, likely to be in 2020.

Meanwhile the SpaceX Dragon capsule will go through some final tests in early 2020, and if they all go well then it too would be ready for a crewed mission.

Other systems, designed to reach near-Earth space, could also reach milestones in 2020. Blue Origin, owned by Amazon billionaire Jeff Bezos, could be ready to take tourists on its New Shepard suborbital rocket.

Virgin Galactic could also be ready in 2020 to take passengers into space, more than a decade later than founder Richard Branson originally hoped.

It’s reported that more than 600 people have put down deposits for a Virgin Galactic flight, with tickets costing $250,000 (£195,000).

“It’s finally delivery time for a lot of these long promised programmes and a chance for a whole range of technologies to really prove themselves for the first time,” says Mr Norris.

Technology and the environment

Protests by Extinction Rebellion have helped move climate change up the agenda for technology companies.

Among those that will be under pressure are mobile phone makers. It’s estimated there are 18 billion phones lying around unused worldwide. With around 1.3 billion phones sold in 2019, that number is growing all the time.

Mobile phone makers will be under pressure to make their production processes greener and their phones more easily repairable.

The same will go for the makers of other consumer goods including TVs, washing machines and vacuum cleaners.

Also watch the companies that provide mobile phone services. Vodafone has already promised that in the UK by 2023 its networks will all run on sustainable energy sources. Others are likely to follow suit.

Business travel is under pressure as well. Ben Wood, an analyst at CCS Insight, says it will become “socially unacceptable” to fly around the world for meetings, and firms will switch to virtual meetings.

There could be green initiatives from the cloud computing industry as well. Its facilities, which house thousands of computer servers, use huge amounts of power.

Flexible displays

The launch of Samsung’s first foldable phone in April did not go smoothly. Several reviewers broke the screens and the company had to make some rapid improvements before it went on sale in September.

Motorola had a more successful launch of its new Razr, although some reviewers complained about the price. But this is unlikely to hold the market back. Samsung is expected to launch other devices with flexible displays next year – possibly a tablet.

TCL, the second biggest maker of TVs in China, has also promised to launch its first mobile foldable device in 2020 and then other products quickly after that.

It is betting big on the market, having invested $5.5bn in developing flexible displays.

Analysts say that screens will be incorporated into all sorts of surfaces. Smart speakers might have wrap-around displays, watch-like devices will have straps with displays and fridge doors might have large screens.

Super-fast mobile

We can expect the rollout of high-speed mobile phone networks to continue. By the end of 2019 around 40 networks in 22 countries were offering 5G service.

By the end of 2020 that number will have more than doubled to around 125 operators, says Kester Mann at CCS Insight.

“There could be an interesting development in the way 5G contracts are priced. A 5G contract without a phone will cost around £30 a month and for that you’re likely to get unlimited amounts of data.”

But analysts say that next year we may see prices based on the speed of the service you want – a bit like the way home broadband is already priced.

Vodafone is already offering contracts based on speed in the UK. Also in the UK, the network Three is likely to push its 5G offering as an alternative to broadband at home, analysts say. That might appeal to people who move around a lot – students, for example – and don’t want a fixed-line service.

Quantum computing

Will next year be another big one for quantum computing, the technology that exploits the baffling but powerful behaviour of tiny particles such as electrons and photons?

In October, Google said that its quantum computer had performed a task in 200 seconds that the fastest supercomputer would have taken 10,000 years to complete. There was some quibbling over its achievement, but experts say it was a big moment.

“It’s a fantastic milestone,” says Philipp Gerbert, a member of the deep tech group at consultancy firm BCG: “It’s clear they exceeded the classical computer, by what margin you can debate. They disproved some lingering doubts.”

Mr Gerbert thinks other leaders in the field – IBM, Rigetti and IonQ – could also clear that hurdle: “They all have excellent teams, one or two will reach a similar stage over the next year.”

Once the technology is proven, quantum computers could spur breakthroughs in chemistry, pharmaceuticals and engineering.

Google has also promised to make its quantum computer available for use by outsiders in 2020, but has not provided any details yet.

“Clearly people would love to get access to that,” Mr Gerbert says.

Source: BBC

05 Jan 2020

Tech Tent – tech trends for 2020

Will we start the journey to a better, kinder internet? Which countries are best placed to win the AI race? And should Ivanka Trump be speaking at a tech show? Just some of the questions we address in the first edition of Tech Tent this year.

Last month, the creator of the World Wide Web Sir Tim Berners-Lee, told us of his plan to put it back on the right track. His Contract for the Web aims to get companies, countries and individuals to work together to combat cyber-bullying, misinformation and other online harms.

Catherine Miller of the think tank Doteveryone, which describes its mission as championing responsible technology for a fairer future, gives us her assessment of how likely it is that we will make the web a better place in 2020. She stresses that better regulation will be key, changing the economic incentives that lead the tech giants to fight to keep people hooked on their platforms and that reward damaging behaviour.

When it comes to the race to build what is arguably the key technology of our times – artificial intelligence – the consensus has been that the United States is in the lead, but China is catching up fast. Now a new global AI index produced by the online news site Tortoise has come up with a more nuanced picture.

It found that, yes, the US and China were one and two in AI, with the UK in third place. But Alexandra Mousavizadeh, the data scientist who led the project for Tortoise Intelligence, tells us that China was much further behind than they had expected.

China scored well in research and development, but its 18th-place ranking for skills – having the people with the right capabilities – held it back. “This race is going to be won in many different ways,” says Ms Mousavizadeh, stressing that the free-market, bottom-up approach of the US has proved very fruitful so far, but the top-down Chinese strategy also has its strengths.

But she says that around the world a government strategy for developing human capital – “preparing a workforce for working with and being part of AI driven growth” – will be key.

We also look less far ahead – to CES, the huge annual gadget-fest which opens in Las Vegas on Tuesday. No doubt we will see all sorts of products promising to use AI to give consumers better experiences.

But one of the keynote speakers looks likely to provide the biggest headlines from the show. On the opening day, Ivanka Trump will be discussing the future of work in a session with the Consumer Technology Association’s CEO Gary Shapiro. The invitation to the President’s daughter has sparked controversy, especially as female keynote speakers from the tech industry have been thin on the ground in previous years.

Mr Shapiro tells Tech Tent that the show is about more than gadgets. It addresses key issues such as the impact of automation on work – and he says as the co-chair of the American Workforce Advisory Board, Ms Trump has significant things to contribute to this debate.

But back to technology. I have just been looking back at a blogpost I wrote on New Year’s Eve 2009 as I prepared to head off to the 2010 CES in Las Vegas.

I was very excited about a British firm called Plastic Logic that was going to unveil a radical new e-reader. “It could be one of the show’s stand-out products,” I wrote, “or it could end up buried under an avalanche of hype about a forthcoming rival device from a better-known firm.”

That rival device turned out to be Apple’s iPad, unveiled later that month, and Plastic Logic’s Que device did indeed end up dead and buried.

So, expect to see some startling new products emerging from Las Vegas in the next few days – we are promised a talking frying-pan and a self-driving sofa – but world-changing devices are few and far between, and are likely to be unveiled elsewhere.

Source: BBC

07 Dec 2019
The Growing Importance of Talent Base Economy


Dec 05, 2019 (Heraldkeepers) — A talent-driven economy has become a key requirement for the growth and development of the modern economy. A workforce’s capacity to generate innovative new ideas surpasses all other drivers of economic development, and the future success of an economy depends on the quality of the talent it retains. Some of the main reasons why the modern economy should focus on talent include:

1. Maximize Productivity

In a talent economy, the modern economy’s chances of growth are high. Human capabilities are the fundamental drivers of economic development, and by focusing on talent and brainpower the modern economy can retain the best skills and knowledge. Fresh talent can apply innovative ideas to maximize the productivity of individuals, organizations, and entire communities, which ultimately leads to the growth and success of the overall economy.

2. Better Development

Focusing on talent can help the modern economy gain better development opportunities. The quality of the workforce is of paramount importance for economic development, and the availability of skilled talent helps attract investment and improve the returns on that investment.

3. Expand Capabilities

 

By giving importance to the talent economy, the modern economy can expand its capabilities much more easily. The talent it acquires can help overcome potential challenges with innovative new ideas, and the growing global demand for a talented workforce allows the modern economy to expand successfully.

Effect of Intellectual Property Rights and Patents on Economy

Intellectual property rights and patents have a significant impact on the global economy. Apart from a talent economy, intellectual property rights and patents can also help in stimulating the growth and development of the modern economy.

Intellectual property rights play an important role in encouraging innovation, technical change, and product development. An IPR system that favors the diffusion of information through the low-cost imitation of foreign technologies and products is likely to affect a country’s economy positively.

Intellectual property rights also reward the risk-taking and creativity of new entrepreneurs and enterprises, thereby contributing to economic growth. IPRs can also stimulate the dissemination and acquisition of new information, and the information gained paves the way for further inventions.

Patents also give firms confidence that they will face fewer threats of uncompensated appropriation, helping them introduce products and technologies more readily and strengthening the economy. By strengthening intellectual property rights, developing countries tend to attract greater inflows of technology.

IPRs also encourage the growth and development of interregional and international marketing networks that help achieve economies of scale. A strengthened intellectual property regime induces greater R&D, which helps meet the needs of developing countries.

Focusing on talent as well as intellectual property rights and patents can help in the successful growth and development of the modern economy.

Author: Manahel Thabet

Source: https://www.marketwatch.com/press-release/the-growing-importance-of-talent-base-economy-2019-12-05

05 Nov 2019
THIS POWERFUL MORNING PROCESS WILL CHANGE YOUR LIFE – PRE PAVE WITH INTENTION


The most important time… is the time you give to yourself.

And the most important time to do that is first thing in the morning… Because in the morning you set the intention for the rest of your day.

If you jump out of bed late, rushed, stressed, and in your head – that is what you are pre-paving for the rest of your day, and, in the case of most people… the rest of your life: More rush. More stress. Less living the quality life you deserve.

The key is to get up early enough to allow yourself some time alone. Time to get clear about how you want to feel this day.

Time for intention. Time to get in the energy space of gratitude. Time for meditation. It’s about pre-paving what you want for this day, and your LIFE.

Setting a clear intention and energy, so you attract those things into your experience.

Set the intention for what you want out of the day ahead and get grateful in advance.

This will make sure that you are an energy match to it, and it will soon be in your experience as you are setting the intention for it to be so.

Just go on a rampage of gratitude, of intention and appreciation… It might go something like this:

 

I am grateful today for every moment of calm, every moment of peace, every moment of real connection.
I am grateful for amazing conversations, grateful for every laugh and smile today.
I am grateful for every moment of happiness, especially when I can give that moment to someone else.
I am grateful for every hug. Every kiss. Every moment of real love.
I am grateful for every moment of true presence. When I really feel more connected to everyone and everything around me.

As I am writing these words I am really feeling each moment as if it is really happening – that is perhaps the most important part… the feeling of it.

Putting yourself in that feeling state as if it is really happening. Raising your vibration to that feeling.

Now what that is doing is setting the intention for the day… putting those amazing things in your conscious mind – and so your attention for this day is going to be zeroed in on trying to find and make those things a reality.

This is such a powerful process.

Everything in life is energy. How you show up each day is energy. Your energy is determined by your intention and how you feel.

So make it a priority to feel good. Make it a priority to give yourself time every morning.

Time to meditate, release stress and increase calm. Time in gratitude and pre-paving intention to get in the right energy.

Use whatever words feel natural to you when setting your gratitude intention. Whatever you are really grateful for, and whatever you want to show up in your experience as a FEELING.

Source: https://iamfearlesssoul.com/pre-pave-with-intention/

 

04 Nov 2019
We Need AI That Is Explainable, Auditable, and Transparent


Every parent worries about the influences our children are exposed to. Who are their teachers? What movies are they watching? What video games are they playing? Are they hanging out with the right crowd? We scrutinize these influences because we know they can affect, for better or worse, the decisions our children make.

Just as we concern ourselves with who’s teaching our children, we also need to pay attention to who’s teaching our algorithms. Like humans, artificial intelligence systems learn from the environments they are exposed to and make decisions based on biases they develop. And like our children, we should expect our models to be able to explain their decisions as they develop.

As Cathy O’Neil explains in Weapons of Math Destruction, algorithms often determine what college we attend, if we get hired for a job, if we qualify for a loan to buy a house, and even who goes to prison and for how long. Unlike human decisions, these mathematical models are rarely questioned. They just show up on somebody’s computer screen and fates are determined.

In some cases, the errors of algorithms are obvious, such as when Dow Jones reported that Google was buying Apple for $9 billion and the bots fell for it or when Microsoft’s Tay chatbot went berserk on Twitter — but often they are not. What’s far more insidious and pervasive are the more subtle glitches that go unnoticed, but have very real effects on people’s lives.

Once you get on the wrong side of an algorithm, your life immediately becomes more difficult. Unable to get into a good school or to get a job, you earn less money and live in a worse neighborhood. Those facts get fed into new algorithms and your situation degrades even further. Each step of your descent is documented, measured, and evaluated.

Consider the case of Sarah Wysocki, a fifth grade teacher who — despite being lauded by parents, students, and administrators alike — was fired from the D.C. school district because an algorithm judged her performance to be sub-par. Why? It’s not exactly clear, because the system was too complex to be understood by those who fired her.

Make no mistake, as we increasingly outsource decisions to algorithms, the problem has the potential to become even more Kafkaesque. It is imperative that we begin to take the problem of AI bias seriously and take steps to mitigate its effects by making our systems more transparent, explainable, and auditable.

Sources of Bias

Bias in AI systems has two major sources: the data sets on which models are trained, and the design of the models themselves. Biases in the training data can be subtle – for example, when smartphone apps are used to monitor potholes and alert authorities to contact maintenance crews. That may be efficient, but it’s bound to undercount poorer areas where fewer people have smartphones.

In other cases, data that is not collected can affect results. Analysts suspect that’s what happened when Google Flu Trends predicted almost twice as many flu cases in 2013 as there actually were. What appears to have happened is that increased media coverage led to more searches by people who weren’t sick.

Yet another source of data bias arises when human biases carry over into AI systems. For example, biases in the judicial system affect who gets charged and sentenced for crimes. If that data is then used to predict who is likely to commit crimes, those biases will carry over. In other cases, the humans who tag training data may directly introduce their own biases into the system.


This type of bias is pervasive and difficult to eliminate. In fact, Amazon was forced to scrap an AI-powered recruiting tool because it could not remove gender bias from the results. The tool unfairly favored men because the training data taught the system that most of the firm’s previously hired employees who were viewed as successful were male. Even when any specific mention of gender was eliminated, the system identified certain words that appeared more often in men’s résumés than in women’s as proxies for gender.
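To see how that kind of proxy leakage works, consider the toy simulation below. All of the data is synthetic and the setup is deliberately simplified – it is not Amazon’s system – but it shows the mechanism: a model trained on historically biased hiring outcomes, with the gender column removed, still scores one group higher because a correlated word-frequency feature stands in for gender.

```python
# Toy illustration of proxy leakage: even after the "gender" column is removed,
# a model trained on historically biased hiring outcomes can reproduce the bias
# through a correlated word-frequency feature. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)                 # 1 = male, 0 = female (synthetic)
proxy_word = rng.normal(gender * 1.0, 1.0)     # word usage correlated with gender
skill = rng.normal(0.0, 1.0, n)                # genuinely job-relevant signal
# Historical labels favour men regardless of skill -- the bias we want to expose.
hired = (skill + 1.5 * gender + rng.normal(0, 1, n) > 1.0).astype(float)

# Train logistic regression on (skill, proxy_word) only -- gender is dropped.
X = np.column_stack([np.ones(n), skill, proxy_word])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - hired) / n           # gradient step on mean log-loss

scores = 1.0 / (1.0 + np.exp(-X @ w))
print("mean score, men:  ", scores[gender == 1].mean())
print("mean score, women:", scores[gender == 0].mean())
# The gap persists: the proxy word lets the model infer and reuse the bias.
```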

A second major source of bias results from how decision-making models are designed. For example, if a teacher’s ability is evaluated based on test scores, then other aspects of performance, such as taking on children with learning differences or emotional problems, would fail to register – or might even unfairly penalize the teacher. In other cases, models are constructed from whatever data is easiest to acquire, or a model is overfit to a specific set of cases and then applied too broadly.

Overcoming Bias

With so many diverse sources of bias, we do not think it is realistic to believe we can eliminate it entirely, or even substantially. However, what we can do is make our AI systems more explainable, auditable, and transparent. We suggest three practical steps leaders can take to mitigate the effects of bias.

First, AI systems must be subjected to vigorous human review. For example, one study cited by a White House report during the Obama administration found that while machines had a 7.5% error rate in reading radiology images, and humans had a 3.5% error rate, when humans combined their work with machines the error rate dropped to 0.5%.

Second, much like banks are required by law to “know their customer,” engineers that build systems need to know their algorithms. For example, Eric Haller, head of Datalabs at Experian told us that unlike decades ago, when the models they used were fairly simple, in the AI era, his data scientists need to be much more careful. “In the past, we just needed to keep accurate records so that, if a mistake was made, we could go back, find the problem and fix it,” he told us. “Now, when so many of our models are powered by artificial intelligence, it’s not so easy. We can’t just download open-source code and run it. We need to understand, on a very deep level, every line of code that goes into our algorithms and be able to explain it to external stakeholders.”

Third, AI systems, and the data sources used to train them, need to be transparent and available for audit. Legislative frameworks like GDPR in Europe have made some promising first steps, but clearly more work needs to be done. We wouldn’t find it acceptable for humans to be making decisions without any oversight, so there’s no reason why we should accept it when machines make decisions.

Perhaps most of all, we need to shift from a culture of automation to augmentation. Artificial intelligence works best not as some sort of magic box you use to replace humans and cut costs, but as a force multiplier that you use to create new value. By making AI more explainable, auditable and transparent, we can not only make our systems more fair, we can make them vastly more effective and more useful.

Source: https://hbr.org/2019/10/we-need-ai-that-is-explainable-auditable-and-transparent

03 Nov 2019
AI May Not Kill Your Job—Just Change It


Don’t fear the robots, according to a report from MIT and IBM. Worry about algorithms replacing any task that can be automated. 

Martin Fleming doesn’t think robots are coming to take your jobs. The chief economist at IBM, Fleming says those worries aren’t backed up by the data. “It’s really nonsense,” he says. A new paper from MIT and IBM’s Watson AI Lab shows that for most of us, the automation revolution probably won’t mean physical robots replacing human workers. Instead, it will come from algorithms. And while we won’t all lose our jobs, those jobs will change, thanks to artificial intelligence and machine learning.

Fleming and a team of researchers analyzed 170 million online US job listings, collected by the job analytics firm Burning Glass Technologies, that were posted between 2010 and 2017. They found that, on average, tasks such as scheduling or credential validation, which could be performed by AI, appeared less frequently in the job listings in the more recent years. The recent listings also included more “soft skills” requirements like creativity, common sense, and judgment. Fleming says this shows that work is being re-sorted: AI is taking over more easily automated tasks, and workers are being asked to do things that machines can’t do.
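The core measurement behind that finding is straightforward to picture: for each year, count the share of listings that mention a given kind of task and watch how the shares move. The sketch below is a toy illustration of that idea only – the handful of in-line listings and keyword lists are invented for the example, not the actual Burning Glass data or the study’s methodology.

```python
# Minimal sketch: what share of job listings per year mentions a given task?
# The tiny in-line dataset and keyword lists are made up for illustration.
from collections import defaultdict

listings = [
    (2010, "schedule meetings, validate credentials, manage calendar"),
    (2010, "design marketing materials, customer outreach"),
    (2017, "creative campaign design, judgment on pricing strategy"),
    (2017, "manage key accounts, common-sense troubleshooting"),
]

def task_share_by_year(listings, keywords):
    totals, hits = defaultdict(int), defaultdict(int)
    for year, text in listings:
        totals[year] += 1
        if any(k in text.lower() for k in keywords):
            hits[year] += 1
    return {year: hits[year] / totals[year] for year in sorted(totals)}

print("automatable tasks:", task_share_by_year(listings, ["schedule", "validate credentials"]))
print("soft skills:      ", task_share_by_year(listings, ["creative", "judgment", "common-sense"]))
```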

If you’re in sales, for example, you’ll spend less time figuring out the ideal price for your product, because an algorithm can determine the optimal price to maximize profits. Instead, you might spend more time managing customers or designing attractive marketing materials or websites.

In the study, researchers divided the listings into three groups based on the advertised pay, then examined how different tasks were being valued. What they found is that how we value tasks may be starting to change.

Design skills, for example, were in particularly high demand and increased the most across wage brackets. Within personal care and services occupations—which generally are low-wage—pay for jobs that included design tasks, such as presentation design or digital design, increased by an average of $12,000 over the study period, after inflation. The same can be said of higher wage earners in business and finance who have deep industry expertise that can’t yet be matched by AI. Their wages went up more than $6,000 annually.

Some low-wage occupations like home health care, hairstyling, or fitness training are insulated from the impact of AI because those skills are hard to automate. But middle-wage earners are starting to feel the squeeze. Their wages are still rising, but after adjusting for the shifts in tasks for those jobs, the report found, those wages weren’t growing as quickly as low-wage and high-wage jobs. In some industries, like manufacturing and production, wages actually decreased. There are also fewer middle-wage jobs. Some are getting simpler and being replaced by low-wage jobs. Others now require more skills and are becoming high wage.

Fleming is optimistic about what AI tools can do for work and for workers. Just as automation made factories more efficient, AI can help white-collar workers be more productive. The more productive they are, the more value they add to their companies. And the better those companies do, the higher wages get. “There will be some jobs lost,” he says. “But on balance, more jobs will be created both in the US and worldwide.” While some middle-wage jobs are disappearing, others are popping up in industries like logistics and health care, he says.

As AI starts to take over more tasks, and the middle-wage jobs start to change, the skills we associate with those middle-class jobs have to change too. “I think that it’s rational to be optimistic,” says Richard Reeves, director of the Future of the Middle Class Initiative at the Brookings Institution. “But I don’t think that we should be complacent. It won’t just automatically be OK.”

The report says these changes are happening relatively slowly, giving workers time to adjust. But Reeves points out that while these changes may seem incremental now, they are happening faster than they used to. AI has been an academic project since the 1950s. It remained a niche concept until 2012, when tests showed neural networks could make speech and image recognition more accurate. Now we use it to complete emails, analyze surveillance footage, and decide prison sentencing. The IBM and MIT researchers used it to help sort through all the data they analyzed for this paper.

That fast adoption means that workers are watching their jobs change. We need a way to help people adjust from the jobs they used to have to the jobs that are now available. “Our optimism actually is rather contingent on our actions, on actually making good on our promise to reskill,” says Reeves. “We are rewiring our economy but we haven’t rewired our training and education programs.”

Read more: https://www.wired.com/story/ai-not-kill-job-change-it/

02 Nov 2019
AI Stats News: 64% Of Workers Trust A Robot More Than Their Manager


Recent surveys, studies, forecasts and other quantitative assessments of the progress of AI highlighted workers’ positive attitudes toward AI and robots, challenges in implementing enterprise AI, the perceived benefits of AI in financial services, and the impact of AI on the business of Big Tech.

AI business adoption, attitudes and expectations

  • 50% of workers are currently using some form of AI at work, compared to only 32% last year; workers in China (77%) and India (78%) have adopted AI at more than twice the rate of those in France (32%) and Japan (29%).
  • 65% of workers are optimistic, excited and grateful about having robot co-workers, and nearly a quarter report having a loving and gratifying relationship with AI at work.
  • 64% of workers would trust a robot more than their manager, and half have turned to a robot instead of their manager for advice; workers in India (89%) and China (88%) are more trusting of robots than of their managers, but less so in the U.S. (57%), UK (54%) and France (56%).
  • 82% think robots can do things better than their managers, including providing unbiased information (26%), maintaining work schedules (34%), problem solving (29%) and managing a budget (26%); managers are better than robots at understanding workers’ feelings (45%), coaching them (33%) and creating a work culture (29%).
  [Oracle survey of 8,370 employees, managers and HR leaders in 10 countries]

The growth of AI applications in deployment was actually less this year than last year, with the total percentage of CIOs saying their company has deployed AI now at 19%, up from 14% last year—far lower than the 23% of companies that thought they would newly roll out AI in 2019 [Gartner]

  • 74% of Financial Services Institutions (FI) executives said AI was extremely or very important to the success of their companies today, while 53% predicted it would be extremely important three years from now.
  • About 75% expected that over the next three years their organizations will gain major or significant benefits from AI in increased efficiency/lower costs.
  • While 61% of FI executives said they knew about an AI project at their companies, only 29% of these executives reported on a project that had been fully implemented; only 29% of AI projects are within the full implementation phase, with 46% still pilots, 35% in proof of concept and 24% in initial planning.
  • Challenges include securing senior management commitment (45%) and securing adequate budget (44%); technologies used in AI projects include virtual agents (72%) and natural language analysis (56%).
  • 50% found it extremely or very challenging to secure talent and 49% found it extremely or very challenging to attract and retain professionals with appropriate skills.
  [Cognizant survey of FI executives in US and Europe]

82% of CEOs say they have a digital initiative or transformation program, but only 23% think their organizations are very effective at harvesting the results of digital, and even fewer CIOs would say they are very strong at this [Gartner surveys of CEOs and CIOs]

Read more: https://www.forbes.com/sites/gilpress/2019/11/01/ai-stats-news-64-of-workers-trust-a-robot-more-than-their-manager/#777497912b21

31 Oct 2019
Employees Worldwide Welcome ‘AI Coworkers’ To The Office

Employees Worldwide Welcome ‘AI Coworkers’ To The Office

Last year, many Americans worried that artificial intelligence (AI) might replace them at work. This year, employees around the world are wondering why their employers don’t provide them with the kind of AI-enabled technology they’re starting to use at home. 

That’s one way to think about the results of a second annual survey about AI in the workplace, conducted by Oracle and research firm Future Workplace. This year, 50% of survey respondents say they’re currently using some form of AI at work—a major leap compared to only 32% in last year’s survey.


Source: https://www.forbes.com/sites/oracle/2019/10/31/employees-worldwide-welcome-ai-coworkers-to-the-office/#47aa68266681