Month: August 2019

31 Aug 2019
AI-enhanced ECGs may soon assess overall health

Scientists have trained an artificial intelligence tool to predict sex and estimate age from electrocardiogram readouts. They suggest that, with further development, the tool could soon be helping doctors to assess the overall health of their patients.

An electrocardiogram, also known as an ECG or EKG, is a painless, simple test that records the electrical activity of a person’s heart.

A recent paper in the journal Circulation: Arrhythmia and Electrophysiology describes how the team developed an artificial intelligence (AI) tool to predict sex and estimate age from ECG data.

The researchers, from the Mayo Clinic College of Medicine and Science, in Rochester, MN, trained the AI tool, which is of a type known as a convolutional neural network (CNN), using ECG readouts from nearly 500,000 individuals.

When they tested the CNN’s accuracy on a further 275,000 people, they found that it was very good at predicting sex but less good at predicting age. The AI tool got the sex right 90% of the time but only got the age right 72% of the time.
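
For readers curious what such a model looks like in code, the sketch below shows a minimal dual-head 1-D convolutional network of the general kind described: one head classifies sex, the other regresses age from a raw ECG trace. The architecture, lead count, and signal length are illustrative assumptions, not the Mayo Clinic team’s actual model.

```python
# Minimal sketch of a dual-head 1-D CNN for ECG data (PyTorch).
# Lead count, signal length, and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class EcgNet(nn.Module):
    def __init__(self, leads=12, samples=5000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(leads, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),      # collapse the time axis
        )
        self.sex_head = nn.Linear(32, 1)  # logit for sex classification
        self.age_head = nn.Linear(32, 1)  # regressed age in years

    def forward(self, x):                 # x: (batch, leads, samples)
        h = self.features(x).squeeze(-1)  # (batch, 32)
        return self.sex_head(h), self.age_head(h)

model = EcgNet()
sex_logit, age_est = model(torch.randn(8, 12, 5000))  # batch of 8 ECGs
```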

The team then focused on 100 people in the test batch for whom they had at least 20 years of ECG readouts.

This closer investigation revealed that the accuracy of the AI tool’s age estimates depended on whether the individuals had experienced heart conditions.

AI has potential to glean ‘physiologic age’

For individuals who had experienced heart conditions, the AI tool’s age estimates tended to be greater than their chronological ages.

For those who had experienced few or no heart conditions, the AI tool’s age estimates were much closer to the participants’ chronological ages.

The results showed that for people who had experienced low ejection fraction, high blood pressure, and heart disease, the AI tool estimated their ages to be at least 7 years greater than their chronological ages.

Ejection fraction is a measure of how well the heart is pumping.

The researchers say that these results suggest that the tool appears to be estimating biological, or physiologic, age, which, in contrast to chronological age, reflects a person’s overall health status and body function.

“This evidence,” says senior study author Dr. Suraj Kapa, assistant professor of medicine at the Mayo Clinic, “that we might be gleaning some sort of ‘physiologic age’ was certainly both surprising and exciting for [AI’s] potential role in future outcomes research and may foster a new area of science where we seek to better understand the biologic underpinnings of such a finding.”

Physiologic age marker to aid overall health assessment

Even people with no medical training can see that different people appear to age differently.

Scientists studying aging are increasingly turning to physiologic age as a way to measure the progress of biological aging, as opposed to the simple passage of time.

To this end, they have proposed a number of biomarkers, including those that measure substances in the blood, epigenetic alterations to DNA, and the level of frailty.

Dr. Kapa and colleagues suggest that the ability to detect discrepancies between chronological age and the age suggested by the heart’s electrical signals could serve as a useful biomarker for hidden heart disease and other conditions.

“Being able to more accurately assess overall health status may help doctors determine which patients they should examine further to determine if there are asymptomatic or currently silent diseases that could benefit from early diagnosis and intervention,” Dr. Kapa explains.

The researchers call for further studies to validate the use of the AI-enhanced ECG as a way to estimate physiologic age in healthy people.

The data that they used came from people who had undergone ECGs for clinical reasons.

Source: https://www.medicalnewstoday.com/articles/326208.php

29 Aug 2019
How Artificial Intelligence Is Going to Transform the Engineers’ Jobs

In an exclusive interview, David Wood, Futurist, Chair of London Futurists, and Peter Jackson, Software Engineer member of London Futurists, share with us how Artificial Intelligence is going to impact the future of engineers’ jobs and how to prepare for it.

An engineer checking and controlling robotic welding arms in a smart automotive factory through monitoring software: a scene that used to belong to the realm of science fiction.

However, this is the reality of digital manufacturing operations in today’s Industry 4.0 smart factories. Industry 4.0 is a concept that originated in Germany and is related to the integration of Internet of Things (IoT) and Machine-to-Machine (M2M) technology with networking and analytics with industry processes.

Another ingredient in Industry 4.0 is Artificial Intelligence, one of the fastest-growing emerging technologies, with a market expected to reach $70 billion by 2020. Artificial Intelligence is already transforming jobs across all industries, and the trend is accelerating. Naturally, human fears are on the rise.

The adoption of extreme automation, AI, and robotics, as well as extreme connectivity, will continue to put pressure on low- and middle-skilled workers. On the other hand, we are going to see an increase in demand for highly skilled and adaptable professionals in the industry.

Futurists and industry analysts anticipate the creation of new companies and sectors, as well as new positions that do not yet exist.

The first jobs affected by automation include clerical work, sales, customer service, and support functions. Robotic process automation, automatic reporting, and virtual assistants are becoming increasingly more common.

Automation is also taking over insurance processing, incoming customer queries, and customer calls. Robo-advisors can quickly sift through millions of emails and dramatically cut the cost of legal investigations.

We can also expect to see a decrease in managerial positions as the ranks of lower- and middle-skilled workers shrink. These workers must now re-skill for tasks that extreme automation cannot perform, or move into other industries to avoid unemployment.

Artificial Intelligence is inevitably evolving into yet more advanced Natural Language Processing (NLP). This means that higher-skilled workers who do routine tasks may also be at risk. However, futurists don’t expect this Fourth Industrial Revolution to result in an aggregate increase in global unemployment. In time, evolution and adaptation are going to play their part.

One thing is certain: There is no time to waste fighting change. Now it is time to embrace change and flexibility. These are the keys to success in the Fourth Industrial Revolution. After all, change cannot exist without evolution.

So, what is there to do? To dive into how Artificial Intelligence is going to change, transform, and evolve engineers’ jobs, I sat down with David Wood, D.Sc., Futurist, Chair of London Futurists, member of the Institute for Ethics and Emerging Technologies (IEET) Board of Directors, and author of Transcending Politics: A technoprogressive roadmap to a comprehensively better future and Sustainable Superabundance: A Universal Transhumanist Invitation, and with Peter Jackson, Software Engineering Consultant and active member of London Futurists.

The interview, edited for length here below, took place after a London Futurists meeting in London, England.

How is Artificial Intelligence going to affect the engineers’ jobs?
Peter Jackson: As an engineer, I would say that throughout my life it has been a process of constantly adapting to changes in technology. So, an engineer’s life is never static. One is always learning new skills in order to keep up with changes in how development actually happens. Whether one is a software engineer or another sort of engineer, several technological changes will affect the course of one’s career. The job of an engineer, in a way, is to make ourselves redundant.

David Wood: One thing that is different with engineers in the next couple of decades compared to the past is the pace at which engineers will have to learn new skills. In the past, people who wanted to do well in their career needed to be able to adapt to the changes in tools and technologies. But they will have to do it more quickly in the future than in the past.

What kind of new skills will engineers need to learn?
Peter Jackson: Learning how to learn. Rather than getting entrenched in a particular way of working or a particular set of tools, one should always be open to developing techniques with the latest tools available.

David Wood: The three skills that everybody is going to need in the near future are how to live with robots, how to work alongside robots, and then how to design work so that the interaction between humans and robots is better. Engineers are going to need all of these skills, frankly.

The ones who are going to do very well are those who understand the possibilities of the technology and then design it so that human engineers can work best with it. But it is not just about collaborating more with technology; I think it is also about collaborating with each other.

Because no individual is going to be able to understand all the different tools and techniques that might be relevant to doing their job better. The critical skill there is figuring out the right communities, the right partners, and the right people in those communities who can help you stay current with the knowledge.

And last but not least, to be a bit controversial, I think the skill of emotional intelligence is going to become critical. Because, without the courage to embrace change and the willingness to try something risky, people are more likely to get stuck in a rut. Sometimes this is called a soft skill. But frankly, for success in the future, more of us are going to need this particular soft skill.

How do you see the collaboration between humans and machines in the future when humans will have to co-live and co-work with Artificial General Intelligence (AGI)?
Peter Jackson: As machine technology advances, in a sense it becomes more human-like in the way it interacts with the people using it. And, just as one develops a certain empathy for the people one works with, particularly as a certain kind of engineer, one develops a certain empathy with the machines one works with as well.

On the ethics of Artificial Intelligence . . .
David Wood: To me, ethics means there are things we could do but decide not to do. It also means paying careful attention to issues of safety and fairness. But even more than that, it means ensuring we achieve the maximum possible benefits when benefits are measured broadly, not just pursuing profit on the bottom line, for example.

So, it’s quite a wide topic. In the past, we often shrugged aside ethics and said it doesn’t really impact jobs very much. But frankly, with the pace of change that is coming and with the set of rich possibilities ahead we will have to think harder about ethics.

Peter Jackson: Not only do you have to choose carefully what not to do; sometimes you also have to make sure you do the right thing in whatever context makes sense. There are some ways in which technologies can develop where it would be almost criminal not to take advantage of what’s available to improve the human condition.

David Wood: In the short term, some people might say they don’t want to adopt a particular technology because it’s going to leave people with less work to do. On the other hand, that technology might enable goods and services to be delivered or created more cheaply and with higher quality. And frankly, I think that’s the bigger picture.

I don’t look forward to a world in which everybody is working flat out 40 hours a week or more. I look forward to a world in which people work less often and for shorter stretches, while automation reliably produces lots of goods and services for all of us. If that means fewer hours in employment, I personally think it’s not a bad thing but something we should actually welcome.

Is distributing a Universal Basic Income the answer to balancing more automated jobs with fewer work hours?
David Wood: On the question of ensuring that people who aren’t working so many hours still have a sufficient income, I think the ultimate thing required here isn’t so much soft skills as political skills.

What I mean by that is that we need to change society’s social contract so that it looks more generously at the needs of people who are not working, without judging them as inadequate, or as second-class or even third-class citizens. Unless we can have that transformation, we might end up in a situation of great inequality, technological unemployment, and technological underemployment.

And the way to fix that is not somehow to expect people to learn new skills to make them more capable than the robots and the AI, and the algorithms. It is to ensure that society redistributes effectively and fairly the abundance which is generated by automation.

And that is going to require politics as well as engineering; or rather, it is going to require an engineering of politics, which is perhaps the future for some engineers.

Source: https://interestingengineering.com/how-artificial-intelligence-is-going-to-transform-the-engineers-jobs

28 Aug 2019
Realizing the Potential of Disruptive Technologies

Electric scooters dominate the streets of our cities, used for nearly 40 million trips across the country last year alone. Bitcoin and other cryptocurrencies continue to emerge as formidable alternatives to cash and credit, with companies like Square boasting nearly $125 million in sales. And autonomous cars are making their debut as soon as this month in New York City and California.

These are just a few examples of how disruptive technologies are reaching into all corners of society and reshaping contemporary life; from how we pay for goods and services to how we commute and engage with co-workers and peers from across the globe.

Yet, as disruptive technologies continue to make headlines, they also prompt the need for specialized regulations that protect people and preserve existing infrastructure. This leaves many local and state governments with a daunting challenge: how to reconcile the seemingly competing imperatives of safety and innovation.

Cities across the nation have banned scooters, citing rises in accidents and fears of fatalities. U.S. Department of the Treasury officials, in the wake of Facebook’s Libra, have recently argued that cryptocurrencies represent threats to national security. And widespread distrust in autonomous cars has driven automakers to halt the once-breakneck pace of development for this technology.

Rather than propel innovators to work to improve these new technologies, history tells us that stringent regulatory legislation or even outright bans have, more often than not, caused innovators to abandon them. In 1865, for example, the British Parliament responded to the advent of steam-powered vehicles — and the fear that they would endanger other users of public roadways — with a law requiring that such vehicles be preceded by a pedestrian waving a red flag as a warning signal. Unsurprisingly, this law discouraged further development of “horseless carriages” in Britain, effectively smothering a nascent industry and creating opportunity for more forward-looking nations. Just 15 years later, the first internal-combustion vehicles were introduced in Germany.

In the face of rapid change, experimentation, and the “failing forward” that defines the current era of disruptive innovation, regulators struggle to keep pace because they continue to adopt a reactive posture, developing rules in response to new technologies on a fixed timeline. Such a standpoint, though, fails to account for the rapid cycles of iterative development that innovations undergo. And so, to catch up, regulators are often tempted to implement bans or strict precautionary regulations that prevent new technologies from achieving their transformative potential.

In turn, the United States runs the same risk of falling behind other nations at the forefront of this technological revolution.

However, history also illustrates that when regulators strike that right balance in their policies and guidelines, they can encourage innovators and businesses to improve their technologies in ways that reflect needed protections. While federal anti-pollution laws enacted in the late 1970s didn’t ban gasoline-powered automobiles, they did require automakers to reduce emissions to a certain level. Although General Motors initially resisted the new legislation, GM eventually turned to scientists at Corning Incorporated to develop the materials essential for catalytic converters, which made those necessary reductions in emissions possible.

By taking a more adaptive approach to regulation, characterized by a similar cycle of trial and error for the very technologies that they are monitoring, regulators can more effectively respond to new developments, jettison rules that no longer work well, and quickly implement new, more effective policies.

Regulatory spaces, sometimes referred to as “sandboxes,” are emerging as promising incubators for adaptive regulations, encouraging innovators to develop safer or better technology through waivers, close partnerships across sectors, and testing opportunities with small cohorts of customers. These spaces not only allow for technological improvements, but enable the collaborative creation of regulations that benefit both businesses and society. The United Kingdom, for instance, has already put this vision into practice for two dozen companies at the forefront of financial technology — and other countries have followed suit.

Why not extend the same approach to autonomous cars, scooters, and other emerging technologies ranging from 5G to artificial intelligence? We can encourage rapid prototyping and real-time monitoring that surfaces necessary adjustments in the interest of safety and aligned regulations — and all in a low-stakes testing environment.

As we step over yet another electric scooter splayed across the sidewalk or read about problems with another form of cryptocurrency, we should not just gripe about the inconvenience, danger or aesthetic demerits of the technology — or call for the technology to be banned outright.

Instead, let’s facilitate conversations between government officials and private companies to regulate them better. Let’s structure opportunities for innovators and entrepreneurs to pilot their products under close oversight and consultation. If we want these companies to behave better, and technologies to be safer, let’s create a framework that balances their interests with those of the public.

Source: https://www.govtech.com/analysis/Realizing-the-Potential-of-Disruptive-Technologies-Contributed.html

27 Aug 2019
Why blockchain, despite some early success, remains a corporate enigma

While the benefits of blockchain seem straightforward, the nuances around implementing it – including adding business partners to a network, integrating it with legacy systems, and navigating uncertain regulatory waters – make its future uncertain.

While blockchain is moving beyond pilot projects and proof-of-concept testing in some industries, companies still struggle to justify development spending and continue to have concerns around security, interoperability, bandwidth, and regulatory uncertainty.

Earlier this month, research firm IDC published its semi-annual blockchain spending guide; it showed blockchain spending this year is forecast to be $2.7 billion, up 80% over 2018.

By 2023, spending on blockchain hardware, software and services is expected to reach $15.9 billion, according to IDC. Adoption of blockchain for financial services, identity, trade, and other markets “is encouraging,” IDC stated.

Global blockchain spending will be led by the banking industry, which will account for roughly 30% of the worldwide total in IDC’s five-year forecast, which runs from 2018 through 2023. Discrete manufacturing and process manufacturing will be the next largest industries, with a combined share of more than 20% of overall spending.

Process manufacturing will also see the fastest rate of spending growth (a 68.8% compound annual growth rate), making it the second-largest industry for blockchain spending by the end of 2023. (Overall, IDC expects blockchain spending to grow at a compound annual growth rate of 60.2% through 2023.)
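
As a quick sanity check on those figures, a compound annual growth rate (CAGR) can be reproduced in a few lines. The 2018 baseline below is inferred from the stated 80% growth to $2.7 billion, not taken directly from IDC.

```python
# Reproducing IDC's overall blockchain CAGR from the figures in the text.
# Assumed baseline: 2019 spending of $2.7B is 80% above 2018,
# implying roughly $1.5B in 2018 (2.7 / 1.8).
spend_2018 = 2.7 / 1.8   # ~1.5 ($B), inferred
spend_2023 = 15.9        # ($B), IDC forecast

years = 2023 - 2018
cagr = (spend_2023 / spend_2018) ** (1 / years) - 1
print(f"CAGR 2018-2023: {cagr:.1%}")  # ~60.3%, consistent with IDC's 60.2%
```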

In a recent survey of enterprise executives by IDC, the results of which have yet to be published, 62% of respondents indicated they are considering blockchain in the long run, currently deploying it in production, or evaluating it, according to IDC Research Director James Wester. That number is up slightly from last year, when 55% indicated they were involved with blockchain projects, and from 2017, when 51% said they were deploying or considering it.

Blockchain, Wester said, has reached a “tipping point” across multiple use cases, such as cross-border payments and settlement and supply chain management and tracking, and many companies in those areas are quickly moving from pilots into production.

Read more: https://www.computerworld.com/article/3434067/despite-growth-in-some-industries-blockchains-future-remains-cloudy.html

26 Aug 2019
The Three Types of Artificial Intelligence: Understanding AI

AI is rapidly evolving. Artificial Super Intelligence could be here sooner than expected.

According to a Gartner survey of over 3,000 CIOs, Artificial Intelligence (AI) was by far the most-mentioned technology, taking the spot of top game-changing technology away from data and analytics, which now occupies second place.

AI is set to become the core of everything humans are going to be interacting with in the forthcoming years and beyond.

Robots are programmable entities designed to carry out a series of tasks. When programmers embed human-like intelligence, behavior, and emotions into robots, and even engineer ethics into them, we say they create robots with embedded Artificial Intelligence, able to mimic tasks a human can perform, including debating, as IBM showed earlier this year at CES in Las Vegas.

IBM has made a human-AI debate possible through its Project Debater, aimed at helping decision-makers make more informed decisions.

Depending on the type of tasks carried out by AI robots, AI has been divided into different categories. It is worth noting, however, that AI is still in its infancy. In the future, AI is going to look and behave quite differently from what it is today.

To be prepared for the future, we need to start brushing up on our knowledge of AI. Humans also need to be prepared for the challenges and changes AI will bring to society and humankind as a whole. So, what actually is Artificial Intelligence?

What is AI?: The three types of Artificial Intelligence 

“AI is the science and engineering of making intelligent machines, especially intelligent computer programs.” – John McCarthy

First of all, to be able to participate in today’s discussions about Artificial Intelligence and to understand the changes it will bring to the future of humanity, we need to know the basics.

The different types of AI depend on the level of intelligence embedded into a robot. We can clearly categorize AI into three types: 

Artificial Narrow Intelligence (ANI) 

Artificial Narrow Intelligence (ANI), also known as Narrow AI or Weak AI, is a type of Artificial Intelligence focused on a single, narrow task. It possesses a narrow range of abilities. This is the only AI in existence today, for now.

Narrow AI is something most of us interact with on a daily basis. Think of Google Assistant, Google Translate, Siri, Cortana, or Alexa. They are all examples of machine intelligence that uses Natural Language Processing (NLP).

NLP is used in chatbots and similar applications. By understanding speech and text in natural language, these systems are programmed to interact with humans in a personalized, natural way.

AI systems today are used in medicine to diagnose cancers and other illnesses with extreme accuracy by replicating human-like cognition and reasoning. 

Artificial General Intelligence (AGI) 

When we talk about Artificial General Intelligence (AGI) we refer to a type of AI that is about as capable as a human.

However, AGI is still an emerging field. Since the human brain is the model for creating general intelligence, it seems unlikely that this will happen any time soon, because we lack comprehensive knowledge of the brain’s functionality.

Yet, as history has shown many times, humans are prone to creating technologies that become dangerous to human existence. Why, then, would trying to create algorithms that replicate brain function be any different? When this happens, humans will have to accept the consequences it might bring.

Artificial Super Intelligence (ASI) 

Artificial Super Intelligence (ASI) is way into the future. Or so we believe. To reach this point and be called ASI, an AI will need to surpass humans at absolutely everything; ASI is achieved when AI is more capable than humans.

This type of AI will be able to perform extraordinarily well at things such as art, decision making, and emotional relationships, the things that today differentiate a machine from a human. In other words, things that are believed to be strictly human.

However, many could argue that humans have not yet mastered the art of emotional relationships or good decision making. Does this mean that, perhaps a few centuries into the future, Artificial Super Intelligence will master areas where humans have failed?

Source: https://interestingengineering.com/the-three-types-of-artificial-intelligence-understanding-ai

25 Aug 2019
The Real Benefits of Blockchain Are Here. They’re Being Ignored

Introducing as many people as possible to the benefits of decentralization is a cause almost everyone in this industry shares. The issue is that, in making the technology more accessible, many developers are sacrificing the benefits of decentralization for the sake of convenience.

A decentralized product should keep three key promises to its customers:

Censorship-resistant: your stuff is safe and can’t be tampered with
Self-sovereign: you own and control your assets, identity, and data
Open ecosystems: everyone gets value from new contributions
Dapper Labs has a few horses in this race: we started with CryptoKitties, still the most popular blockchain game by transaction volume, and recently announced NBA Top Shot, a new blockchain-based ecosystem being developed in partnership with the NBA and NBPA. We also shipped Dapper, one of the first ‘smart wallets’ for ethereum.

The value of censorship resistance and customers owning their own data is relatively well understood. Less attention is being paid to the other big benefit of crypto that centralized approaches compromise: open ecosystems.

Open ecosystems are the cornerstone
Open ecosystems enable anyone to contribute to a platform or someone else’s work on the platform and receive rewards for their work. On ethereum, we’re seeing open ecosystems appear in the realm of decentralized finance (DeFi).

MakerDAO’s DAI, an algorithmic stablecoin, is used by dapps like Dharma, Compound Finance, and many others. These decentralized lending applications provide competitive rates using Dai to attract borrowers while enabling lenders to earn from assets they already own.

Compound Finance and Uniswap make MakerDAO stronger when combined together as opposed to existing individually. These open ecosystems are even multi-layered, using smart contracts from multiple primitives to create infinite possibilities. For example, Opyn is a non-custodial trading platform built on top of Ethereum, Compound, Uniswap, and MakerDAO’s DAI.

Without Compound or Uniswap, Opyn wouldn’t be able to exist.

“The combination of Primitives will enable the creation of protocols and systems that weren’t possible prior to their existence. These emergent systems will be greater than any of the individual primitives on their own.” — The Emergence of Cryptoeconomic Primitives by Jacob Horne

Turning creators, users and developers into stakeholders
In an open ecosystem, users, developers, and the original creators can all capture value.

Users get more choice (because anyone can add features to anything), and users ultimately decide what’s important. The speed of software innovation increases because developers can use each other’s code like lego blocks.

Developers who build on existing code are, in many ways, marketing the original creator’s product for them, further increasing the reach of the brand. In return, developers tap into an existing and qualified user base.

As a result, trust is built through a cyclical relationship between all participating parties.

“I feel like we’re in a unique position where the users of the platform have an incentive to work hard to see the platform succeed, and if given the opportunity, we would move mountains.”

– kabciane, a KittyVerse developer creating numerous utility contracts

In the context of MakerDAO’s DAI, every developer using DAI in their dapp is effectively evangelizing what MakerDAO has done for the decentralized finance ecosystem.

Why aren’t there more blockchain games?
Open ecosystems have significant long-term benefits, but as CoinDesk’s Brady Dale recently pointed out, they’re difficult to create in games. By using sidechains or centralizing the data that matters most to third-party creators, dapp developers are inhibiting potential open ecosystems tied to their experiences.

Developers are building full-stack games, with most of the data existing off-chain, resulting in less composability, less shared data, and effectively closed ecosystems.

One of the major design decisions for CryptoKitties was to compute and store the genes on the ethereum blockchain. It would have been far easier not to do so, and the resulting experience would have been more accessible — but many of the things that make CryptoKitties interesting or valuable to this day would not have been possible.

Developers need access to these genes to make third-party games like KotoWars and Mythereum, both of which create more utility and value for specific genes (i.e. certain cats are more valuable because these experiences exist).

If CryptoKitties had decided to reduce the decentralized value of the game for the sake of accessibility, The KittyVerse wouldn’t exist, the game wouldn’t be as trustworthy, and the tokens wouldn’t have nearly as much value or utility to players as a result.

Open ecosystems are important outside of DeFi
Cheeze Wizards, Dapper Labs’ newest game, attempts to leverage as many lessons as possible from CryptoKitties.

It’s specifically designed as an open ecosystem: third-party developers can utilize the Cheeze Wizards API and art assets before the game launches its first official tournament later this summer. Cheeze Wizards is further encouraging developers to play in the open ecosystem via a month-long hackathon, with $15,000 in cash prizes and a whole host of other rewards as incentives.

Cheeze Wizards itself is composed of “tournaments” hosted by either Dapper Labs or third-party developers. The contract and logic for these tournaments are entirely on-chain, which means any developer can create their own tournament and take a percentage from the amount raised.

The tournament contract is a built-in business model for developers to build on top of existing IP, something that has never been possible before with second-layer experiences.
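
As a rough illustration of that business model, the toy sketch below (in Python, not an actual on-chain contract) shows how a third-party host’s cut and the prize pool might be computed. The fee percentage and units are assumptions, not the real Cheeze Wizards contract logic.

```python
# Toy model of the tournament business model described above: a
# third-party host collects entry fees and keeps a percentage, with the
# remainder forming the prize pool. Fee and units are illustrative.
class Tournament:
    def __init__(self, host, host_fee_pct):
        self.host = host
        self.host_fee_pct = host_fee_pct
        self.pot = 0  # in finney (0.001 ETH); integers avoid rounding

    def enter(self, entry_fee):
        self.pot += entry_fee

    def settle(self):
        host_cut = self.pot * self.host_fee_pct // 100
        return host_cut, self.pot - host_cut  # (host cut, prize pool)

t = Tournament(host="third_party_dev", host_fee_pct=5)
for _ in range(100):
    t.enter(10)        # 100 entries at 10 finney each
print(t.settle())      # (50, 950): host keeps 5%, the rest is prizes
```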

Acknowledging the reality that ethereum doesn’t scale today, Cheeze Wizards is really by and for the crypto community.

Blockchains and dapps can be designed so developers can earn their fair share in contributing to an ecosystem. Rewarding developers for maintaining or improving a network is the hidden treasure that’s yearning to be discovered by open ecosystems.

Read more: https://www.coindesk.com/the-real-benefits-of-blockchain-are-here-theyre-being-ignored

24 Aug 2019
AI Facial Recognition Software Has Been Used to Catch a Murder Suspect in China

The concept of someone using artificial intelligence to identify a criminal is no longer just a science fiction plotline.

Developments in AI, as well as facial recognition technology, have opened a Pandora’s box for the coming age of AI policing, something that is equally exciting and terrifying. Don’t believe us?

Recently a man accused of taking the life of his girlfriend in Southeast China was caught all thanks to an AI-driven facial recognition program. Welcome to the future.

Using facial recognition to catch a wanted man
Crime-fighting AI is a common trope in film, television, and anime. Minority Report, Person of Interest, and the critically acclaimed anime Psycho-Pass all have plots that center around an advanced technological system (usually an AI) that can identify potential criminals.

Though we are not quite there yet, AI is already becoming a powerful tool for police officers in China and is breaking new ground toward that possibility.

In the case of this Chinese murder, authorities in Fujian province caught the suspect after he tried to scan his victim’s face to apply for a loan. The 29-year-old was reportedly apprehended while trying to dispose of the body.

However, it was the lending company, Money Station, that made the discovery, or rather, the AI it uses to screen potential borrowers. Money Station uses artificial intelligence to verify applicants’ identities.

The online lending company’s software noticed something fishy going on when the victim’s lifeless face was being used to apply for the loan, finding no signs of movement in the victim’s eyes.
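
The report doesn’t say exactly how the software checks for movement, but one common liveness technique works roughly as sketched below: tracking the eye aspect ratio (EAR) across video frames, since a live face blinks while a static or lifeless one produces a flat signal. The landmark layout and threshold here are illustrative assumptions, not Money Station’s actual algorithm.

```python
# Sketch of a blink-based liveness check using the eye aspect ratio (EAR).
# Landmarks follow the usual 6-point eye layout; threshold is illustrative.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of landmark coordinates around one eye."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)

def looks_alive(ear_per_frame, blink_thresh=0.2):
    """A live face blinks, so EAR should dip below the threshold."""
    return min(ear_per_frame) < blink_thresh

open_eye = np.array([[0, 2], [2, 3], [4, 3], [6, 2], [4, 1], [2, 1]], float)
print(round(eye_aspect_ratio(open_eye), 2))   # 0.33 for this open eye

# Illustrative per-frame EAR values: a blink appears as a brief dip.
print(looks_alive([0.31, 0.30, 0.12, 0.29]))  # True  (blink seen)
print(looks_alive([0.31, 0.30, 0.31, 0.30]))  # False (no movement)
```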

The murderer allegedly strangled his girlfriend to death after a dispute about money. 

Artificial Intelligence is being used to fight crime 

Though this case owes more to coincidence than to active surveillance, there are far more dystopian examples.

Chinese authorities have made headlines over the past year as “China is using artificial intelligence for social control, and authorities hope to utilize autonomous surveillance technologies to predict crimes before they are committed,” reports the Express.

Though the stated aim is to predict crime, terrorism, and social unrest before they happen, some cannot help but draw parallels to the fiction mentioned above, especially Minority Report.

Even Meng Jianzhu, the head of the Chinese Communist Party’s central commission for political and legal affairs, is enthusiastic about the country’s use of AI to fight potential crime, stating:

“Artificial intelligence can complete tasks with a precision and speed unmatchable by humans, and will drastically improve the predictability, accuracy, and efficiency of social management.”

Do you think artificial intelligence can/should be used to fight or even identify potential criminals?

Source: https://interestingengineering.com/ai-facial-recognition-software-has-been-used-to-catch-a-murder-suspect-in-china

22 Aug 2019
Unlocking the future of blockchain innovation with privacy-preserving technologies

The origins of blockchain as many are familiar with it today can be traced back to the Bitcoin whitepaper, first published in 2008 by Satoshi Nakamoto, which offered a vision of a new financial system underscored by cryptography and trust in code.

Throughout the past decade, iterations of this technological infrastructure have gradually built out a diverse industry ecosystem, allowing for use cases that extend beyond cryptocurrencies and peer-to-peer transactions. From smart contracts to asset tokenization, across industries including gaming, supply chain, the Internet of Things (IoT), and real estate, the proliferation of use cases across verticals is a testament to the technology’s inherent versatility.

Yet, while projects remain fixated on addressing calls for mainstream adoption and enterprise implementation, existing infrastructural flaws continue to hinder these efforts. Beyond the criticisms directed at today’s open source, decentralized, public blockchains pertaining to scalability, there’s also the matter of privacy.

In most public chains today, transactions and on-chain data exchanges are fully visible to all nodes in the whole network, allowing for greater auditability and transparency. Across industries where the sharing of sensitive data is crucial, this transparency comes at a cost, posing a critical risk that far outweighs the benefits.

The tipping point of transparency
Amid the throes of digitalization, the importance of data protection schemes cannot be overstated. Throughout the years, the rise of data as a bona fide asset has led to its prominence as a key driver of economic growth across a myriad of sectors.

Historically, whether internally or with external third parties, data has been shared across centralized networks, leaving systems vulnerable to devastating breaches and security risks. To mitigate concerns about misuse, legislators have turned to stringent regulations and privacy compliance frameworks. Though regulations are a starting point, they fail to address fundamental infrastructural weaknesses.

In turn, blockchain provides an alternative, with decentralization as an added security measure that eliminates the threat of a single point of failure across a distributed network.

Simultaneously, the immutability of these permission-less networks preserves the provenance of data shared on the network, mitigating risks of tampering. As companies, big and small, transition to blockchain in the hopes of benefiting from efficient data sharing and ease in information transfer, the question of privacy in blockchains is often overlooked, forgotten amid demands for greater transparency and accountability.

To address this, many projects are now looking to employ privacy-preserving mechanisms on their infrastructures, ranging from non-interactive zero-knowledge proofs (zk-SNARKs) to encryption schemes such as secure Multi-Party Computation (MPC). In aggregate, these technologies encrypt data as it is shared and reveal only the specific elements pertinent to a given purpose.

A collective force
Data can be perceived as a historical record of behaviors – human, mechanical, or otherwise. From the personal details we voluntarily input into forms, to driving patterns transmitted from ride-sharing vehicles to train driverless cars, or the GPS coordinates transmitted from our phones to servers, millions of data points are transmitted every day, each producing an ongoing trail of activity. In the age of automation, these granular details play a critical role in optimizing mechanized processes such as those in machine learning, strategic decision making, and identifying valuable behavioral patterns.

In healthcare, for example, optimization is a key benefit derived from data collaboration. One can see this in the training of artificial intelligence systems in order to make more precise diagnoses or the treatment of rare diseases with trained algorithms. In these cases, large quantities of sensitive data such as electronic medical records provide valuable insights for research.

Take a scenario where a hospital does not have sufficient data to perform sophisticated healthcare research using machine learning; it would have access to a far larger pool of data if it could draw on other healthcare providers. With privacy-preserving encryption algorithms, patient information aggregated from several hospitals can be used to make a “collective” calculation without revealing any hospital’s inputs or the raw patient data.

This means that the information needed to conduct medical research is made available and can be processed algorithmically to produce a result, without the raw data being revealed to anyone. With blockchain, this computational output could be transmitted over the network, and users could access it with the necessary assurance that it has not been tampered with.
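
A minimal sketch of how such a “collective” calculation can work is additive secret sharing, one of the simplest MPC building blocks: each hospital splits its input into random-looking shares, and only sums of shares are ever exchanged. The hospital counts below are invented for illustration.

```python
# Additive secret sharing: three hospitals jointly compute a sum without
# any party seeing another's raw input. Values are illustrative.
import random

P = 2**61 - 1  # arithmetic is done modulo a large prime

def share(secret, n_parties):
    """Split a secret into n random shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

hospital_inputs = [120, 340, 95]  # e.g., eligible patients per hospital
n = len(hospital_inputs)

# Each hospital splits its input and sends one share to every party.
all_shares = [share(x, n) for x in hospital_inputs]

# Each party sums the shares it received; only these partial sums
# (which look uniformly random) are ever exchanged.
partial_sums = [sum(col) % P for col in zip(*all_shares)]

total = sum(partial_sums) % P
print(total)  # 555: the collective result, with no raw input revealed
```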

On the other hand, risk mitigation and fraud prevention is a prime benefit for the financial services sector, where industry-specific data privacy requirements often hinder the benefits of seamless data sharing. A potential use case would be in credit-rating investigations where there exists no current global standard for which institutions are responsible, ranging from third-party agencies to central banks. However, these institutions have a fragmented financial data profile of a given individual and often require additional information from other banks, leading to a lengthy, drawn-out process.

With a decentralized, blockchain platform equipped with privacy-preserving mechanisms, raw data can be securely encrypted by mathematically-provable cryptographic algorithms that are sent for cross-checking and computation – only the outcome of that computation would be shared and seen on the network, allowing banks to freely exchange only the relevant data required. These mechanisms help to encourage data collaboration across banks and other financial institutions efficiently and securely, without fear of unpredictable human factors that could potentially impact how the data is used.

The next chapter
With an increased emphasis on connectivity to allow for real-time optimization, risk mitigation, and personalization, data collaboration now serves as the backbone of today’s digital economy. While the advent of blockchain has largely improved the way in which information systems are able to share and transfer data, privacy concerns continue to stand in the way of maximizing the full potential of data exchanges. As incidents of misuse continue to shape public discourse and consumer confidence, a growing sentiment of distrust will only continue to fester unless critical changes are made on an infrastructural level.

The introduction of privacy-preserving mechanisms will ultimately result in benefits for businesses and users alike. Cost-efficiencies are gained as everyday processes are advanced and optimized, leading to a better understanding of consumer needs. Simultaneously, the reinvigoration of trust and value in the consumer-corporate relationship will come to underscore a greater emphasis on data ownership and sovereignty.

As we look to a future where data and privacy go hand in hand, we’ll come to see a modern data marketplace underscored by equitability and trust that has the potential to unlock new possibilities that enhance multiple areas of our everyday lives.

Source: https://www.helpnetsecurity.com/2019/08/22/blockchain-privacy/

21 Aug 2019
AI’s Memory Problem

Artificial Intelligence (AI), which allows computers to learn and perform specific tasks such as image recognition and natural language processing, is loosely inspired by the human brain. 

The challenge is that while the human brain has evolved over the last 3 million years, Artificial Neural Networks, the very “brain” of AI, have only been around for a few decades and aren’t nearly as finely tuned or as sophisticated as the gray matter in our heads, yet they are expected to perform tasks associated with human intelligence. So in our quest to create AI systems that can benefit society, in areas from image classification to voice recognition and autonomous driving, we need to find new paths to speed up the evolution of AI.

Part of this process is figuring out what types of memory work best for specific AI functions and discovering the best ways to integrate various memory solutions together. From this standpoint, AI faces two main memory limitations: density and power efficiency. AI’s need for power makes it difficult to scale AI outside of datacenters where power is readily available — particularly to the edge of the cloud where AI applications have the highest potential and value.

To enable AI at the edge, developments are being made toward domain-specific architectures that facilitate energy-efficient hardware. However, the area that will open the way for the most dramatic improvements, and where a large amount of effort should be concentrated, is the memory technology itself.

Driving AI to the Edge

Over the last half-century, a combination of public and private interest has fueled the emergence of AI and, most recently, the advent of Deep Learning (DL). DL models have — due to the exceptional perception capabilities they offer — become one of the most widespread forms of AI. A typical DL model must first be trained on massive datasets (typically on GPU servers in datacenters) to tune the network parameters, an expensive and lengthy process, before it can be deployed to make its own inferences based on input data (from sensors, cameras, etc.). DL models require such massive amounts of memory to train their many parameters that it becomes necessary to utilize off-chip memory. Hence, much of the energy cost during training is incurred because of the inefficient shuffling of gigantic data loads between off-chip DRAM and on-chip SRAM (an approach that often exceeds 50% of the total energy use). Once the model is trained, the trained network parameters must be made available to perform inference tasks in other environments.
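
To make the scale concrete, here is a back-of-the-envelope sketch of the training state for a hypothetical model. The parameter count is an assumption, and the 16-bytes-per-parameter figure reflects plain FP32 training with the Adam optimizer (weight, gradient, and two moment estimates).

```python
# Back-of-the-envelope sketch of why DL training spills into off-chip
# memory. Parameter count is an illustrative assumption.
params = 100_000_000         # a hypothetical 100M-parameter network
bytes_per_param = 4 + 4 + 8  # FP32 weight + gradient + two Adam moments

train_gib = params * bytes_per_param / 2**30
print(f"~{train_gib:.1f} GiB for parameter state alone")  # ~1.5 GiB

# Typical on-chip SRAM budgets are tens of MiB, so this state (plus
# activations, which usually dominate) must live in off-chip DRAM.
```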

Until recently AI applications had been confined to datacenters because of their large energy consumption and space requirements. However, over the past few years the growing demand for AI models at high scale, low latency and low cost has been pushing these applications to be run at the edge, namely in IoT and on mobile devices where power and performance are highly constrained.

This is driving a rapidly expanding hardware ecosystem that supports edge applications for inference tasks and even a nascent effort at enabling distributed training (e.g., Google’s Federated Learning models). These new architectures are primarily driven by speech recognition and image classification applications.

The growing demand combined with the increasing complexity of DL models is unsustainable in that it is causing a widening gap between what companies require in terms of energy dissipation, latency and size, and what current memory is capable of achieving. With the end of Moore’s Law and Dennard Scaling in the rear-view mirror exacerbating the issue, the semiconductor industry needs to diversify toward new memory technologies to address this paradigm shift and fulfill the demand for cheap, efficient AI hardware.

The Opportunity for New Memories

The AI landscape is a fertile ground for innovative memories with unique and improving characteristics and presents opportunities in both the datacenter and at the edge. New memory technologies can meet the demand for memory that will allow edge devices to perform DL tasks locally by both increasing the memory density and improving data access patterns, so that the need for transferring data to and from the cloud is minimized. The ability to perform perception tasks locally, with high accuracy and energy efficiency is key to the further advancement of AI.

This realization has led to significant investment in alternative memory technologies, including NAND flash, 3D XPoint (Intel’s Optane), Phase-Change Memory (PCM), Resistive Memory (ReRAM), Magneto-Resistive Memory (MRAM) and others that offer benefits such as energy efficiency, endurance and non-volatility. While facilitating AI at the edge, such memories may also allow cloud environments to perform DL model training and inference more efficiently. Additional benefits include the potential improvements in reliability and processing speed. These improvements in the memory technology will make it possible to circumvent the current hardware limitations of devices at the edge.

In particular, certain new memories offer distinct benefits due to specific inherent or unique qualities of the technology for a number of AI applications. ReRAM and PCM offer advantages for inference applications due to their superior speed (compared to Flash), density and non-volatility. MRAM offers similar advantages to ReRAM and PCM; furthermore, it exhibits ultra-high endurance such that it can compete with and complement SRAM as well as function as Flash replacement. Even at these early stages of their lifetime, these new memory technologies show enormous potential in the field of AI.

And although we are still decades away from implementing the AI we’ve been promised in science fiction, we are presently on the cusp of significant breakthroughs that will affect many aspects of our lives and provide new efficient business models. As Rockwell Anyoha writes in a Harvard special edition blog on AI, “In the first half of the 20th century, science fiction familiarized the world with the concept of artificially intelligent robots. It began with the ‘heartless’ Tin Man from the Wizard of Oz and continued with the humanoid robot that impersonated Maria in Metropolis.”

The next competitive battle is being fought in memory, and as a result, there is a tremendous amount of time, money and brain power being dedicated to figuring out how to fix AI’s memory problem. Ultimately, while these computerized brains don’t yet hold a candle to our human brains — especially pertaining to energy efficiency — it is that very uniqueness of our own minds that enables our capacity to create solutions to our many fantasies and bring artificial intelligence to life.

Source: https://www.eetimes.com/author.asp?section_id=36&doc_id=1335050#

18 Aug 2019
A.I. Is Learning From Humans. Many Humans.

BHUBANESWAR, India — Namita Pradhan sat at a desk in downtown Bhubaneswar, India, about 40 miles from the Bay of Bengal, staring at a video recorded in a hospital on the other side of the world.

The video showed the inside of someone’s colon. Ms. Pradhan was looking for polyps, small growths in the large intestine that could lead to cancer. When she found one — they look a bit like a slimy, angry pimple — she marked it with her computer mouse and keyboard, drawing a digital circle around the tiny bulge.

She was not trained as a doctor, but she was helping to teach an artificial intelligence system that could eventually do the work of a doctor.

Ms. Pradhan was one of dozens of young Indian women and men lined up at desks on the fourth floor of a small office building. They were trained to annotate all kinds of digital images, pinpointing everything from stop signs and pedestrians in street scenes to factories and oil tankers in satellite photos.

A.I., most people in the tech industry would tell you, is the future of their industry, and it is improving fast thanks to something called machine learning. But tech executives rarely discuss the labor-intensive process that goes into its creation. A.I. is learning from humans. Lots and lots of humans.

Before an A.I. system can learn, someone has to label the data supplied to it. Humans, for example, must pinpoint the polyps. The work is vital to the creation of artificial intelligence like self-driving cars, surveillance systems and automated health care.
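
To picture what “labeling” produces, a single annotation for the polyp task might be stored as something like the record below. The schema and field names are hypothetical; every pipeline defines its own format.

```python
# A hypothetical example of one labeled training record for the
# polyp-marking task described above. Schema and field names are
# invented for illustration only.
annotation = {
    "video_id": "colonoscopy_0042",
    "frame": 1873,
    "labels": [
        {
            "category": "polyp",
            "region": {"cx": 412, "cy": 286, "radius": 17},  # drawn circle
            "annotator": "worker_117",
        }
    ],
}
print(annotation["labels"][0]["category"])  # "polyp"
```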

Tech companies keep quiet about this work. And they face growing concerns from privacy activists over the large amounts of personal data they are storing and sharing with outside businesses.

Earlier this year, I negotiated a look behind the curtain that Silicon Valley’s wizards rarely grant. I made a meandering trip across India and stopped at a facility across the street from the Superdome in downtown New Orleans. In all, I visited five offices where people are doing the endlessly repetitive work needed to teach A.I. systems, all run by a company called iMerit.

There were intestine surveyors like Ms. Pradhan and specialists in telling a good cough from a bad cough. There were language specialists and street scene identifiers. What is a pedestrian? Is that a double yellow line or a dotted white line? One day, a robotic car will need to know the difference.

What I saw didn’t look very much like the future — or at least the automated one you might imagine. The offices could have been call centers or payment processing centers. One was a timeworn former apartment building in the middle of a low-income residential neighborhood in western Kolkata that teemed with pedestrians, auto rickshaws and street vendors.

In facilities like the one I visited in Bhubaneswar and in other cities in India, China, Nepal, the Philippines, East Africa and the United States, tens of thousands of office workers are punching a clock while they teach the machines.

Tens of thousands more workers, independent contractors usually working in their homes, also annotate data through crowdsourcing services like Amazon Mechanical Turk, which lets anyone distribute digital tasks to independent workers in the United States and other countries. The workers earn a few pennies for each label.

Based in India, iMerit labels data for many of the biggest names in the technology and automobile industries. It declined to name these clients publicly, citing confidentiality agreements. But it recently revealed that its more than 2,000 workers in nine offices around the world are contributing to an online data-labeling service from Amazon called SageMaker Ground Truth. Previously, it listed Microsoft as a client.

Read more: https://www.nytimes.com/2019/08/16/technology/ai-humans.html