Month: December 2018

30 Dec 2018

2018 is the year AI got its eyes

Computer scientists have spent more than two decades teaching, training and developing machines to see the world around them. Only recently have the artificial eyes begun to match (and occasionally exceed) their biological predecessors. 2018 has seen marked improvement in two areas of AI image processing: facial-recognition technology in both commerce and security, and image generation in — of all fields — art.

In September of this year, a team of researchers from Google’s DeepMind division published a paper outlining the operation of their newest Generative Adversarial Network. Dubbed BigGAN, this image-generation engine leverages Google’s massive cloud computing power to create extremely realistic images. But, even better, the system can be leveraged to generate dreamlike, almost nightmarish, visual mashups of objects, symbols and virtually anything else you train the system with. Google has already released the source code into the wilds of the internet and is allowing creators from anywhere in the world to borrow its processing capabilities to use the system as they wish.

“I’ve been really excited by all of the interactive web demos that people have started to turn these algorithms into,” Janelle Shane, who is a research scientist in optics by day and a neural-network programmer by night, told Engadget. She points out that in the past, researchers would typically publish their findings and call it a day. You’d be lucky to find even a YouTube video on the subject.

“But now,” she continued, “they will publish their model, they’ll publish their code and what’s even greater for the general creative world is that they will publish a kind of web application where you can try out their model for yourself.”

This is exactly what Joel Simon, developer of GANbreeder, has done. The web app lets users generate and remix BigGAN images over multiple generations to produce truly unique results. “With Simon’s web interface, you can look at what happens when you’re not generating pictures of just symbols, for example,” Shane points out. “But you’re generating something that’s a cross between a symbol and a comic book and a shark, for example.”
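The “crossing” Shane describes comes down to arithmetic on the generator’s conditioning inputs. As a rough, hypothetical sketch (the names below are illustrative, not BigGAN’s actual API), blending two categories amounts to interpolating their class vectors before sampling:

```python
import numpy as np

def mix_class_vectors(class_a, class_b, alpha):
    """Linearly interpolate two class-conditioning vectors.

    alpha=1.0 yields pure class_a, alpha=0.0 pure class_b, and values
    in between produce the hybrid images GANbreeder exposes.
    """
    a, b = np.asarray(class_a, float), np.asarray(class_b, float)
    return alpha * a + (1 - alpha) * b

# Toy 4-class label space (hypothetical; BigGAN conditions on 1,000 ImageNet classes)
shark = np.array([1.0, 0.0, 0.0, 0.0])
comic = np.array([0.0, 1.0, 0.0, 0.0])

half_shark_half_comic = mix_class_vectors(shark, comic, 0.5)
# A conditional generator would then be called with this mixed vector
# plus a random latent z: image = generator(z, half_shark_half_comic)
```

Repeating the blend over “generations,” each time mixing the current vector with a fresh class, is essentially the remix loop the web app provides.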


29 Dec 2018

Human Brain Project: EU’s shocking €1BILLION plan to grow SILICON BRAINS in a lab

A EUROPEAN UNION (EU) funded project is pioneering cutting-edge research into the human brain and is inspiring artificial intelligence breakthroughs, its scientific director has exclusively revealed.

The Human Brain Project (HBP) is the EU’s €1 billion (£899 million) flagship science initiative working on developing human-machine hybrids. The ambitious enterprise’s primary aim is to simulate the human brain using computers, improving science and technology along the way. Professor Katrin Amunts, HBP’s scientific director, believes tangible results are starting to arrive halfway through the project’s ten-year run.

She said: “We are trying to emulate the capabilities of the brain, we are trying to understand the brain’s principles and the organisational rules behind cognitive function.”


“What we are trying to do at HBP is try and understand how we can use our knowledge about brain organisation and transfer it, for instance, to new computing devices called neuromorphic devices.”

The Human Brain Project is developing two major neuromorphic machines: Manchester University’s SpiNNaker and the University of Heidelberg’s BrainScaleS.


26 Dec 2018

AI can help businesses stay ahead of the curve

Disruptive new-generation technologies have dramatically changed the way businesses run. Enterprises are realigning their strategies to retain their market position and gain advantages through cutting-edge technologies such as artificial intelligence (AI), robotics and IoT. Companies that adopt these exponential technologies early have a better chance of staying ahead of the curve and the competition. With its potential to augment human capabilities and help businesses improve productivity, AI has the power to transform businesses across industries and sectors.

In the wake of the fourth industrial revolution, artificial intelligence and automation are the new norm for today’s enterprise. What once was a competitive edge is now becoming a prerequisite for business growth, efficiency and productivity. It is no longer enough to just implement AI; it is about ensuring that AI is effectively integrated across all business platforms.
The focus needs to move away from what technologies are being offered and instead focus on how these technologies are impacting a company’s specific use-cases and enhancing their outcomes. The goal should be to build an AI-enabled organisation and not look at AI as an add-on.

Organisations should build a robust AI strategy that embeds AI in the way they operate. Forrester reports that 40 “insight-driven companies” are set to grab $1.8 trillion by 2021, and many on that list are young companies less than eight years old.

What unifies them? Their obsession with data and AI. There are essentially two types of organisations with respect to AI adoption. First, the “talkers”: organisations wetting their feet with AI initiatives, taking small, risk-averse steps in organisational silos, in some cases getting tangled in bureaucracy, and in a minority of cases unfortunately focusing more on press coverage than actual outcomes. Then the “do-ers”: the insights-driven companies that have integrated, or are on ..

24 Dec 2018

Why Has It Become Risky To Be An AI-Based Software Startup?

In the last two decades, the software industry provided a healthy breeding ground for incubating new businesses and ideas. From solving the day-to-day problems of end users to building complementary tools for software developed by large companies, startups thrived in this disruptive market, competing to differentiate on the value they deliver to customers.

But the changing dynamics of the industry have made it extremely risky to be an independent software vendor or a startup in the cloud and AI market.

There was a time when large platform companies delivering enterprise software chose not to compete with ISVs. The primary reason for this was to let partners survive while the platform companies focused on solving complex problems for enterprise customers. For example, many companies complemented the Microsoft Office suite with additional tools and plugins. Microsoft enabled and encouraged the ecosystem to grow around the Office platform, which directly helped the industry. The same was true of Oracle and SAP when they shipped shrink-wrapped enterprise software while closing million-dollar licensing deals.

The other reason that influenced the growth of third-party ISVs was that large platform companies were secretive about their differentiating factor. The secret sauce from internal research teams was used to improve products and platforms but was never directly exposed to customers. Startups could deliver a less sophisticated version of the same feature as a standalone product at an affordable price. Multiple software components embedded in enterprise software were available from ISVs for a price. Enterprise software companies never bothered to productize those features as standalone products; they wanted to stay focused on the larger platform rather than spreading themselves too thin.


23 Dec 2018

3 ways AI will improve healthcare in 2019

In a recent piece, I explained why AI’s complexity shouldn’t be a deterrent for its adoption. In fact, I went as far as stating that artificial intelligence will be just as disruptive as the internet was. This is a view I am doubling down on.

As far as artificial intelligence is concerned, the industry most likely to be disrupted is healthcare. Why will AI have such an impact there? Facts like the ones below are why:

  • Hospital error — which can be significantly addressed and prevented by AI — is the third leading cause of patient death
  • Depending on what source you pay attention to, up to 440,000 Americans die annually from preventable medical errors
  • According to data from the National Safety Council, 2016 was the deadliest year on American roads in a decade, with 40,000 deaths. That means preventable deaths from hospital error run 11 times the number of annual road deaths in the United States
  • 86 percent of mistakes in the healthcare industry are purely administrative and preventable
  • Effective application of AI is projected to result in annual savings of $150 billion in the US healthcare industry
  • The AI health market is projected to grow more than 10 times within the next five years

When we look at both the physical and economic implications, the healthcare industry is one industry that is very much in need of an artificial intelligence disruption. That said, there are certain key healthcare AI trends to pay attention to in 2019:

A rise of AI in medical imaging

One AI trend to watch out for in the healthcare industry is the rise of artificial intelligence in medical imaging.

Besides the fact that AI will result in improved accuracy of medical imaging diagnosis, it will also make it much easier to personalize treatment planning and transmit results. Not only will this enhance productivity in the radiologist community, but profitability can be attained sooner than expected.

While AI was once viewed as a threat in the medical imaging community, that is no longer the case. Demand for, and interest in, AI in the radiologist community has increased significantly in the recent past, and investment in AI imaging technology is on the rise.

Investments from tech giants like Tencent, Alibaba, and the partnership between NVIDIA and Nuance are just a few examples of high-profile investments in AI medical imaging in recent times.

In fact, research has revealed that there has been a rapid increase in capital investment in companies developing machine learning solutions for medical imaging in 2018. This trend is expected to continue well into 2019.

Rise of AI involvement in healthcare communication

Whether it is patient-physician communication or communication between different parts of the healthcare system, the industry’s communication problem is one that AI can address most easily and quickly, and one in which we can expect to see serious developments as early as 2019.

Statistics show that 80 percent of serious medical errors are due to miscommunication between caregivers during patient transfer. AI can, and will, more effectively address these communication issues.

Already, we’re seeing signs of progress in healthcare when it comes to AI and communication: Apple has launched the ResearchKit and CareKit frameworks, which allow researchers and developers to create medical apps that communicate with and monitor patients.

This has resulted in apps like mPower, which uses finger tapping and gait analysis to study patients suffering from Parkinson’s disease. Healthcare centers like the Mayo Clinic are already embracing AI in patient communication: besides making it possible to schedule appointments with physicians, the clinic’s app can also deliver test results to patients and enable in-app doctor-patient communication.

In 2019, we can expect to see significant progress when it comes to the use of AI in healthcare communication: whether it is communication between health centers and patients through the use of apps and chatbots, making it easy to schedule appointments with healthcare centers, or simply ensuring fast and efficient delivery of information requested by patients.

More developments in medical dosage error reduction

According to research by Accenture, effective application of AI to medical dosage error reduction can result in a savings of $16 billion for the healthcare industry by 2026.

Medication and dosage errors are among the leading causes of unanticipated but easily preventable death or serious injury in the healthcare industry today.

AI is most renowned for its ability to process and interpret vast amounts of data, and this will serve as a major advantage in 2019 and beyond, making it easier to administer medications at precisely the right dosage.


22 Dec 2018

The case for taking AI seriously as a threat to humanity

That might have people asking: Wait, what? But these grand worries are rooted in research. Along with Hawking and Musk, prominent figures at Oxford and UC Berkeley and many of the researchers working in AI today believe that advanced AI systems, if deployed carelessly, could end all life on earth.

This concern has been raised since the dawn of computing. But it has come into particular focus in recent years, as advances in machine-learning techniques have given us a more concrete understanding of what we can do with AI, what AI can do for (and to) us, and how much we still don’t know.

There are also skeptics. Some of them think advanced AI is so distant that there’s no point in thinking about it now. Others are worried that excessive hype about the power of their field might kill it prematurely. And even among the people who broadly agree that AI poses unique dangers, there are varying takes on what steps make the most sense today.

The conversation about AI is full of confusion, misinformation, and people talking past each other — in large part because we use the word “AI” to refer to so many things. So here’s the big picture on how artificial intelligence might pose a catastrophic threat, in nine questions:

Artificial intelligence is the effort to create computers capable of intelligent behavior. It is a broad catchall term, used to refer to everything from Siri to IBM’s Watson to powerful technologies we have yet to invent.

Some researchers distinguish between “narrow AI” — computer systems that are better than humans in some specific, well-defined field, like playing chess or generating images or diagnosing cancer — and “general AI,” systems that can surpass human capabilities in many domains. We don’t have general AI yet, but we’re starting to get a better sense of the challenges it will pose.

Narrow AI has seen extraordinary progress over the past few years. AI systems have improved dramatically at translation, at games like chess and Go, at important research biology questions like predicting how proteins fold, and at generating images. AI systems determine what you’ll see in a Google search or in your Facebook Newsfeed. They are being developed to improve drone targeting and detect missiles.

But narrow AI is getting less narrow. Once, we made progress in AI by painstakingly teaching computer systems specific concepts. To do computer vision — allowing a computer to identify things in pictures and video — researchers wrote algorithms for detecting edges. To play chess, they programmed in heuristics about chess. To do natural language processing (speech recognition, transcription, translation, etc.), they drew on the field of linguistics.

But recently, we’ve gotten better at creating computer systems that have generalized learning capabilities. Instead of mathematically describing detailed features of a problem, we let the computer system learn that by itself. While once we treated computer vision as a completely different problem from natural language processing or platform game playing, now we can solve all three problems with the same approaches.
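That shift, one generic learner in place of per-task heuristics, can be illustrated with a deliberately tiny sketch (logistic regression rather than the deep networks actually used): the same training function fits two unrelated toy tasks, with no task-specific rules coded in.

```python
import numpy as np

def train_generic(X, y, lr=0.5, steps=2000):
    """One learning routine for any task: logistic regression fit by
    gradient descent. Nothing here mentions edges, chess heuristics,
    or linguistics; the system learns the features from the data."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)        # same update rule for every task
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(X, y, w, b):
    return np.mean(((X @ w + b) > 0).astype(int) == y)

rng = np.random.default_rng(0)

# "Vision-like" toy task: label a two-pixel image by total brightness
X_img = rng.uniform(size=(200, 2))
y_img = (X_img.sum(axis=1) > 1.0).astype(float)

# "Language-like" toy task: label a document by one word-frequency feature
X_txt = rng.uniform(size=(200, 2))
y_txt = (X_txt[:, 0] > 0.5).astype(float)

w_img, b_img = train_generic(X_img, y_img)
w_txt, b_txt = train_generic(X_txt, y_txt)
```

Both models end up accurate, yet the training code never had to be told what brightness or word frequency means; that reuse of one optimization procedure across domains is the point the paragraph above is making.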


20 Dec 2018

Brain power: Mind-controlled drones focus of USF research

USF graduate student Dante Tezza is pretty good at flying a drone. He literally uses his head: he controls the drone with his brain.

“These are the sensors,” he says, showing us a round, plastic circle that fits on his head as he flies the small drone in a lab.

His brainwaves are transmitted to the drone. When he imagines walking, the drone flies forward. When he thinks of sitting down, the drone lands. He says you can learn to make it fly in a day.

“But to master it, it may take you weeks or even months of training,” says Tezza.
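The loop described above, imagined movements decoded from brainwaves into flight commands, can be sketched in a few lines. Everything here is hypothetical and illustrative: a real BCI system would use a trained classifier over multichannel EEG, not the stand-in scoring below.

```python
# Hypothetical sketch of a BCI control loop: decode an imagined movement
# from EEG-derived features, then translate it into a drone command.
COMMAND_MAP = {
    "imagine_walking": "fly_forward",
    "imagine_sitting_down": "land",
    "rest": "hover",
}

INTENTS = list(COMMAND_MAP)  # fixed label order for the stand-in decoder

def decode_intent(eeg_features):
    """Stand-in for a trained classifier: one score per intent, highest
    score wins. A real system would compute these scores from band-power
    features across many electrodes, after weeks of user training."""
    best = max(range(len(INTENTS)), key=lambda i: eeg_features[i])
    return INTENTS[best]

def control_step(eeg_features):
    """One tick of the loop: features in, drone command out."""
    return COMMAND_MAP[decode_intent(eeg_features)]
```

With this toy mapping, a dominant “walking” score steers the drone forward, and a dominant “sitting down” score triggers a landing.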

He and fellow computer science and engineering graduate student Sarah Garcia are working on their PhDs with BCI, or brain-computer interface.

They’re searching for some USF students to participate in the first International Brain Controlled Drone Races.

“The students who get the fastest time will get to compete in our actual event in February,” says Garcia.

It’s being held at the Yuengling Center on February 9.

What does it take to be the fastest brain drone flyer?

“That’s a good research question,” smiles Tezza. “We don’t have an answer yet.”

It could come down to the pilot’s ability to focus, or to how the sensors make contact with an individual’s brain. They believe drones are just the beginning of brain-controlled technology, which could do everything from opening doors to mowing lawns to helping disabled people.

The power of the brain could be the greatest equalizer.

“It doesn’t matter if you are a man or a woman, or if you’re in a wheelchair. All that matters is how much you can focus your brain waves,” says Tezza.

They will soon advertise how USF students can come to their lab and try out for the 16-member brain drone team on a simulator game they’ve designed.

They say to watch social media for details on when the try-outs will happen. In the meantime, think drones and practice focusing your brain.


19 Dec 2018

Redesigning HR in the era of disruptive technologies

The third edition of the FICCI HR Conference 2018 debated why successful digital transformation sits at the heart of HR, and how the HR function can be an evangelist for seeding cultural change within organizations and embrace forward-looking technologies to successfully ride the wave of digitalization.

The digital age is moving at an unprecedented rate, fundamentally transforming the way organizations operate and requiring HR executives to embrace disruption and redesign their talent-management strategies to succeed. Digitalization has fundamentally reshaped value chains and altered consumer behaviors and expectations across sectors, be it defense, education, financial services, government services, healthcare, IT & ITES, manufacturing, oil and gas, retail or telecommunications. No sector can afford to be left behind. The payoffs are phenomenal, from accelerated profitability and improved customer satisfaction to spikes in speed-to-market. Most intriguing of all, this agenda is being driven from the top and is a priority in most boardroom conversations.

However, it is equally important to note that successful digitalization in any organization goes beyond investment and technology. It rests on the most crucial function of the organization: Human Resources. The Federation of Indian Chambers of Commerce and Industry (FICCI) recently hosted the third edition of its HR Conference 2018 on the theme “Redesigning HR in the era of disruptive technology.” The event began with an inaugural session featuring an eclectic mix of speakers, including Anna Roy, Advisor (DM&A, Industry), NITI Aayog; Ranjan Mohapatra, Director HR, Indian Oil Corporation; and Sreekanth Arimanithaya, Senior Vice President, Integrated Workforce Management and India Co-Managing Director, DXC Technologies.


Speaking at the conference, Anna Roy said: “Today the use of disruptive technologies like Artificial Intelligence (AI) is pervasive in all sectors and verticals. Human Resources, being one of the most important verticals, is bringing new opportunities to the fore and giving rise to new areas, leading to better results.” This makes approaching and investing in new technologies imperative to driving digital transformation. Experts at the event also discussed the need to reskill the workforce and connect with academia and other parts of the ecosystem to ensure that companies hire the right people to gain competitive advantage.

The conference also touched on various facets of digital transformation, including how technologies like AI and machine learning are reshaping the employee lifecycle (recruitment, workforce planning, performance management, rewards and engagement) while integrating these elements with the uberization of work. The event also saw the launch of the report “Are we ready for future?” by FICCI in partnership with Helix, which assessed HR readiness for the new digital wave. The report revealed that organizations are confident about taking on the digital challenge but need to put some building blocks in place to focus on strategic benefits. It also noted that HR executives need to see Digital HR both as a means to achieve operational efficiency and as a tool for better decision-making.

The third edition of the FICCI HR Conference 2018 offered rich content: power-packed sessions with live examples and case studies involving practitioners and experts from across industry, ensuring takeaways for everyone. The event reiterated the mantra that the cohesive, collective efforts of both business and HR are crucial for transformation in the digital era.


18 Dec 2018

New AI system mimics how humans visualize and identify objects

UCLA and Stanford University engineers have demonstrated a computer system that can discover and identify the real-world objects it “sees” based on the same method of visual learning that humans use.

The system is an advance in a type of technology called “computer vision,” which enables computers to read and identify visual images. It could be an important step toward general artificial intelligence systems: computers that learn on their own, are intuitive, make decisions based on reasoning and interact with humans in a much more human-like way. Although current AI systems are increasingly powerful and capable, they are task-specific, meaning that their ability to identify what they see is limited by how much they’ve been trained and programmed by humans.

Even today’s best computer vision systems cannot create a full picture of an object after seeing only certain parts of it—and the systems can be fooled by viewing the object in an unfamiliar setting. Engineers are aiming to make computer systems with those abilities—just like humans can understand that they are looking at a dog, even if the animal is hiding behind a chair and only the paws and tail are visible. Humans, of course, can also easily intuit where the dog’s head and the rest of its body are, but that ability still eludes most artificial intelligence systems.

Current computer vision systems are not designed to learn on their own. They must be trained on exactly what to learn, usually by reviewing thousands of images in which the objects they’re trying to identify are labeled for them. Computers, of course, also can’t explain their rationale for determining what the object in a photo represents: AI-based systems don’t build an internal picture or a common-sense model of learned objects the way humans do.

The engineers’ new method, described in the Proceedings of the National Academy of Sciences, shows a way around those shortcomings.


17 Dec 2018


Artificial intelligence systems for health care have the potential to transform the diagnosis and treatment of diseases, which could help ensure that patients get the right treatment at the right time, but opportunities and challenges are ahead.

In a new article in the Journal of the American Medical Association, two AI experts discuss the best uses for AI in health care and outline some of the challenges for implementing the technology in hospitals and clinics.

In health care, artificial intelligence relies on the power of computers to sift through and make sense of reams of electronic data about patients—including ages, medical histories, health status, test results, medical images, DNA sequences, and many other sources of health information.

AI excels at the complex identification of patterns in these reams of data, and can do so at a scale and speed beyond human capacity. The hope is that this technology can be harnessed to help doctors and patients make better health-care decisions.

Here, the authors—Philip Payne, a professor and director of the Institute for Informatics, and Thomas M. Maddox, a professor of medicine and director of the Health Systems Innovation Lab, both at Washington University in St. Louis—answer questions about AI, including its capabilities and limitations, and how it might change the way doctors practice.