Author: Manahel Thabet

20 Sep 2018

Thinking Like a Human: What It Means to Give AI a Theory of Mind

Last month, a team of self-taught AI gamers lost spectacularly to human professionals in a highly anticipated galactic melee. Played as part of the International Dota 2 Championships in Vancouver, Canada, the match showed that in broader strategic thinking and collaboration, humans remain on top.

The AI was a series of algorithms developed by the Elon Musk-backed non-profit OpenAI. Collectively dubbed the OpenAI Five, the algorithms use reinforcement learning to teach themselves how to play the game—and collaborate with each other—from scratch.

Unlike chess or Go, the fast-paced, multi-player video game Dota 2 is considered much harder for computers. Complexity is only part of it—the key is for a group of AI algorithms to develop a type of “common sense”: an intuition about what others are planning to do, and the ability to respond in kind toward a common goal.

“The next big thing for AI is collaboration,” said Dr. Jun Wang at University College London. Yet today, even state-of-the-art deep learning algorithms flail in the type of strategic reasoning needed to understand someone else’s incentives and goals—be it another AI or human.

What AI needs, said Wang, is a type of deep communication skill that stems from a critical human cognitive ability: theory of mind.

Theory of Mind as a Simulation

By the age of four, children usually begin to grasp one of the fundamental principles in society: that their minds are not like other minds. They may have different beliefs, desires, emotions, and intentions.

And the critical part: by picturing themselves in other peoples’ shoes, they may begin to predict other peoples’ actions. In a way, their brains begin running vast simulations of themselves, other people, and their environment.

By allowing us to roughly grasp other peoples’ minds, theory of mind is essential for human cognition and social interactions. It’s behind our ability to communicate effectively and collaborate towards common goals. It’s even the driving force behind false beliefs—ideas that people form even though they deviate from the objective truth.

When theory of mind breaks down—as it sometimes does in autism—essential “human” skills such as story-telling and imagination also deteriorate.

To Dr. Alan Winfield, a professor in robotic ethics at the University of West England, theory of mind is the secret sauce that will eventually let AI “understand” the needs of people, things, and other robots.

“The idea of putting a simulation inside a robot… is a really neat way of allowing it to actually predict the future,” he said.

Unlike machine learning, in which multiple layers of neural nets extract patterns and “learn” from large datasets, Winfield is promoting something entirely different. Rather than relying on learning, the AI would be pre-programmed with an internal model of itself and the world that allows it to answer simple “what-if” questions.

For example, when navigating down a narrow corridor toward an oncoming robot, the AI could simulate turning left, turning right or continuing on its path, and determine which action will most likely avoid a collision. This internal model essentially acts like a “consequence engine,” said Winfield, a sort of “common sense” that helps guide its actions by predicting those of others around it.
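Winfield's consequence engine can be caricatured in a few lines of code. The sketch below is purely illustrative (the corridor model, the step count and the action set are invented for this example, not taken from his robot): the agent rolls its internal model forward for each candidate action and keeps the one that preserves the largest gap.

```python
def simulate(my_pos, other_pos, action, steps=5):
    """Roll the internal world model forward; return the closest approach."""
    # Hypothetical 1-D corridor: positions are (lane, distance) pairs.
    lane, dist = my_pos
    o_lane, o_dist = other_pos
    if action == "left":
        lane -= 1
    elif action == "right":
        lane += 1
    closest = float("inf")
    for _ in range(steps):
        dist += 1      # we move forward
        o_dist -= 1    # the other robot approaches
        if lane == o_lane:
            closest = min(closest, abs(dist - o_dist))
    return closest

def choose_action(my_pos, other_pos):
    # Prefer the action whose simulated future keeps the largest gap.
    actions = ["left", "straight", "right"]
    return max(actions, key=lambda a: simulate(my_pos, other_pos, a))

print(choose_action((0, 0), (0, 10)))  # swerves out of the shared lane
```

The point of the sketch is the shape of the computation, not the physics: each "what-if" is answered by the internal model before any real action is taken.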

In a paper published earlier this year, Winfield showed a prototype robot that could in fact achieve this goal. By anticipating the behavior of others around it, the robot successfully navigated a corridor without collisions. The process isn’t efficient, though—the “mindful” robot took over 50 percent longer to complete its journey than it did without the simulation.

But to Winfield, the study is a proof-of-concept that his internal simulation works: it’s “a powerful and interesting starting point in the development of artificial theory of mind,” he concluded.

Eventually, Winfield hopes to endow AI with a sort of story-telling ability. The internal model that the AI has of itself and others would let it simulate different scenarios and, crucially, tell a story of what its intentions and goals were at the time.

This is drastically different than deep learning algorithms, which normally cannot explain how they reached their conclusions. The “black box” model of deep learning is a terrible stumbling block towards building trust in these systems; the problem is especially notable for care-giving robots in hospitals or for the elderly.

An AI armed with theory of mind could simulate the mind of its human companions to tease out their needs. It could then determine appropriate responses—and justify those actions to the human—before acting on them. Less uncertainty results in more trust.

Theory of Mind In a Neural Network

DeepMind takes a different approach: rather than a pre-programmed consequence engine, they developed a series of neural networks that display a sort of theory of mind.

The AI, “ToMnet,” can observe and learn from the actions of other neural networks. ToMnet is a collective of three neural nets: the first learns the tendencies of other AIs based on a “rap sheet” of their past actions. The second forms a general concept of their current state of mind—their beliefs and intentions at a particular moment. The output of both networks then feeds into the third, which predicts the AI’s actions based on the situation. As with other deep learning systems, ToMnet gets better with experience.
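The three-part structure described above can be sketched in miniature. Everything below is a toy stand-in: the feature sizes, the averaging "networks" and the random data are invented for illustration, while DeepMind's actual ToMnet uses trained neural networks rather than fixed functions.

```python
import math
import random

random.seed(0)

def mean_vec(rows):
    # Average a list of equal-length feature vectors.
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def character_net(past_trajectories):
    # Summarize the agent's "rap sheet" into a character embedding.
    return mean_vec(past_trajectories)

def mental_state_net(current_trajectory, char_embedding):
    # Infer current beliefs/intentions from recent steps plus character.
    return mean_vec(current_trajectory) + char_embedding

def prediction_net(state, mental_embedding, weights):
    # Map (world state, mental state) to a softmax over next actions.
    x = state + mental_embedding
    logits = [sum(w * v for w, v in zip(row, x)) for row in weights]
    m = max(logits)
    exp = [math.exp(l - m) for l in logits]
    total = sum(exp)
    return [e / total for e in exp]

past = [[random.gauss(0, 1) for _ in range(4)] for _ in range(10)]
current = [[random.gauss(0, 1) for _ in range(4)] for _ in range(3)]
state = [random.gauss(0, 1) for _ in range(4)]
weights = [[random.gauss(0, 1) for _ in range(12)] for _ in range(5)]

char = character_net(past)
mental = mental_state_net(current, char)
probs = prediction_net(state, mental, weights)
print([round(p, 3) for p in probs])  # a distribution over 5 possible actions
```

The composition is the interesting part: past behavior feeds the character summary, which feeds the mental-state estimate, which (with the current situation) feeds the action prediction.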

In one experiment, ToMnet “watched” three AI agents maneuver around a room collecting colored boxes. The AIs came in three flavors: one was blind, in that it couldn’t compute the shape and layout of the room. Another was amnesiac; these guys had trouble remembering their last steps. The third could both see and remember.

After training, ToMnet began to predict the flavor of an AI by watching its actions—the blind tend to move along walls, for example. It could also correctly predict the AI’s future behavior, and—most importantly—understand when an AI held a false belief.

For example, in another test the team programmed one AI to be near-sighted and changed the layout of the room. Better-sighted agents rapidly adapted to the new layout, but the near-sighted guys stuck to their original paths, falsely believing that they were still navigating the old environment. ToMnet teased out this quirk, accurately predicting the outcome by (in essence) putting itself in the near-sighted AI’s digital shoes.

To Dr. Alison Gopnik, a developmental psychologist at UC Berkeley who was not involved in the study, the results show that neural nets have a striking ability to learn skills on their own by observing others. But it’s still far too early to say that these AI had developed an artificial theory of mind.

ToMnet’s “understanding” is deeply entwined with its training context—the room, the box-picking AIs and so on—explained Dr. Josh Tenenbaum at MIT, who did not participate in the study. Compared to children, this constraint makes ToMnet far less capable of predicting behaviors in radically new environments. It would also struggle to model the actions of a vastly different AI or a human.

But both Winfield’s and DeepMind’s efforts show that computers are beginning to “understand” each other, even if that understanding is still rudimentary.

And as they continue to better grasp each others’ minds, they are moving closer to dissecting ours—messy and complicated as we may be.

Source: SingularityHub

19 Sep 2018

What is disruptive innovation?

Disruptive innovation is an idea that goes against the norm: it significantly changes the status quo of regular business growth, developing what is known as a new market that accelerates beyond the scope of existing markets.

At the beginning, it may not be as good as the technology it’s replacing, but it’s usually cheaper, meaning more people can snap it up and it can become a commodity fast.

The key differentiator between disruptive innovation and standard innovation is reach: an innovation can be revolutionary without shifting the entire market. It may make a difference to some people, but if it isn’t accessible to all or doesn’t have an effect on the entire market, it isn’t considered disruptive.

Although the introduction of cars in the 19th century was a revolutionary innovation, not everyone was able to afford them and so they didn’t become a commodity until much later.

Horse-drawn carriages remained the primary mode of transport for a long time after the introduction of motorised vehicles – until, in fact, the Ford Model T was launched. As a mass-produced car that made vehicles more affordable for everyone, it is considered a disruptive innovation.

The history of disruptive innovation

The idea of a disruptive innovation was first introduced by Harvard Business School scholar Clayton M. Christensen in 1995. He wrote a paper called Disruptive Technologies: Catching the Wave and then discussed the theory further in his book The Innovator’s Dilemma.

This latter book examined the evolution of disk drives and argued that it’s the business model, rather than the technology itself, that allows a product to become a disruptive innovation.

The term is now considered ‘modern jargon’ by some thinkers and publications, although it’s still widely used.

Examples of disruptive innovation in technology

The iPhone

Although products aren’t necessarily always the definition of disruptive innovation, the iPhone is one that had a huge impact on the industry. It redefined communication and, although it had a pretty narrow focus at launch, quickly became mainstream, taking over many of the laptop’s functions on the move and developing a new business model in the process – one that relied upon app developers (a new, revolutionary concept in itself) to make it a success.

The iPhone completely disrupted the technological status quo – smartphones became a way to replace laptops, and the concept evolved into the iPad, further eroding the need for a laptop.

Video on demand

The concept of streaming TV programmes to your television at any time would have seemed completely insane barely a decade ago.

The ability to stream content over the internet direct to the TV has completely disrupted the market. According to a recent report by the Office for National Statistics, half of adults now consume content using streaming services.

But it has also disrupted the advertising market, creating an entirely new revenue stream for sponsors and advertisers, shifting the balance of power over rights and allowing startups to overtake traditional TV networks in popularity and income.

Digital photography

Digital photography is a solid example of disruptive innovation: not only did it introduce a new technology – and a whole ecosystem of accessories that grew in popularity because of it (memory cards, photo printers and so on) – but it also completely shifted the manufacturer market, knocking Kodak, one of the previous market leaders, into oblivion.

The concept of digital photography was actually developed by Kodak engineer Steve Sasson, but it wasn’t popularised by the firm, because Kodak wanted to concentrate on film – its bread and butter. Unfortunately for Kodak, other firms jumped on the idea and it completely revolutionised photography, leaving film in many cases a hobbyist technology rather than a commercial opportunity.

Source: ITPRO

18 Sep 2018
Manahel Thabet

Preparing for the Future of AI

Artificial intelligence is taking the world by storm, and many experts posit that the technology has brought us to the cusp of a fourth industrial revolution that will fundamentally alter the business landscape. AI and machine learning are responsible for a constant stream of innovation and disruption in the way organizations operate. To avoid being left behind, business leaders need to prepare for this future now.

While the earliest iterations of AI emerged in the 1950s, hardware limitations prevented the technology from reaching its true potential. Of course, the amount of processing power in our pockets today would have astounded scientists in that era, and advanced algorithms are allowing us to put it to work, combing through reams of data in seconds at the mere touch of a button.

AI isn’t exactly real intelligence, but it is capable of spotting patterns buried deep within data sets that human eyes may miss, and in a fraction of the time. Additionally, thanks to deep learning techniques, it’s capable of learning and improving over time, becoming more and more effective at its job. Because of this, AI is powering an exciting array of applications, from investment strategies to autonomous vehicles.

AI is sweeping through one industry after another, and while the technology is still in its infancy, it’s important to start implementing now so you don’t fall behind in the future. Rome wasn’t built in a day, so it’s safe to assume that implementing AI technology won’t happen overnight either. To position your company in the sweet spot to grasp this incredible opportunity, here are three steps to follow.

1. Cultivate an open culture.

According to McKinsey, a profit gap is already emerging between early AI adopters and those who have yet to implement the technology. Unfortunately for the firms being left behind, catching up is more than a matter of purchasing new software.

While the tempo of technological change is difficult enough to keep up with, the pace of cultural change is glacial. Taking advantage of AI requires a team effort, which means organizations must build a culture of trust and openness that encourages collaboration. Encourage this kind of open culture now — for example, promote cross-team collaboration, invite process experimentation and redefine key performance indicators — to foster positive attitudes toward technological change and AI adoption.

2. Partner with the pioneers.

AI is on the bleeding edge of technological innovation, and the pioneers pushing it to the next innovative level are startups. These small companies aren’t going it alone, however. From financial institutions to automotive companies, large organizations are funding incubators and accelerators to nurture the next generation of startups whose technology will change industries.

Working with startups has a couple of advantages for bigger companies. According to Hossein Rahnama, founder and CEO of Flybits, “Joining up with a young company gives each entity a partner to lean on and grow with over the years.” He adds, “Through these partnerships, you’re not just procuring technology; you’re gaining access to talent, consulting services, new ideas, and more.” Working with small, agile startups provides an excellent launchpad for technology strategy and adoption.

3. Capitalize on creativity.

AI can crunch numbers better than anyone you could ever hope to hire, but it can’t do everything. It certainly can’t exercise creative problem-solving capabilities, and it’s still up to your employees to turn the insights AI unlocks into high-level strategies that drive business value.

The issue is that many companies see AI as a remover of jobs, when really it is a job creator and an efficiency booster. Instead of requiring a creative team to constantly multitask between crunching the numbers and setting strategy, use AI to perform some of the grunt work and free up your creative teams to do what they do best. The initial implementation might make operations less efficient, but over time your marketing and other creative teams will become much better at wielding the technology. With a bit of experience, they’ll aim it at the right data to determine your company’s next step in reaching its goals.

As AI becomes more widespread, it will morph from a competitive advantage to the price of admission. To keep pace with the rest of the pack, start creating an implementation strategy today.

Source: Entrepreneur

17 Sep 2018
Manahel Thabet

5 Things You Need to Know Before Investing in Cryptocurrency

Bitcoin holders who bought at $19,000 probably rushed to sell their holdings when the Bitcoin price dropped to $15,000. As a bull market progresses, the risk of a major sell-off increases. And this is what happened.

At the end of 2017, investors started realizing their gains, which caused a minor correction. Investors who had come late to the party then saw red in their crypto portfolios, so they too started selling their holdings. Further selling means further price declines, and that’s how a deep market correction develops.

The market correction brought everything down with it, so you ought to be particularly careful with your crypto investments, especially if you participate in new ICOs. You are investing your hard-earned income in crypto, so make sure you get all the help you need.

To get you started, have a look at the five basic rules you need to know about investing in crypto.

1. Understand the risks you’re entering into

Make sure you understand the inherent risks you’re taking on when buying cryptocurrencies (more specifically, utility tokens).

  • When you buy into an ICO, you are buying into a startup. Most startups fail.
  • Most crypto (especially tokens that can’t be used yet because the platform isn’t live, or is in its infancy) is illiquid and easier to manipulate than securities.
  • Crypto tokens (bar security tokens) don’t pay a dividend.
  • Crypto markets are unregulated, so a few big players might corner the market at your expense.
  • What the Whitepaper states is rarely reflected in the Smart Contract and, even worse, the founders sometimes keep a back door open in case they wish to amend the Smart Contract in the future (e.g. they allow themselves to change the supply of a coin after the ICO, or the Whitepaper mentions a lockup period but the Smart Contract doesn’t have any logic that actually locks up the founders’ tokens).

2. Don’t be Silly

I get it – you see a few tokens shooting through the roof and you want to join the party. But crypto is risky, for the reasons mentioned above. Start with small amounts and don’t go all in.

Keep your exposure to crypto at a reasonable level (and stick to it!). My current exposure is 5% of my total investment portfolio. I might consider upping it to 10% if the market goes much lower, but 10% is my hard limit. Of course, your exposure limit should differ from mine: it depends on your age, your income, your level of wealth and the asset classes in which you’ve invested the rest of your wealth. Don’t copy my exposure limits!
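The exposure rule above is simple arithmetic and easy to automate. A minimal sketch follows; the 10% default mirrors the author's personal hard limit and is not a recommendation.

```python
def crypto_exposure(crypto_value, total_portfolio_value):
    # Fraction of the whole portfolio currently held in crypto.
    return crypto_value / total_portfolio_value

def needs_rebalance(crypto_value, total_portfolio_value, limit=0.10):
    # True when the crypto share has drifted above the chosen hard limit.
    return crypto_exposure(crypto_value, total_portfolio_value) > limit

portfolio = 100_000
crypto = 14_000
print(f"exposure: {crypto_exposure(crypto, portfolio):.1%}")  # exposure: 14.0%
print(needs_rebalance(crypto, portfolio))                     # True
```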

Also, diversify your crypto holdings – especially in an asset class whose value largely depends on speculation that a certain DLT platform will become popular. The more you spread your coins, the more likely you are to hit the jackpot.

3. Be choosy

A deep market correction (the crypto correction included) brings everything down: good tokens and bad. Provided you stay within the exposure limits you’ve set yourself, buying during a severe market correction means you have the opportunity to shop around for great tokens at a deep discount.

Look at the drivers that support the price of a token

Go after those tokens with a strong, fundamental demand for the platform they support. Let me give you an example to show you what I mean.

You buy computation power on Ethereum by paying with Ether. Creating a new Smart Contract on Ethereum costs Ether; executing smart-contract logic costs Ether; and so on. Given that Ethereum seems to be the platform of choice for running Smart Contracts, and that the world is moving toward automation and decentralization, one may take a position in Ether on the expectation that demand for it (to execute ever more Smart Contracts on Ethereum) will be strong.
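The "pay Ether for computation" mechanics boil down to gas accounting: every operation consumes gas, and gas is priced in ETH. A back-of-envelope sketch, using illustrative rather than current figures:

```python
GWEI_PER_ETH = 1_000_000_000  # 1 ETH = 10^9 gwei

def tx_cost_eth(gas_used, gas_price_gwei):
    """ETH spent to execute a transaction."""
    return gas_used * gas_price_gwei / GWEI_PER_ETH

# A simple token transfer (roughly 50k gas) at a 20 gwei gas price:
print(tx_cost_eth(50_000, 20))  # 0.001
```

Every extra contract executed on the network is another purchase of this kind, which is the fundamental-demand thesis in miniature.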

If the thesis is correct, strong demand for executing Smart Contracts on Ethereum will underpin the value of the token in the long term.

Look for large economic moats

I’m not reinventing the wheel here – I’m borrowing the term from Warren Buffett, one of the greatest investors out there. An economic moat refers to the degree of competition an Issuer faces (or will face in the future).

Some business models are harder to emulate than others. It’s hard to copy the brand loyalty that Apple commands, and it’s not easy to build a double-sided marketplace like eBay. On the other hand, it’s less difficult to build an Android-based phone and compete with Samsung, and LED TVs are sold at low margins in a saturated market. Get the idea?

And this links to the point I made earlier in this section. Most popular altcoins execute the same function, have similar levels of market capitalisation and are (generally) equally accepted. In other words, they’re interchangeable. If you want to sleep at night – especially in the current market conditions – buy coins that give you access to unique platforms.

4. Don’t be Lazy

If you really insist on buying into ICOs, you need to get a lot of dirty work done. Around 600 ICOs went to market in the first half of 2018 (and that’s not counting the ICOs which didn’t reach their soft cap!). That’s a lot of Whitepapers to analyse. Being a hard worker isn’t enough, though:

  • You need to make sure you’re good at distinguishing a good startup from a bad one.
  • You need to look at the founders, their expertise and their track record.
  • You need to determine whether major organisations are backing the blockchain application.

Or you can engage an advisor who does this for you. Really and truly, you need a team of analysts to sift through Whitepapers on an ongoing basis.

5. Technical Analysis is key

In a token world where no dividends are paid and no earnings are attributed to (utility or payment) tokens, it’s all but impossible to determine the fair value of a crypto asset (in the traditional finance world, the price of equity is a function of the future dividends the Issuer is expected to pay). How much, say, is Bitcoin worth? Coming up with an answer is impossible. If you can’t value an asset, you can’t determine whether you’re overpaying for it. It’s like walking blindfolded.
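For contrast, the traditional yardstick alluded to above can be written down directly. Under the Gordon growth model (one common dividend-discount formulation), fair price is next year's dividend divided by the required return minus the dividend growth rate; with no dividend, the formula has nothing to work with.

```python
def gordon_growth_price(next_dividend, required_return, growth_rate):
    # Fair price = D1 / (r - g); only defined when r > g.
    assert required_return > growth_rate, "model requires r > g"
    return next_dividend / (required_return - growth_rate)

# A stock expected to pay a $2 dividend next year, with an 8% required
# return and 3% dividend growth:
print(round(gordon_growth_price(2.0, 0.08, 0.03), 6))  # 40.0
```

Set `next_dividend` to zero, as for a utility token, and the model prices the asset at nothing, which is exactly why token valuation has to fall back on other tools.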

Technical analysis may come in handy (but trust me, it’s not an exact science). Technical analysis is essentially the study of how the psychology of the market manifests itself in an instrument’s chart. For example, when prices are going up while volumes dry up, be very careful with that price movement: it indicates that only a tiny portion of the market is participating in the increase. Conversely, if a market correction is accompanied by large volumes in that token, the correction is supported by a relatively large portion of the market.
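The volume-confirmation heuristic above can be sketched as a simple rule. The 0.5x and 1.5x thresholds are arbitrary illustrations, not trading advice.

```python
def volume_signal(price_change_pct, volume, avg_volume):
    # Classify a price move by whether trading volume confirms it.
    thin = volume < 0.5 * avg_volume
    heavy = volume > 1.5 * avg_volume
    if price_change_pct > 0 and thin:
        return "rally on drying volume: be careful"
    if price_change_pct < 0 and heavy:
        return "correction on heavy volume: broadly supported"
    return "no strong volume signal"

print(volume_signal(8.0, 400, 1000))    # rally on drying volume: be careful
print(volume_signal(-6.0, 2000, 1000))  # correction on heavy volume: broadly supported
```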

Source: Blockonomi

16 Sep 2018
Manahel Thabet

How AI is Changing the Future of How we Interact with Cars

Technology has always had a home in the automotive industry. With car production lines amongst the first implementers of the Hawthorne Experiment findings, cars have been at the forefront of how we behave alongside machines. And vice versa.

Over the years, the technology has moved from handheld spanners to robotic arms and is now moving to full-fledged Artificial Intelligence (AI) modules. While production will always have a limited scope, the user experience with AI is a continually evolving process, developing protocols for personal experiences. What started with chips sensing tune-up schedules points to a future where cars may just decide to take the next step and get themselves to the body shop.

Automated vehicle prognostics

Cars require tune-ups, and we forget them; it is a truth of life that AI is changing one day at a time. While basic models have warning lights and sensors, AI-based models will continuously map the user’s journeys to check for problems, analyse trends and inform the rider of conditions that could crop up in the future. This will help the user know everything about the car, including any domino effects that may be developing within it.

Service & repair estimation

Maintenance schedules can have varying costs depending on the condition of the car and usage patterns, and that uncertainty means servicing can put a dent in the user’s budget. With implemented AI modules, accurate service-cost estimates become a reality, because the module continuously monitors the condition of the car and lets riders know what to expect. A successful module will also let the user know the resale value to expect at defined periods.

Voice Commands

While auto-driving will take some more time to become commonplace, voice commands have already arrived for basic instructions. Playing music, placing a call or checking directions with docked smartphones is becoming the norm; AI-equipped vehicles will go further and become true partners, like KITT from Knight Rider. A hearty conversation seems to be on the cards, with machine language turning into human language with every passing day.

Accident Warnings

Multi-tasking is AI’s USP. While driving, playing a song and talking to you, the car is still crunching terabytes, possibly petabytes, of data: checking the weather, sensing humidity levels, road traction and friction, tracking nearby cars and their speeds, gauging sudden road obstructions and more, all to keep you safe. AI is not just about letting you talk; it’s also about letting you walk.

System failure diagnosis

A fully implemented AI module will not just find a problem; it will remedy it as well. With smart cars set to be the norm in the future, successful AI modules will be able to correct most issues in a car, because those issues will be software-related. While changing tyres may be a little distant for now, AI will be the decision maker: making an effective diagnosis, recommending a course of action and, in several cases, taking that action itself.

Source: Entrepreneur

15 Sep 2018
Manahel Thabet

Behold: A Robot That Can Do Improv Comedy Just as Badly as a Human



by Jon Christian September 14, 2018 Artificial Intelligence


If you think human improv comedy is insufferable, wait until you see a show that can’t pass the Turing Test.

Kory Mathewson, an artificial intelligence researcher at the University of Alberta, Edmonton, created an algorithm designed to riff with him onstage. He trained it to create lines of dialogue to be used in an improv performance by feeding it subtitles from hundreds of thousands of movies.

Then, according to a new paper uploaded to the preprint server arXiv, Mathewson used the AI to perform a mind-bending experiment to test whether a live audience could differentiate it from a human performer.


In the experiment, three human performers took the stage. One improvised dialogue in response to audience cues. Another repeated dialogue fed by an offstage human through an earpiece. And a third said lines provided by the AI, read to them by an offstage human via earpiece.

After the performance, audience members guessed which performer had been reciting the AI’s lines. Most of them correctly identified the AI — but a handful of viewers were fooled.


It’s an intriguing experiment, but watching the AI spit out stilted dialogue with Mathewson, it’s hard to imagine the routine becoming a comedy classic.

“Blueberry, I created you,” Mathewson tells a robot programmed with the improv AI during a performance earlier this year, in a video by Bloomberg Businessweek. “I downloaded a voice into your brain so that you could perform in front of these people.”

“So they do not know what I am going to say?” the robot asks.

“I don’t know what you’re going to say either,” Mathewson says, laughing. Good stuff.

Source: Futurism

12 Sep 2018
Manahel Thabet

Human-Technology Symbiosis in Manufacturing: Changing the Discussion About Automation and Workforce

Will technology help or replace workers?

The debate within manufacturing about whether technology will completely replace people is interesting, but it’s the wrong debate to be having. Technology is changing the workforce, and it has eliminated low-skilled manufacturing jobs in the past; that is a fact, but it’s not as black-and-white as most arguments suggest.

Rather, the discussion should be about the concept of human-machine symbiosis: the mutually beneficial relationship between humans and technology, in which machines and software intelligently and physically raise the productivity of the combined system above that of human or machine alone. With the emergence of Industry 4.0 and the capabilities of the Industrial Internet of Things (IIoT), the conversation should refocus on how best to transition displaced workers into the higher-salaried, higher-skilled roles that are created along with the adoption of technology.

In fact, human-machine symbiosis is not a new concept. We know this from historical experience, and not just the old go-to story of the first Industrial Revolution. Accountants have abandoned handwritten ledgers in favor of electronic spreadsheets. Product designers and architects have transitioned from manual to automated drawing tools. And few scientists and engineers still use the once-ubiquitous slide rule to assist with calculations. In each of these instances, technology eliminated tedious, time-consuming manual work, even as it augmented the education, skills and experience of the professionals.

Source: Forbes

11 Sep 2018


People often use the term “artificial intelligence” without really understanding what it means. We can’t blame them: AI can mean whatever you want it to; it’s an abstract term, to a large extent. And if you talk to experts in the science of machine learning, you might even learn that they don’t really recognize artificial intelligence as a technology so much as a marketing buzzword used to sell machine learning. So, for the sake of simplicity, understand that machine learning is one of the most effective and mature approaches to building algorithms that make programs and machines seem “intelligent.”

Well, we’re going to go a lot deeper than that to help you understand how machine learning is the key force shaping the world of artificial intelligence.

Machine learning is mostly based on using lots and lots of training data and good algorithms. Though there’s a lot of excitement in technology circles about sophisticated algorithms, particularly deep learning, it must be understood that most applications of machine learning are a result of good data. Machine learning could exist without good algorithms, but it can’t exist without good data.

This is a crucial point for everyone involved in the artificial intelligence industry. The path to the future is paved with good data more than anything else. That’s why the most remarkable examples of artificial intelligence appear in industries where data scientists have access to massive datasets.

It all goes back to ‘GIGO’

“Garbage in, garbage out” — this is the rule of thumb in software development, as old as the idea of software itself. It applies particularly well to machine learning, and it’s very important that developers and data scientists understand it. This is a key limitation of machine learning. It can only identify patterns that exist in the training data, and that’s the sole basis for its learning.

If a machine learning model generates good results on its training data, expect it to generate results of similar quality in a production environment. However, that holds only when the production data follows the same distribution as the training data. Any skew between training and production data is likely to cause your model to behave erratically.
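The training/production skew described above can be seen in a tiny sketch. This is purely illustrative: the curve, the noise level, and the input ranges are all invented for the demo, with a simple polynomial fit standing in for "a model". The model performs well on production inputs drawn from the same range it was trained on, and fails badly once the inputs drift outside that range.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training" inputs drawn from one range; labels follow a known curve plus noise.
x_train = rng.uniform(0.0, 5.0, 500)
y_train = np.sin(x_train) + rng.normal(0.0, 0.1, 500)

# Fit a simple cubic model on the training distribution.
coeffs = np.polyfit(x_train, y_train, 3)

def mse(x):
    """Mean squared error of the fitted model against the true curve."""
    return float(np.mean((np.polyval(coeffs, x) - np.sin(x)) ** 2))

# Production data from the same distribution: quality comparable to training.
err_same = mse(rng.uniform(0.0, 5.0, 500))

# Skewed production data from outside the training range: error explodes.
err_skew = mse(rng.uniform(5.0, 10.0, 500))

print(err_same < err_skew)  # the skewed error is far larger
```

The model never saw inputs beyond 5.0, so nothing in training constrains its behavior there; this is the "garbage in" side of GIGO showing up at serving time rather than at training time.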

This is why continual improvement is key to successful machine learning. The models that perform consistently in real-life scenarios are the ones that are constantly reviewed and improved. That’s the guiding light for anybody looking to succeed in the artificial intelligence industry.

Don’t have massive data? Simplify the model

Having lots of data means it becomes possible to extract reliable patterns from it, which in turn means you can train a model reliably. Abundant data also lets you experiment with different models, each with a different set of parameters.

When you don’t have a lot of data, you run the risk that a model with too many parameters will overfit to the training data. The result — the model will not be able to generalize beyond the training data.

This ties closely to what we’re observing in the artificial intelligence space: most failure cases can be traced to the same mistake. That’s why, in industries where the use of machine learning is nascent, the use cases are very tight in scope. The golden rule: when data is not abundant, rein in your models to very specific use cases.

The era of smart machines


“Smart machines” is a futuristic idea centered on embedding machine learning algorithms and programs in machines so that they can solve problems and alter their behavior without human intervention. Smart speakers, autonomous vehicles, unmanned aerial vehicles, context-aware devices — all of these are primitive smart machines. Machine learning is at the forefront of the innovation and research being carried out in this space. Since this is essentially what futuristic AI means to most people, it’s clear that machine learning is the perfect vehicle for anybody who wants to be at the pinnacle of the industry in the years to come.

Machine learning: Bringing sense to the idea of AI

Examples of machine learning in action are all around us. E-commerce websites use it to recommend suitable products, Gmail uses it to fight spam, music streaming and entertainment apps use it to surface content people will enjoy, Facebook uses it for photo tagging, and Google uses it to rank web pages for search keywords. This is in stark contrast to the black box that AI has been for quite some time. The lack of understanding and the ambiguity associated with the term are responsible for the doomsday predictions people have been making about AI, which is not good for the industry. Machine learning has brought a sense of calm and helped people understand how the technology can add value to human lives instead of creating existential risks.

Paving the way for the ‘Semantic Web’

Web 1.0 was the era of the Internet as a repository of information. Then came Web 2.0, where the Internet became social. Next up, Web 3.0 will be the era of the semantic web. The core idea is that machines will be able to understand complex human queries and answer them quickly and credibly. This becomes possible when information is semantically structured in a manner that makes it easy for machines to process.
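One widely used way to structure information semantically today is JSON-LD with the schema.org vocabulary. A minimal sketch (the headline, author, and date below are invented for the example):

```python
import json

# A minimal JSON-LD document using the schema.org vocabulary. The "@context"
# and "@type" keys tell a machine what kind of thing is being described, so
# a query like "find articles by this author" needs no natural-language
# guessing: the fact sits under a well-defined key.
doc = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Machine Learning Is Shaping the World of AI",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2018-09-11",
}

# Round-trip through JSON, then extract a fact directly by its semantic key.
parsed = json.loads(json.dumps(doc))
print(parsed["author"]["name"])  # -> Jane Doe
```

Search engines already consume exactly this kind of markup embedded in web pages, which is why semantically structured pages can be answered as direct results rather than ranked links.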

Machine learning: A lot to learn

Machine learning is as close as we have come to creating code that can learn on its own, much as humans do. It touches everyone’s lives daily without us really realizing it. The nature of the work humans need to do to sustain themselves is changing as programs and machines become intelligent enough to take it on. Machine learning is changing the present and shaping the future for the better.

Source: TechGenix

10 Sep 2018
Manahel Thabet

MIT’s New Robot Can Visually Understand Objects It’s Never Seen Before


Computer vision — a machine’s ability to “see” objects or images and understand something about them — can already help you unlock your fancy new iPhone just by looking at it. The thing is, it has to have seen that information before to know what it’s looking at.

Now, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a computer vision system that can identify objects it has never seen before. It’s also an important step towards getting robots to move and think the way humans do.

They published their research on Monday and plan to present it at the Conference on Robot Learning in Zürich, Switzerland, in October.


The MIT team calls its system “Dense Object Nets” (DON). DON can “see” because it looks at objects as a collection of points that the robot processes to form a three-dimensional “visual roadmap.” That means scientists don’t have to sit there and tediously label the massive datasets that most computer vision systems require.


When DON sees an object that looks like one it’s already familiar with, it can identify the various parts of that new object. For example, after researchers showed DON a shoe and taught it to pick the shoe up by its tongue, the system could then pick other shoes up by their tongues, even if it hadn’t seen the shoes previously or they were in different positions than the original.
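Conceptually, recognizing "the same point" on a new object comes down to a nearest-neighbor lookup in descriptor space. The toy sketch below is not the CSAIL code: the "image" is 4x4, the descriptors are random stand-ins for learned ones, and the perturbed reference plays the role of the shoe-tongue point seen under a new view.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stand-ins for learned per-pixel descriptors on a 4x4 "image",
# with a 3-dimensional descriptor at each pixel.
descriptors = rng.normal(size=(4, 4, 3))

# Reference descriptor: say, the shoe-tongue point learned earlier, now seen
# under a slightly different view (modeled as a small perturbation).
reference = descriptors[1, 2] + rng.normal(0.0, 0.01, 3)

# "Recognizing" the point on the new view is a nearest-neighbor lookup in
# descriptor space -- no labeled dataset of shoes required.
dists = np.linalg.norm(descriptors - reference, axis=-1)
match = tuple(int(i) for i in np.unravel_index(np.argmin(dists), dists.shape))
print(match)  # -> (1, 2)
```

The key property this illustrates is that the lookup depends only on descriptor similarity, which is why DON can generalize to shoes and orientations it was never explicitly labeled on.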

“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” researcher Lucas Manuelli said in a press release. “For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side.”


This advanced form of computer vision could do a lot that robots can’t do now. We could someday get a robot equipped with the system to sort items in a recycling center as they move down a conveyor belt, without needing to train it on huge datasets. Or maybe we could even show one of these “self-supervised” robots an image of a tidy desk and have it organize our own desk to match.

Ultimately, this is another step forward on the path to machines that are as capable as humans. After that, we’ll just have to see how much more capable than us they can get.

Source: Futurism

09 Sep 2018
Manahel Thabet

Artificial Intelligence Could Be The Key To Longevity

What if we could generate novel molecules to target any disease, overnight, ready for clinical trials? Imagine leveraging machine learning to accomplish with 50 people what the pharmaceutical industry can barely do with an army of 5,000. It’s a multibillion-dollar opportunity that can help billions.

The worldwide pharmaceutical market, one of the slowest monolithic industries to adapt, surpassed $1.1 trillion in 2016. In 2018, the top 10 pharmaceutical companies alone are projected to generate over $355 billion in revenue. At the same time, it currently costs more than $2.5 billion (sometimes up to $12 billion) and takes over 10 years to bring a new drug to market. Nine out of 10 drugs entering Phase I clinical trials will never reach patients.

As the population ages, we don’t have time to rely on this slow, costly production rate. Some 12 percent of the world population will be 65 or older by 2030, and “diseases of aging” like Alzheimer’s will pose increasingly greater challenges to society.

But a world of pharmaceutical abundance is already emerging. As artificial intelligence converges with massive datasets in everything from gene expression to blood tests, novel drug discovery is about to get more than 100 times cheaper, faster, and more intelligently targeted.

One of the hottest startups I know in this area is Insilico Medicine. Leveraging AI in its end-to-end drug pipeline, Insilico Medicine is extending healthy longevity through drug discovery and aging research. Their comprehensive drug discovery engine uses millions of samples and multiple data types to discover signatures of disease and identify the most promising targets for billions of molecules. These molecules either already exist or can be generated de novo with the desired set of parameters.

Insilico’s CEO Dr. Alex Zhavoronkov recently joined me on an Abundance Digital webinar to discuss the future of longevity research. Insilico announced the completion of a strategic round of funding led by WuXi AppTec’s Corporate Venture Fund, with participation from Pavilion Capital, Juvenescence, and my venture fund BOLD Capital Partners. What they’re doing is extraordinary — and it’s an excellent lens through which to view converging exponential technologies.

Case Study: Leveraging AI for Drug Discovery

You’ve likely heard of deep neural nets — multilayered networks of artificial neurons, able to ‘learn’ from massive amounts of data and essentially program themselves. Build upon deep neural nets, and you get generative adversarial networks (GANs), the revolutionary technology that underpins Insilico’s drug discovery pipeline.

What are GANs? By pitting two deep neural nets against each other (“adversarial”), GANs enable the imagination and creation of entirely new things (“generative”). Introduced by Ian Goodfellow and his colleagues in 2014, GANs have been used to output almost photographically accurate pictures from textual descriptions.
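The adversarial loop can be sketched in a deliberately tiny toy form. This is an illustration of the GAN idea only, not Insilico's pipeline: the "networks" are single linear units, the "real data" is a one-dimensional Gaussian, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Real data the generator must learn to mimic: samples from N(4.0, 0.5).
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator G(z) = w*z + b and discriminator D(x) = sigmoid(a*x + c),
# each the simplest possible "network": one linear unit.
w, b = 1.0, 0.0          # generator parameters
a, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    z = rng.normal(0.0, 1.0, batch)
    fake = w * z + b
    real = real_batch(batch)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (gradients of the standard cross-entropy loss, derived by hand).
    d_real, d_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    a -= lr * (np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake))
    c -= lr * (np.mean(d_real - 1.0) + np.mean(d_fake))

    # Generator step: push D(fake) toward 1 (non-saturating generator loss).
    d_fake = sigmoid(a * fake + c)
    w -= lr * np.mean((d_fake - 1.0) * a * z)
    b -= lr * np.mean((d_fake - 1.0) * a)

# After training, generated samples should cluster near the real mean (4.0).
samples = w * rng.normal(0.0, 1.0, 1000) + b
print(round(float(np.mean(samples)), 2))  # mean of generated samples, expected near 4.0
```

The structure is the whole point: the discriminator learns to tell real from generated, the generator learns to fool it, and the only signal the generator ever receives is the discriminator's opinion. Insilico applies this same adversarial scheme to molecular structures rather than numbers.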

Source: Futurism