Month: July 2018

31 Jul 2018
Manahel Thabet

Biometric Mirror is an interactive application that shows how you may be perceived by others. But it’s flawed and teaches us an important lesson about the ethics of AI

In 2002, the sci-fi thriller Minority Report gave us a fictionalised glimpse of life in 2054. Initially, the movie evokes a perfect utopian society where artificial intelligence (AI) is blended with surveillance technology for the wellbeing of humanity.

The AI supposedly prevents crime using the predictions from three precogs – these psychics visualise murders before they happen and police act on the information.

“The precogs are never wrong. But occasionally, they do disagree.”

So says the movie’s lead scientist and these disagreements result in minority reports; accounts of alternate futures, often where the crime doesn’t actually occur. But these reports are conveniently disposed of, and as the story unfolds, innocent lives are put at stake.

Ultimately, the film shows us a future where predictions are inherently unreliable and ineffective, and that is worth keeping in mind as we grapple with the ongoing advances in artificial intelligence.

Minority Report may be fiction, but the fast-evolving technology of AI isn’t. And although there are no psychics involved in the real world, the film highlights a key challenge for AI and algorithms: what if they produce false or doubtful results? And what if these results have irreversible consequences?


Industry and government authorities already maintain and analyse large collections of interrelated datasets containing personal information.

For instance, insurance companies collate health data and track driving behaviours to personalise insurance fees. Law enforcement agencies use driver’s licence photos to identify criminals and suspected criminals, and shopping centres analyse people’s facial features to better target advertising.

While collecting personal information to tailor an individual service may seem harmless, these datasets are typically analysed by ‘black box’ algorithms, where the logic and justification of the predictions are opaque. It is also very difficult to know whether a prediction rests on incorrect data, on data collected illegally or unethically, or on data laden with erroneous assumptions.

Minority Report shows us a future where the predictions are inherently unreliable and ineffective. Picture: Twentieth Century Fox

What if a traffic camera incorrectly detects you speeding and automatically triggers a licence cancellation? What if a surveillance camera mistakes a handshake for a drug deal? What if an algorithm assumes you look similar to a wanted criminal? And imagine having no control over an algorithm that wrongfully decides you’re ineligible for a university degree?

Even if the underlying data is accurate, the opacity of AI processes makes it difficult to redress algorithmic bias, as found in AI systems that have proved sexist, racist, or discriminatory against the poor.

How do you appeal against poor decisions if the underlying data or the rationale for the decision is unavailable?

One response is to create explainable AI, part of an ongoing research program led by the University of Melbourne’s Associate Professor Tim Miller, in which the underlying justification of an AI decision is explained in a manner that can be easily understood by everyone.


Another response is to create human-computer interfaces that are open and transparent about the assumptions made by AI. Clear, open and transparent representations of AI capabilities can contribute to a broader discussion of its possible societal impacts and more informed debate about the ethical implications of human-tracking technologies.

In order to stimulate this discussion, we created Biometric Mirror.

Biometric Mirror is an interactive application that takes your photo and analyses it to identify your demographic and personality characteristics. These include traits such as your level of attractiveness, aggression, emotional stability and even your ‘weirdness’.

The AI draws on an open dataset of thousands of facial images with crowd-sourced evaluations – a large pool of people has previously rated the perceived personality traits of each of those faces. It then compares your photo against this crowd-rated dataset.
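The article does not disclose Biometric Mirror's actual algorithm, but the comparison it describes can be illustrated as a simple nearest-neighbour lookup: find the crowd-rated faces most similar to a new photo and average their trait ratings. The feature vectors, trait names, and function below are invented for illustration only.

```python
import math

def nearest_ratings(photo_features, dataset, k=3):
    """Average the crowd-sourced trait ratings of the k most similar faces.

    photo_features: numeric feature vector extracted from the new photo.
    dataset: list of (feature_vector, {trait: rating}) pairs.
    """
    def distance(a, b):
        # Euclidean distance between two feature vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Take the k entries whose faces look most like the photo.
    neighbours = sorted(dataset, key=lambda entry: distance(photo_features, entry[0]))[:k]
    traits = neighbours[0][1].keys()
    # Each displayed trait is just the mean of the neighbours' crowd ratings.
    return {t: sum(n[1][t] for n in neighbours) / k for t in traits}
```

Note that nothing here measures personality; the output merely reflects how the crowd rated similar-looking faces, which is exactly the point the researchers make below.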

Biometric Mirror then assesses and displays your individual personality traits. One of your traits is then chosen – say, your level of responsibility – and Biometric Mirror asks you to imagine that this information is now being shared with someone, like your insurer or future employer.

Biometric Mirror can be confronting. It starkly demonstrates the possible consequences of AI and algorithmic bias, and it encourages us to reflect on a landscape where government and business increasingly rely on AI to inform their decisions.


Despite its appearance, Biometric Mirror is not a tool for psychological analysis – it only calculates the estimated public perception of personality traits based on facial appearance. So, it wouldn’t be appropriate to draw meaningful conclusions about psychological states.

It is a research tool that helps us to understand how people’s attitudes change as more of their data is revealed, while a series of participant interviews go further to reveal people’s ethical, social and cultural concerns.

The discussion around ethical use of AI is ongoing, but there’s an urgent need for the public to be involved in the debate about these issues. Our study aims to provoke challenging questions about the boundaries of AI. By encouraging debate about privacy and mass-surveillance, this discussion will contribute to a better understanding of the ethics that sit behind AI.

Although Minority Report is just a movie, here in the real world, Biometric Mirror aims to raise awareness about the social implications of unrestricted AI – so that a fictional dystopian future doesn’t become a dark reality.

Biometric Mirror is in the Eastern Resource Centre, Parkville campus until early September. A series of interviews and observations will complement the study to reveal people’s ethical, social and cultural concerns. Members of the public aged 16 and over can also take part during Science Gallery Melbourne’s exhibition, Perfection, which runs 12 September – 3 November 2018.

Source: Pursuit

30 Jul 2018
Manahel Thabet

How to hack your unconscious… to boost your memory and learn better

It seems like hard conscious work, but much of the learning process goes on deep in the mind. Here are the top tips to improve how you recall facts

We tend to think of learning as hard work, requiring a lot of conscious effort. However, much of the process goes on behind the scenes. If you could improve the unconscious processing and retrieval of memories, you could game the system. And it turns out that you can – often with very little effort.

If you are learning facts such as foreign phrases or historical dates, giving your study a boost could be as simple as taking a break. Lila Davachi at New York University has found that breaks help to consolidate new memories, improving recall later. However, for a time out to work, different brain cells need to be activated to those you used during the learning period. So, try not to think about what you have just been working on.


Better yet, sleep on it. It is well established that the brain processes memories during sleep, but it will do this more effectively if you leave the optimum time between learning and sleeping. Christoph Nissen at the University of Bern, Switzerland, found that a group of 16- and 17-year-olds performed best on tests of factual memory if they studied the material mid-afternoon, but they acquired skills involving movements faster if they practised in the evening. He suspects that the “critical window” between learning and sleep is shorter for movement-related learning than for other types of memory. Whether adults can benefit as much as …

Continue Reading 

29 Jul 2018
Manahel Thabet

Virtual Reality Has Reached A “Tipping Point.” It’s Officially Here to Stay.

Thanks to virtual reality, you can swim with the dolphins, play some tennis, or spend some alone time, all from the comfort of your own living room. But it’s not yet perfect — a horrible wave of nausea can hit anytime, right in the middle of these activities.

This problem — motion sickness caused by laggy, choppy virtual reality experiences — has been around since the technology first emerged.

Yes, virtual reality isn’t perfect. But there’s reason to believe that virtual reality technology has finally proved that it’s here to stay.

“VR isn’t where I want it to be, but this current generation of products — I think it’s proved that VR is real,” David Ewalt told Futurism. Ewalt is a writer and journalist who focuses on new technology. His book on virtual reality, “Defying Reality,” was published on July 17.

“It’s not hype anymore,” he added. “It needs to get much better, but I think we reached that tipping point where you can try the products we have now and say, ‘damn, that really works. VR is real.’”

There are still some glitches that Ewalt thinks will need to be fixed before we’ll get a mass-marketable virtual reality headset that everyone will want to use. As it is now, the technology will sometimes lag or glitch. If the wearer turns her head faster than the headset can render, the image can seem blurry, and sometimes the motion tracking software will just seem off. Problems inherent to the headsets themselves, Ewalt says, are responsible for all those times you felt like you were gonna hurl while, say, exploring a simulated cat’s innards.


“The headsets are great right now, but they’re not perfect. Resolution needs to keep getting better,” Ewalt says. “The headsets need to keep getting lighter and more comfortable. People don’t want to do VR for a half hour or an hour or two because they’ve got this big thing on their face.”


Beyond that, Ewalt believes that the future of VR will be shaped by people improving their simulations piece by piece to make them more realistic than ever. He argues that past attempts at VR failed because the tech just wasn’t good enough.

Better headsets will likely mean better things to use on them. Right now, the companies that make headsets, such as Oculus and Sony, want people to purchase and experience simulations on their own platforms, in their own independent stores. But there are a growing number of exceptions — some companies, like porn studio Naughty America, develop experiences that are platform agnostic, as The Wall Street Journal reported.

Ewalt likened the current state of the VR marketplace to that of console video games: if you want to play Mario Kart, you’ve gotta go to Nintendo. So far, there’s nothing for VR like the PC gaming market, where people don’t need to buy a new console each time they want to play a different company’s games. There’s also no VR equivalent of Steam, the massive PC gaming platform and marketplace which has helped democratize video game development and sales.

“We’re not there yet, it’s still early,” says Ewalt about a more centralized and less restrictive VR store. “The number of people who have VR headsets who are buying software is still so small, we haven’t reached critical mass yet. It wouldn’t be good business.”

And because many VR experiences are interactive and more complex than your typical .mp4, sharing them among your friends isn’t as simple as sticking that movie you pirated onto a thumb drive. If you want to play with your friends, everyone will have to buy their own copy, like you would for a video game.

“There will probably be a platform for 360 video. Something along the lines of a Netflix for VR, a Hulu for VR,” says Ewalt. Because users can enjoy non-interactive VR experiences on any headset, Ewalt suspects those kinds of videos may be better suited for a universal platform and marketplace.

In the meantime, Ewalt suspects that augmented reality might take hold more quickly, in part because it doesn’t isolate people in the same way that a VR headset does. But he doesn’t see that as bad news for VR developers.

“I think AR will be way more common than virtual reality because, to use VR, you have to shut yourself away. But AR will be everywhere — I think it’s gonna be one of the major ways we interface with technology,” Ewalt says. “[The two technologies] are profound in different ways. I absolutely think they’re complementary technologies; they’re not at odds.”

You may have never touched an Oculus Rift or a Sony PSVR, but odds are you’ve heard of them. Ewalt argues that it’s time to stop dismissing virtual reality just because it’s not perfect yet or it hasn’t become ubiquitous in society.

“People need to be reminded that no one is saying this is taking over the world right now,” he says. “But the remarkable thing that’s happened is that we finally have proof that VR is real, and it’s gonna happen.”

Source: Futurism

28 Jul 2018
Manahel Thabet


Google Glass lives—and it’s getting smarter.

On Tuesday, Israeli software company Plataine demonstrated a new app for the face-mounted gadget. Aimed at manufacturing workers, it understands spoken language and offers verbal responses. Think of it as an Amazon Alexa for the factory floor.

Plataine’s app points to a future where Glass is enhanced with artificial intelligence, making it more functional and easy to use. With clients including GE, Boeing, and Airbus, Plataine is working to add image-recognition capabilities to its app as well.

The company showed off its Glass tech at a conference in San Francisco devoted to Google’s cloud computing business; the app from Plataine was built using AI services provided by Google’s cloud division, and with support from the search giant. Google is betting that charging other companies to tap AI technology developed for its own use can help the cloud business draw customers away from rivals Amazon and Microsoft.

Jennifer Bennett, technical director to Google Cloud’s CTO office, said that adding Google’s cloud services to Glass could help make it a revolutionary tool for workers in situations where a laptop or smartphone would be awkward. “Many of you probably remember Google Glass from the consumer days—it’s baaack,” she said, earning warm laughter, before introducing Plataine’s project. “Glass has become a really interesting technology for the enterprise.”

The session came roughly one year after Google abandoned its attempt to sell consumers on Glass and its eye-level camera and display, which proved controversial due to privacy concerns. Instead, Google relaunched the gadget as a tool for businesses called Google Glass Enterprise Edition. Pilot projects have involved Boeing workers using Glass on helicopter production lines, and doctors wearing it in the examining room.

Anat Karni, product lead at Plataine, slid on a black version of Glass Tuesday to demonstrate the app. She showed how the app could tell a worker clocking in for the day about production issues that require urgent attention, and show useful information for resolving problems on the device’s display.

A worker can also talk to Plataine’s app to get help. Karni demonstrated how a worker walking into a storeroom could say “Help me select materials.” The app would respond, verbally and on the display, with what materials would be needed and where they could be found. A worker’s actions could be instantly visible to factory bosses, synced into the software Plataine already provides customers, such as Airbus, to track production operations.
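Plataine's actual assistant is built on Google's Dialogflow (described below), but the storeroom interaction just described can be sketched as a simple utterance-to-intent mapping that produces both a spoken reply and text for the Glass display. The intent names, materials, and locations here are all invented for illustration.

```python
# Hypothetical intent table: a recognised utterance maps to a spoken
# response plus a short message for the headset's display.
INTENTS = {
    "help me select materials": {
        "speech": "You need composite sheet A-12; it is in rack 4, shelf 2.",
        "display": "Material: A-12\nLocation: rack 4, shelf 2",
    },
}

def handle_utterance(utterance):
    """Look up a worker's request and return speech + display output."""
    intent = INTENTS.get(utterance.strip().lower())
    if intent is None:
        # Fallback when no intent matches the utterance.
        return {"speech": "Sorry, I did not catch that.", "display": ""}
    return intent
```

A real system like Dialogflow adds the hard part this sketch omits: matching many phrasings ("what materials do I need?", "help me pick stock") to the same intent.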

Plataine built its app by plugging Google’s voice-interface service, Dialogflow, into a chatbot-like assistant it had already built. It got support from Google, and also software contractor and Google partner Nagarro. Karni credits Google’s technology—which can understand variations in phrasing, along with terms such as “yesterday” that typically trip up chatbots—for managing a worker’s tasks and needs. “It’s so natural,” she said.

Karni told WIRED that her team is now working with Google Cloud’s AutoML service to add image-recognition capabilities to the app, so it can read barcodes and recognize tools, for example. AutoML, which emerged from Google’s AI research lab, automates some of the work of training a machine learning model. It also has become a flagship of Google’s cloud strategy. The company hopes corporate cloud services will become a major source of revenue, with Google’s expertise in machine learning and computing infrastructure helping other businesses. Diane Greene, the division’s leader, said last summer that she hoped to catch up with Amazon, far and away the market leader, by 2022.

Gillian Hayes, a professor who works on human-computer interaction at the University of California, Irvine, said the Plataine project and plugging Google’s AI services into Glass play to the strengths of the controversial hardware. Hayes previously had tested the consumer version of Glass as a way to help autistic people navigate social situations. “Spaces like manufacturing floors, where there’s no social norm saying it’s not OK to use this, are the spaces where I think it will do really well,” she added.

Improvements to voice interfaces and image recognition since Glass first appeared—and disappeared—could help give the device a second wind. “Image and voice recognition technology getting better will make wearable devices more functional,” Hayes said.

Source: Wired

26 Jul 2018
Manahel Thabet

There’s A Huge Subterranean Lake of Liquid Water on Mars

LIQUID WATER. It’s official. There is liquid water on Mars. And not just a little, either. A research team led by Roberto Orosei, a professor at the University of Bologna, has detected a lake of liquid water 20 kilometers (12.4 miles) wide about 1.5 kilometers (0.9 miles) below the surface of Mars’ southern ice cap in a region known as Planum Australe. They suspect that dissolved salts from nearby minerals prevent the water from freezing, despite the low temperatures.

They published their research on the Martian lake Wednesday in the journal Science.

RADAR & THE RED PLANET. At the center of this discovery is Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS), an instrument aboard the European Space Agency’s (ESA’s) Mars Express spacecraft.

This instrument, which is positioned hundreds of kilometers above the surface of Mars, sends electromagnetic radar waves down toward the center of the planet. When those waves transition from one material to the next (from ice to rock, for example), they reflect back up to the instrument. Scientists can then analyze those reflected waves to determine what sort of material exists beneath the Martian surface.
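Why a buried lake stands out in this kind of data can be illustrated with the standard normal-incidence reflection formula: the power reflected at a boundary depends on the contrast in relative permittivity between the two materials, and liquid water's permittivity is far higher than that of ice or rock. This is a textbook physics sketch, not the MARSIS team's processing pipeline, and the permittivity values below are typical figures, not mission measurements.

```python
import math

def reflectance(eps1, eps2):
    """Normal-incidence power reflectance at a boundary between two
    lossless media with real relative permittivities eps1 and eps2."""
    n1, n2 = math.sqrt(eps1), math.sqrt(eps2)
    r = (n1 - n2) / (n1 + n2)  # amplitude reflection coefficient
    return r * r               # power reflectance

# Assumed typical relative permittivities:
# water ice ~3.1, dry rock ~8, liquid water ~80.
ice_rock = reflectance(3.1, 8.0)
ice_water = reflectance(3.1, 80.0)
# An ice-to-water boundary reflects far more power than ice-to-rock,
# which is why a bright basal echo hints at liquid water.
```

The qualitative point is the ordering, not the exact numbers: the ice/water contrast produces a much stronger echo than the ice/rock contrast.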

Between May 2012 and December 2015, Orosei and his team focused MARSIS on the Planum Australe (southern plain) region of Mars. As they analysed the data, they noticed an area with a radar profile similar to those of the lakes of liquid water we know exist beneath ice sheets here on Earth. After ruling out any other possibilities, the researchers concluded that a lake of liquid water is the only explanation for the data.

FOLLOW THE WATER. This isn’t the first discovery of water on Mars — we already knew the planet was home to ice as well as small amounts of water vapor. We’ve even found evidence of liquid water on the Red Planet, but this is the first time anyone has actually detected it.

There’s a reason NASA’s mantra toward studying Mars is “follow the water.” Not only is water important in helping us understand the climate of Mars, past and present, any deposits — liquid or otherwise — could play a role in our plans for Martian colonization.

Perhaps most exciting of all, though, is what this discovery of a Martian lake could mean in our search for extraterrestrial life. Almost anywhere on Earth that there’s water, there’s also life. If the same holds true on Mars, this discovery may mean the probability of the planet hosting some sort of life has just increased dramatically.

Source: Futurism

25 Jul 2018
Manahel Thabet

DARPA Is Funding Research Into AI That Can Explain What It’s “Thinking”

LOOKING AHEAD. Researchers will hold the next wave of artificial intelligences (AI) to the same standard as high school math students everywhere: no credit if you don’t show your work.

On Friday, Defense Advanced Research Projects Agency (DARPA), a Department of Defense (DoD) agency focused on breakthrough technologies, announced its Artificial Intelligence Exploration (AIE) program. This program will streamline the agency’s process for funding AI research and development with a focus on third wave AI technologies — the kinds that can understand and explain how they arrived at an answer.

NEXT-LEVEL AI. Most of the AI in use today falls under the category of first wave. These AIs follow clear, logical rules (think: chess-playing AIs). Second wave AIs are the kind that use statistical learning to arrive at an answer for a certain type of problem (think: image recognition systems).

A system in the third wave of AI will not only be able to do what a second wave system can do (for example, correctly identify a picture of a dog), it’ll be able to explain why it decided the image is of a dog. For example, it might note that the animal’s four legs, tail, and spots align with its understanding of what a dog should look like.
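The gap between "an answer" and "an explained answer" can be made concrete with a toy classifier that reports which of its criteria the input satisfied. This is an invented illustration of the idea, not DARPA's design; real third-wave systems would learn such criteria rather than having them hand-written.

```python
# Hand-written criteria standing in for what a third-wave system
# would learn as its "understanding" of what a dog looks like.
DOG_CRITERIA = {
    "four legs": lambda a: a.get("legs") == 4,
    "has a tail": lambda a: a.get("tail", False),
    "has spots": lambda a: a.get("spots", False),
}

def classify_with_explanation(attributes, threshold=2):
    """Return a label plus the list of criteria that supported it."""
    matched = [name for name, test in DOG_CRITERIA.items() if test(attributes)]
    label = "dog" if len(matched) >= threshold else "not a dog"
    return label, matched
```

A second-wave system returns only the label; here, the `matched` list is the "shown work" that lets a human audit why the system decided as it did.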

In other words, it’ll be able to do more than give the right answer — it’ll be able to show us how it got there.

As John Launchbury, the Director of DARPA’s Information Innovation Office (I2O), noted in a DARPA video, these systems should be able to learn from far smaller datasets than second wave systems. For example, instead of feeding an AI 100,000 meticulously labelled images to teach it how to recognize handwriting, we might be able to show it just one or two generative examples — examples that show how to form each letter. It can then use that contextual information to identify handwriting in the future.

These third wave systems would “think” rather than just churn out answers based on whatever datasets they’re fed (and as we’ve seen in the past, these datasets can include the biases of their creators). Ultimately, this is the next step toward creating AIs that can reason and engage in abstract thought, which could improve how both the military and everyone else makes use of AI.

FUNDING THE THIRD WAVE. Here’s how DARPA’s AIE program hopes to speed up the approach of this third wave AI. First, the agency will periodically announce a notice it’s calling an “AIE Opportunity.” This notice will highlight an area of third wave AI research of particular interest to the military.

Researchers can then submit proposals for projects to DARPA, which will review them and potentially choose to award the researcher with up to $1 million in funding. The goal is to have researchers get started on projects within 90 days of the AIE Opportunity announcement and determine whether a concept is feasible within 18 months.

AI ON THE BRAIN. This is just the latest example of the U.S. military’s growing interest in AI. Recent projects include everything from AIs that analyze footage to improve drone strikes to systems that function like the human brain. Just last month, the DoD launched the Joint Artificial Intelligence Center (JAIC), a center designed to help the department integrate AI into both its business and military practices.

Both that center and the AIE program put a premium on speed, a wise move for the DoD given that nations all across the globe are racing to be the world leader in military AI.

Source: Futurism

24 Jul 2018
Manahel Thabet

A New Connection Between Smell and Memory Identified

Summary: A new study reveals how smells we encounter throughout life are encoded in memory. The findings could help develop new smell tests for Alzheimer’s disease.

Source: University of Toronto.

Neurobiologists at the University of Toronto have identified a mechanism that allows the brain to recreate vivid sensory experiences from memory, shedding light on how sensory-rich memories are created and stored in our brains.

Using smell as a model, the findings offer a novel perspective on how the senses are represented in memory, and could explain why the loss of the ability to smell has become recognized as an early symptom of Alzheimer’s disease.

“Our findings demonstrate for the first time how smells we’ve encountered in our lives are recreated in memory,” said Afif Aqrabawi, a PhD candidate in the Department of Cell & Systems Biology in the Faculty of Arts & Science at U of T, and lead author of a study published this month in Nature Communications.

“In other words, we’ve discovered how you are able to remember the smell of your grandma’s apple pie when walking into her kitchen.”

There is a strong connection between memory and olfaction – the process of smelling and recognizing odours – owing to their common evolutionary history. Examining this connection in mice, Aqrabawi and graduate supervisor Professor Junchul Kim in the Department of Psychology at U of T found that information about space and time integrates within a region of the brain that is important for the sense of smell – yet poorly understood – known as the anterior olfactory nucleus (AON).

Continue Reading.



23 Jul 2018
Manahel Thabet

How the Brain Reacts to Food May Be Linked to Overeating

Summary: A new study reports when certain brain areas react more strongly to food rewards than financial rewards, children are more likely to overeat, even if they are not hungry or overweight.

Source: Penn State.

The reason why some people find it so hard to resist finishing an entire bag of chips or bowl of candy may lie with how their brain responds to food rewards, leaving them more vulnerable to overeating.

In a study with children, researchers found that when certain regions of the brain reacted more strongly to being rewarded with food than with money, those children were more likely to overeat, even when they weren’t hungry and regardless of whether they were overweight.

Shana Adise, a postdoctoral fellow at the University of Vermont who led the study while earning her doctorate at Penn State, said the results give insight into why some people may be more prone to overeating than others. The findings may also give clues on how to help prevent obesity at a younger age.

“If we can learn more about how the brain responds to food and how that relates to what you eat, maybe we can learn how to change those responses and behavior,” Adise said. “This also makes children an interesting population to work with, because if we can stop overeating and obesity at an earlier age, that could be really beneficial.”

Previous research on how the brain’s response to food can contribute to overeating has been mixed. Some studies have linked overeating with brains that are more sensitive to food rewards, while others have found that being less sensitive to receiving food rewards makes you more likely to overeat.

Additionally, other studies have shown that people who are willing to work harder for food than other types of rewards, like money, are more likely to overeat and gain weight over time. But the current study is the first to show that children who have greater brain responses to food compared to money rewards are more likely to overeat when appealing foods are available.

“We know very little about the mechanisms that contribute to overeating,” Adise said. “The scientific community has developed theories that may explain overeating, but whether or not they actually relate to food intake hadn’t yet been evaluated. So we wanted to go into the lab and test whether a greater brain response to anticipating and winning food, compared to money, was related to overeating.”

For the study, 59 children between the ages of 7 and 11 years old made four visits to Penn State’s Children’s Eating Behavior Laboratory.

During the first three visits, the children were given meals designed to measure how they eat in a variety of different situations, such as a typical meal when they’re hungry versus snacks when they’re not hungry. How much the children ate at each meal was determined by weighing the plates before and after the meals.
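The intake measure described above is simple arithmetic: food eaten is the plate's weight before the meal minus its weight after. A minimal sketch, with function name and sample weights invented for illustration:

```python
def intake_grams(before_g, after_g):
    """Grams of food consumed, from plate weight before and after a meal."""
    if after_g > before_g:
        # A plate cannot gain weight during a meal; flag a weighing error.
        raise ValueError("post-meal weight exceeds pre-meal weight")
    return before_g - after_g

# Example with made-up weights: a 350 g plate returned weighing 212.5 g.
meal_intake = intake_grams(before_g=350.0, after_g=212.5)  # 137.5 g eaten
```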

On their fourth visit, the children had fMRI scans as they played several rounds of a game in which they guessed if a computer-generated number would be higher or lower than five. They were then told that if they were right, they would win either money, candy or a book, before it was revealed if they were correct or not.

The researchers found that when various regions of the brain reacted more to anticipating or winning food compared to money, those children were more likely to overeat.

“We also found that the brain’s response to food compared to money was related to overeating regardless of how much the child weighed,” Adise said. “Specifically, we saw that increased brain responses in areas of the brain related to cognitive control and self-control when the children received food compared to money were associated with overeating.”


Adise added that this is important because it suggests there may be a way to identify brain responses that can predict the development of obesity in the future.

Kathleen Keller, associate professor of nutritional sciences, Penn State, said the study — recently published in the journal Appetite — backs up the theory that an increased brain response in regions of the brain related to rewards is associated with eating more food in a variety of situations.

“We predicted that kids who had an increased response to food relative to money would be the ones to overeat, and that’s what we ended up seeing,” Keller said. “We specifically wanted to look at kids whose brains responded to one type of a reward over another. So it wasn’t that they’re overly sensitive to all rewards, but that they’re highly sensitive to food rewards.”

Keller said the findings give insight into how the brain influences eating, which is important because it could help identify children who are at risk for obesity or other poor eating habits before those habits actually develop.

“Until we know the root cause of overeating and other food-related behaviors, it’s hard to give good advice on fixing those behaviors,” Keller said. “Once patterns take over and you overeat for a long time, it becomes more difficult to break those habits. Ideally, we’d like to prevent them from becoming habits in the first place.”

Source: NeuroScience

22 Jul 2018
Manahel Thabet

Rolls-Royce Is Building Cockroach-Like Robots to Fix Plane Engines

DEBUGGING. Typically, engineers want to get bugs out of their creations. Not so for the U.K. engineering firm Rolls-Royce (not the famed carmaker) — it’s looking for a way to get bugs into the aircraft engines it builds.

These “bugs” aren’t software glitches or even actual insects. They’re tiny robots modeled after the cockroach. On Tuesday, Rolls-Royce shared the latest developments in its research into cockroach-like robots at the Farnborough International Airshow.

ROBO-MECHANICS. Rolls-Royce believes these tiny insect-inspired robots will save engineers time by serving as their eyes and hands within the tight confines of an airplane’s engine. According to a report by The Next Web, the company plans to mount a camera on each bot so engineers can see what’s going on inside an engine without having to take it apart. Rolls-Royce thinks it could even train its cockroach-like robots to complete repairs.

“They could go off scuttling around reaching all different parts of the combustion chamber,” Rolls-Royce technology specialist James Kell said at the airshow, according to CNBC. “If we did it conventionally it would take us five hours; with these little robots, who knows, it might take five minutes.”

NOW MAKE IT SMALLER. Rolls-Royce has already created prototypes of the little bot with help from robotics experts at Harvard University and the University of Nottingham, but the prototypes are still too large for the company’s intended use. The goal is to scale the roach-like robots down to about half an inch tall and just a few ounces in weight, which a Rolls-Royce representative told TNW should be possible within the next couple of years.

Source: Futurism

21 Jul 2018
Manahel Thabet

Origami-Inspired Device Catches Fragile Sea Creatures Without Harming Them

OOPS! SORRY, SQUID. When researchers want to study fish and crustaceans, it’s pretty easy to collect a sample — tow a net behind a boat and you’re bound to capture a few. But collecting delicate deep-sea organisms such as squid and jellyfish isn’t so simple — the nets can literally shred the creatures’ bodies.

Now, Zhi Ern Teoh, a mechanical engineer at the Harvard Microrobotics Laboratory, and his colleagues have developed a better way for scientists to collect these elusive organisms. They published their research in Science Robotics on Wednesday.

Image Credit: Wyss Institute at Harvard University

THE OLD WAY. Currently, researchers who want to collect delicate marine life have two ways to do it (that aren’t nets). The first is a detritus sampler, a tube-shaped device with round “doors” on either end. To capture a creature with this device, the operator must slide open the doors, manually position the tube over the creature, then quickly shut the doors before the creature escapes. According to the researchers’ paper, this positioning requires the operator to have a bit of skill. The second type of device is one that uses suction to pull a specimen through a tube into a storage bucket. This process can destroy delicate creatures.

THE BETTER WAY. To create a device that is both easy to use and unlikely to harm a specimen, Teoh looked to origami, the Japanese art of paper folding. He came up with a device with a body made of 3D-printed photopolymer and modeled after a dodecahedron, a shape with 12 identical flat faces. If you’ve ever played a board game with a 12-sided die, you’re already familiar with the shape of the device in the closed position; when open, it looks somewhat like a flat starfish.

Despite its many joints, Teoh’s device can open or close in a single motion using the power of just one rotational actuator, a motor that converts energy into a rotational force. Once the open device is positioned right behind a specimen, the operator simply triggers the actuator, and the 12 faces of the dodecahedron fold around the creature and the water it’s in, though it doesn’t seal tightly enough to carry that water above the surface. The device was tested at depths of up to 700 meters, but the researchers write that it’s designed to withstand the pressure of “full ocean depth” (11 kilometers, or about 6.8 miles).

Eventually, the researchers hope to update the device to include built-in cameras and touch sensors. They also think it has the potential to be useful for space missions, helping with off-world construction, but for now, they’re focused on collecting marine life without destroying it in the process.

Source: Futurism