Trigo Vision, an AI startup founded by former members of the Israeli army’s special forces and intelligence community, just came out of stealth mode with a target in its sights: Amazon’s Go store.
When Amazon opened its cashier-less store in Seattle earlier this year, it was hailed as the future of brick-and-mortar shops. But, so far, there’s just the one. And even if the company follows through on plans to open half a dozen more, the locations will be little more than a novelty.
There are more than 150,000 convenience stores and 40,000 grocery markets in the US alone. Amazon may have enough money to build sophisticated storefronts full of expensive hardware from the ground up with no regard for profits – the marketing alone is worth it for the juggernaut worth $900 billion – but most other retail chains don’t. And mom-and-pop stores definitely can’t afford millions in R&D and installation costs. Luckily, they don’t have to.
Trigo Vision’s AI is designed to augment existing retail spaces. Through the use of regular cameras and advanced proprietary algorithms it’s able to accurately track shoppers and products in a way that facilitates cashier-free shopping experiences that, theoretically, could work in almost any retail environment.
The idea is to create a system that shopkeepers can integrate into their stores with minimal fuss. With Trigo Vision’s system, there’s no need to rearrange products, spend days entering codes, or install sensors all over the place.
The benefits for shoppers go beyond the cool factor of skipping the long process of individually scanning each item to determine your total before paying. Depending on the retailer, customers will either confirm their purchases at kiosks positioned by exits, or simply log in with an app when they walk in, grab what they want, and then walk right out the front door letting the computers handle payment.
But, Trigo Vision is still perfecting its mix. Earlier this week it came out of stealth after successfully securing over $7 million during its first round of financing. Despite being a relative newcomer to the scene, however, its AI is already showing promise in tests with multiple clients.
We spoke with CEO Michael Gabay and COO Jenya Beilin who told us they’ve been developing the idea for years, even before Amazon launched its cashier-less Go store. Gabay told TNW the idea started as a smart shopping cart, but the team quickly realized it wasn’t the best solution.
It was too expensive, it broke a lot, people steal them, it cost retailers a lot of money. They have to replace batteries and do maintenance. That’s when we got the idea to put cameras overhead.
Trigo Vision’s team knew that, in order to make a product that appealed to most retailers, they’d have to come up with a solution that didn’t involve changing the existing infrastructure or requiring retailers to hire computer specialists.
Beilin told TNW that one of the biggest challenges was eliminating the bottlenecks that arise when implementing these kinds of systems. He says their AI doesn’t require store owners to manually input each product into a system, but instead it learns inventory as it goes. According to him:
For an 1,800-square-foot [store], it takes a matter of weeks. Not days, yet, but also not months.
And, aside from the convenience of tracking purchases in real time, the cameras can also provide retailers with insights they’ve never had access to before. Beilin told us the AI gives retailers a bevy of data, but also respects consumer privacy:
Retailers get real time data of basically anything you can imagine. From shopping trends to the store’s inventory. But, we are dedicated to privacy, and we’re GDPR compliant. The privacy of our customers, and theirs, is very important to us. Our data is not personalized.
The team hopes to have a full-fledged product available for retailers by the end of the year, though at this point they aren’t putting a hard date on when it will officially roll out.
Leading hardware crypto wallet manufacturer and developer Ledger sold more than 1 million hardware wallets in 2017, recording a profit of $29 million.
In an interview with Forbes, Ledger president Pascal Gauthier stated that the lack of secure platforms which users can utilize to sign transactions on the immutable public blockchain led demand for Ledger, and for hardware wallets in general, to rise.
“Blockchain itself is secure, but signing on the blockchain is a flaw. If you lose the [private key], there’s no bank looking after your assets or any way to recover them,” Gauthier told Forbes.
Eyeing Another Multi-Million Dollar Funding Round
In early 2018, Ledger raised $75 million led by billionaire early stage technology investor Tim Draper and Draper Venture Network funds. The Series B funding round of Ledger was a significant boost from its previous Series A funding round that closed a $7 million investment.
After recording a profit last year with impressive financial results, Ledger is set to raise yet another multi-million-dollar funding round this year. Already, some of the largest technology conglomerates in the world, such as Samsung, Siemens, and Google’s ventures arm, have reviewed Ledger’s financials with an interest in investing in the company, Forbes revealed.
The previous Series B funding round was mainly utilized to improve Ledger’s infrastructure and to create products targeted at retail traders and individual investors. In the future, Gauthier said, the company will focus on delivering products for large-scale institutional investors, whose entrance into the cryptocurrency sector is considered imminent, primarily due to the debut of Coinbase Custody.
Currently, the majority of large-scale investors holding bitcoin use the vault systems of Xapo and Coinbase to store their funds. But asking a vault to hold massive amounts of bitcoin still requires investors to trust the vault’s operators.
The vision of Ledger in the long-term is to enable an ecosystem that will allow even large institutions to hold cryptocurrencies like bitcoin, ether, and tokens without relying on third party service providers.
“You need a Ledger Nano S solution for institutions, products that are made for big and small financial institutions,” Gauthier explained, emphasizing that clients are “queueing” outside the office of Ledger in France to buy Ledger Vault.
Increasing Demand for Hardware Wallets is Positive
In 2018, the cryptocurrency sector has seen a string of major hacking attacks, suffered by Coincheck, Bithumb, and Coinrail, large crypto exchanges in the Japanese and South Korean markets.
Coincheck, for instance, suffered a $500 million hack due to a simple lapse in the company’s decision making: it stored all of its NEM funds in a hot wallet.
The fundamental purpose of cryptocurrencies as decentralized peer-to-peer financial networks is to enable anyone on a public blockchain to send and receive information securely, in a trustless manner. Rising demand for hardware wallets like the Ledger Nano S and Ledger Vault demonstrates cryptocurrency investors’ and users’ increasing awareness of the importance of security and non-custodial platforms.
While cryptocurrencies remain vulnerable to a number of cyber security attacks, the underlying blockchain technology is being used to protect user data from being modified.
“We believe that blockchain technology will be transformative in the tech and IT sector in the coming years, similar to what the internet did for the world back in the 90s and early 2000s,” said John Zanni, President of the Acronis Foundation. “We started a few years ago working with the Ethereum blockchain to see how to better protect data. Today, part of our storage and backup software lets users notarize any digital data and put that fingerprint on the blockchain to ensure it can’t be tampered with.”
As the physical world meets the digital world, data has become a key player for a number of businesses. Yet ensuring that data remains safe, secure, private and authentic has become an ongoing challenge.
For example, the recent Equifax cyber-security breach that occurred in September 2017 compromised sensitive information belonging to nearly half the U.S. population. Cybercriminals accessed the personal data of approximately 145.5 million U.S. Equifax consumers. Equifax has also confirmed that at least 209,000 consumers’ credit card credentials were taken in the attack.
Moreover, one of the biggest challenges a business faces today in terms of cybersecurity is “data tampering,” which is the threat of data being altered in unauthorized ways, either accidentally or intentionally.
“Authenticity of data is actually one of the most important factors when it comes to cyber protection. Data can always be changed and modified,” Serguei Beloussov, CEO and founder of Acronis, told me. “Blockchain technology can be used so that data can be signed with a digital signature. That digital signature, called a hash, can then be stored on either a public or private blockchain ledger, which is highly immutable, making it possible to check whether data was modified at any given time.”
How Blockchain Technology Protects User Data
While blockchain technology is most commonly defined as a decentralized, distributed ledger used to record transactions across multiple computers, it can also be seen as a distributed database that maintains a growing list (also known as a chain) of data transaction records.
Consider that every participant of this decentralized system has a copy of the list of transactions. This means that no single “official” copy exists. The distributed nature of the chain prevents tampering and revisions, as every action on the blockchain is fully transparent. In turn, data stored on a blockchain can easily remain authentic, since every transaction is recorded and visible to all participants.
For example, in order to ensure that data isn’t tampered with, Acronis applies blockchain technology to compute a cryptographic hash, or “fingerprint,” that is unique for each data file it stores. The hash is produced by an algorithm that yields the same output when given the exact same input file, making it useful for verifying the file’s authenticity. Any change in the input file, however slight, results in a dramatically different fingerprint. Because the hash algorithm works only in one direction, it is impossible to recover the original file from the output alone, making the process tamper-evident.
“Let’s imagine that we have a piece of data and we create a unique description for this data. Even if we modify only a single bit of that data and generate the signature again, the created hash value will be completely different. So such a hash value is effectively a unique signature of your data,” explained Beloussov. “With blockchain, you store that hash in multiple places, so you have multiple journals where you have written your signature in a specially encrypted fashion. If someone wanted to modify such a record, they would have to get all of those places to agree to the modification. Even after that, one would need to spend a lot of compute capacity to un-encrypt it from all of the journals, and encrypt it back. Even if this could theoretically be done, it would be extremely expensive to compute, as well as complicated.”
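The avalanche property Beloussov describes is easy to demonstrate. Here is a minimal Python sketch (illustrative only, not Acronis’s actual implementation; the `fingerprint` helper is a name invented for this example) showing that identical inputs always hash to the same digest, while a one-character change produces a completely different one:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # SHA-256 maps any input to a fixed-length digest (64 hex characters)
    return hashlib.sha256(data).hexdigest()

original = b"backup archive contents v1"
tampered = b"backup archive contents v2"  # a single character changed

h1 = fingerprint(original)
h2 = fingerprint(tampered)

# The same input always yields the same digest, so a file can be re-hashed
# later and compared against the stored fingerprint; any edit, however
# small, changes the digest entirely.
```

Storing such a digest on a blockchain (in “multiple journals,” as Beloussov puts it) means the file can later be re-hashed and compared; a mismatch reveals tampering.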
Dr. Abdalla Kablan, a renowned fintech expert and an advisor on blockchain and AI to the Government of Malta, further explains why data stored on the blockchain is immutable.
“Typically, once data is stored on the blockchain it cannot be manipulated or changed – it is immutable. This is because of the architectural nature of blockchain structures, where every block has a specific summary of the previous block in the form of a secure hash value. Since these blocks are structured in the form of a ‘chain’ sequence, the timing, order and content of transactions cannot be manipulated. Also, these blocks cannot be replaced unless all the ‘nodes’ achieve consensus or agree with the proposed change,” Dr. Kablan told me.
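Dr. Kablan’s point about every block summarizing its predecessor can be sketched in a few lines of Python (a toy model for illustration, not any production blockchain; field names like `prev_hash` are invented here):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON form so the digest is deterministic
    return hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    # Each block carries its payload plus the hash of the previous block
    return {"data": data, "prev_hash": prev_hash}

genesis = make_block("genesis", "0" * 64)
b1 = make_block("record A", block_hash(genesis))
b2 = make_block("record B", block_hash(b1))
chain = [genesis, b1, b2]

def chain_is_valid(chain: list) -> bool:
    # Every block must reference the hash of the block before it;
    # altering any earlier block breaks every later link.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )
```

Changing the content of any earlier block invalidates every downstream reference, which is why a tamperer would need all the nodes to accept a rewritten chain.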
Blockchain In The Real World
When applying blockchain technology for protecting against data tampering in the real world, common use cases often involve protecting transaction logs, proving the existence of legal documents and even confirming creative works originated on a certain date.
For example, a use case might involve a musician who is skeptical about publishing music on the internet due to plagiarism and other security concerns. Using blockchain technology, however, allows the artist to create a backup that contains pieces of digital music or other materials that can be copyrighted. Once the backup is complete, a certificate with cryptographic evidence is issued to help support a copyright claim in case of infringement. The original pieces and their creation dates are recorded in the blockchain, allowing for confirmation that a piece of music existed at a certain time in the past and was authored by that artist.
“In general, blockchain technology today focuses mainly on cryptocurrency and fintech. Yet the world needs to look beyond that to see how businesses and individuals can take advantage of this technology. The day someone figures out how small businesses can apply blockchain technology in a multitude of applications is the day it will really flourish,” said Zanni.
Picture the world of tomorrow, like, far-future tomorrow. What does it look like? Is it the same setting featured in your favorite sci-fi film – complete with flying cars, skyscrapers that reach the outer atmosphere, and an abundance of other high-tech features? What about a shorter timeline? Are Elon Musk’s hyperloop and Dubai’s shape-shifting skyscraper there? Seemingly far-off ideas of what a city can be are materializing. And it’s all happening at an unprecedented pace.
As technologies advance, and researchers have those eureka moments, urban planners are adapting their plans to include each new innovation. Traditionally, though, architects, artists, designers, and engineers haven’t always been on target with their visions. Today, so many iterations dreamt up by previous generations seem almost laughable. Which puts even more pressure on those leading the redesigns.
In many ways, designing future cities is no longer just about the most inventive architecture — these environments demand faster and accessible transportation, urban planning that encourages healthy lifestyles, and technologies that conserve resources – all wrapped up in aesthetically-pleasing designs. These aspects need to have the power to withstand the test of time.
That’s a lot of pressure. We have the tools and resources available to put together our future cities, but how do we know the best way to do it? And if we don’t have the perfect roadmap for building our best metropolises, then what knowledge are we using when our cities unveil plans for massive infrastructure overhauls?
To answer these questions, Futurism is partnering with The Museum of Modern Art (MoMA) and Allianz to bring together some of the world’s leading urban futurists for The Future of Cities panel on July 12 at MoMA PS1. These experts will delve into the logistics and rationale behind the future-focused redesigns we see happening in cities today, and what projects we’re likely to see down the line. The panel, moderated by Futurism’s CEO Alex Klokus, is a diverse mix of artists and scientists who lead their respective fields in city design including:
Beth Simone Noveck, Professor of Technology, Culture, and Society at NYU’s Tandon School of Engineering, is also a director of the Governance Lab – a research group focusing on technology’s role in government. For years, Noveck has been imagining and implementing new methods for community members to take larger roles in their government – expertise that earned her a position as the first United States Deputy Chief Technology Officer, and as the director of the White House Open Government Initiative from 2009 to 2011.
Paola Antonelli, the Senior Curator in MoMA’s Department of Architecture & Design, and the museum’s founding Director of Research & Development – a department leading the innovative institution into the future of museum curation. Considered one of the world’s leading experts on contemporary architecture and design, Antonelli argues that design should be treated as art, and has curated several forward-looking exhibitions on concepts (such as our coming work environments) that support this point.
James Ramsey, founder of Raad Studio, an award-winning design firm based in Manhattan that focuses on pushing design boundaries while staying functional. Before launching Raad, Ramsey worked for NASA on the Cassini satellite. He also created the Lowline, the world’s first underground park. The installation is miraculously lush with plant life, thanks to a fiber-optic technology Ramsey invented.
Rarely do experts from such wide-ranging backgrounds come together in one room to not only share where they see urban development going, but to develop new insights into their own work. Join us Thursday, July 12 for a glimpse into what these futurists have in mind for our coming cities, followed by a chance to mingle with other forward-looking attendees.
Not in New York? No problem. The lecture will also be live streamed.
SAYING GOODBYE. At 5:42 am EDT Friday, SpaceX launched a Falcon 9 rocket from Florida’s Cape Canaveral Air Force Station. Everything went according to plan; the only things that blew up were the ones that should blow up. At the time of writing, the rocket is helping its payload — the Dragon capsule — enter low Earth orbit. By Monday, it should rendezvous with the International Space Station (ISS), dropping off 5,900 pounds’ worth of supplies and gear in what’s now become a somewhat-routine mission for SpaceX.
While the objective itself isn’t necessarily anything out of the ordinary, today’s launch was special for SpaceX: it marked the last time the company would launch a Falcon 9 Block 4.
A BRIEF HISTORY OF THE FALCON 9. The Falcon 9 is SpaceX’s workhorse rocket, completing 55 successful missions since its initial launch in 2010. In December 2015, the launch of the Falcon 9 v1.2, also known as the Falcon 9 Full Thrust, kicked off the era of the reusable rocket.
SpaceX started launching updated versions of the first Falcon 9 Full Thrust, known as Block 3, in 2017, eventually landing on its Block 4 design. SpaceX had already announced in 2016 that it had plans for a Block 5 version, and the company successfully launched the first Falcon 9 Block 5 on May 11 this year.
A BETTER BLOCK. In a briefing with reporters just prior to the first Falcon 9 Block 5 launch, SpaceX CEO Elon Musk said it would be the final version of the Falcon 9. He also described the various ways it was an improvement over its predecessors.
Block 5 is about twice as powerful as the Falcon 9 that first launched in 2010, according to Musk. It is also far more reusable than the Block 3 and Block 4 versions. SpaceX has never launched a Falcon 9 rocket more than twice (though Musk estimates a Falcon 9 Block 4 could probably launch upwards of 10 times with enough maintenance).
In contrast, a Block 5 should be able to launch 10 times with no refurbishment — just like an airplane. With some refurbishment every 10 or so launches, Musk estimates the rocket could handle 100 flights. The potential cost savings for a rocket that reusable are staggering.
LOOKING AHEAD. Now that SpaceX is officially retiring Block 4, it can focus on completing the seven successful launches Block 5 needs for NASA to allow it to carry people. SpaceX’s goal is to knock those out by December so it can launch its first crewed Dragon capsule before the end of the year. So, while today’s launch marked an end to one era of SpaceX rocketry, it opens the door for us to enter the next.
The Terminator was written to frighten us; WALL-E was written to make us cry. Robots can’t do the terrifying or heartbreaking things we see in movies, but still the question lingers: What if they could?
Granted, the technology we have today isn’t anywhere near sophisticated enough to do any of that. But people keep asking. At the heart of those discussions lies the question: can machines become conscious? Could they even develop — or be programmed to contain — a soul? At the very least, could an algorithm contain something resembling a soul?
The answers to these questions depend entirely on how you define these things. So far, we haven’t found satisfactory definitions in the 70 years since artificial intelligence first emerged as an academic pursuit.
Take, for example, an article recently published by the BBC, which tried to grapple with the idea of artificial intelligence with a soul. The authors defined what it means to have an immortal soul in a way that steered the conversation almost immediately away from the realm of theology. That is, of course, just fine, since it seems unlikely that an old robed man in the sky reached down to breathe life into Cortana. But it doesn’t answer the central question — could artificial intelligence ever be more than a mindless tool?
That BBC article set out the terms: whether an AI system acts as though it has a soul is determined by the beholder. For the religious and spiritual among us, a sufficiently advanced algorithm may seem to possess a soul. Those people may treat it as such, since they will view the AI system’s intelligence, emotional expression, behavior, and perhaps even a belief in a god as signs of an internal something that could be defined as a soul.
As a result, machines containing some sort of artificial intelligence could simultaneously be seen as an entity or a research tool, depending on who you ask. Like with so many things, much of the debate over what would make a machine conscious comes down to what of ourselves we project onto the algorithms.
“I’m less interested in programming computers than in nurturing little proto-entities,” Nancy Fulda, a computer scientist at Brigham Young University, told Futurism. “It’s the discovery of patterns, the emergence of unique behaviors, that first drew me to computer science. And it’s the reason I’m still here.”
Fulda has trained AI algorithms to understand contextual language and is working to build a robotic theory of mind, a version of the principle in human (and some animal) psychology that lets us recognize others as beings with their own thoughts and intentions. But, you know, for robots.
“As to whether a computer could ever harbor a divinely created soul: I wouldn’t dare to speculate,” added Fulda.
There are two main problems that need resolving. The first is one of semantics: it is very hard to define what it truly means to be conscious or sentient, or what it might mean to have a soul or soul-function, as that BBC article describes it.
The second problem is one of technological advancement. Compared to the technology that would be required to create artificial sentience — whatever it may look like or however we may choose to define it — even our most advanced engineers are still huddled in caves, rubbing sticks together to make a fire and cook some woolly mammoth steaks.
At a panel last year, biologist and engineer Christof Koch squared off with David Chalmers, a cognitive scientist, over what it means to be conscious. The conversation bounced between speculative thought experiments regarding machines and zombies (defined as those who act indistinguishably from people but lack an internal mind). It frequently veered away from things that can be conclusively proven with scientific evidence. Chalmers argued that a machine, one more advanced than we have today, could become conscious, but Koch disagreed, based on the current state of neuroscience and artificial intelligence technology.
Neuroscience literature considers consciousness a narrative constructed by our brains that incorporates our senses, how we perceive the world, and our actions. But even within that definition, neuroscientists struggle to define why we are conscious and how best to define it in terms of neural activity. And for the religious, is this consciousness the same as that which would be granted by having a soul? And this doesn’t even approach the subject of technology.
“AI people are routinely confusing soul with mind or, more specifically, with the capacity to produce complicated patterns of behavior,” Ondřej Beran, a philosopher and ethicist at University of Pardubice, told Futurism.
“AI people are routinely confusing soul with mind”
“The role that the concept of soul plays in our culture is intertwined with contexts in which we say that someone’s soul is noble or depraved,” Beran added — that is, it comes with a value judgment. “[In] my opinion what is needed is not a breakthrough in AI science or engineering, but rather a general conceptual shift. A shift in the sensitivities and the imagination with which people use their language in relating to each other.”
Beran gave the example of works of art generated by artificial intelligence. Often, these works are presented for fun. But when we call something an algorithm creates “art,” we often fail to consider whether the algorithm has merely generated some sort of image or melody or has created something that is meaningful — not just to an audience, but to itself. Of course, human-created art often fails to clear that second bar as well. “It is very unclear what it would mean at all that something has significance for an artificial intelligence,” Beran added.
So would a machine achieve sentience when it is able to internally ponder rather than mindlessly churn through inputs and outputs? Or would it truly need that internal something before we as a society consider machines to be conscious? Again, the answer is muddled by the way we choose to approach the question and the specific definitions at which we arrive.
“I believe that a soul is not something like a substance,” Vladimir Havlík, a philosopher at the Czech Academy of Sciences who has sought to define AI from an evolutionary perspective, told Futurism. “We can say that it is something like a coherent identity, which is constituted permanently during the flow of time and what represents a man,” he added.
Havlík suggested that rather than worrying about the theological aspect of a soul, we could define a soul as a sort of internal character that stands the test of time. And in that sense, he sees no reason why a machine or artificial intelligence system couldn’t develop a character — it just depends on the algorithm itself. In Havlík’s view, character emerges from consciousness, so the AI systems that develop such a character would need to be based on sufficiently advanced technology that they can make and reflect on decisions in a way that compares past outcomes with future expectations, much like how humans learn about the world.
But the question of whether we can build a souled or conscious machine only matters to those who consider such distinctions important. At its core, artificial intelligence is a tool. Even more sophisticated algorithms that may skirt the line and present as conscious entities are recreations of conscious beings, not a new species of thinking, self-aware creatures.
“My approach to AI is essentially pragmatic,” Peter Vamplew, an engineer at Federation University, told Futurism. “To me it doesn’t matter whether an AI system has real intelligence, or real emotions and empathy. All that matters is that it behaves in a manner that makes it beneficial to human society.”
“To me it doesn’t matter whether an AI system has real intelligence… All that matters is that it behaves in a manner that makes it beneficial to human society.”
To Vamplew, the question of whether a machine can have a soul or not is only meaningful when you believe in souls as a concept. He does not, so it is not. He feels that machines may someday be able to recreate convincing emotional responses and act as though they are human but sees no reason to introduce theology into the mix.
And he’s not the only one who feels true consciousness is impossible in machines. “I am very critical of the idea of artificial consciousness,” Bernardo Kastrup, a philosopher and AI researcher, told Futurism. “I think it’s nonsense. Artificial intelligence, on the other hand, is the future.”
Kastrup recently wrote an article for Scientific American in which he lays out his argument that consciousness is a fundamental aspect of the natural universe, and that people tap into dissociated fragments of consciousness to become distinct individuals. He clarified that he believes that even a general AI — the name given to the sort of all-encompassing AI that we see in science fiction — may someday come to be, but that even such an AI system could never have private, conscious inner thoughts as humans do.
“Siri, unfortunately, is ridiculous at best. And, what’s more important, we still relate to her as such,” said Beran.
Even more unfortunate, there’s a growing suspicion that our approach to developing advanced artificial intelligence could soon hit a wall. An article published last week in The New York Times cited multiple engineers who are growing increasingly skeptical that our machine learning, and even deep learning, technologies will continue to advance as they have in recent years.
I hate to be a stick in the mud. I truly do. But even if we solve the semantic debate over what it means to be conscious, to be sentient, to have a soul, we may forever lack the technology that would bring an algorithm to that point.
But when artificial intelligence first started, no one could have predicted the things it can do today. Sure, people imagined robot helpers à la the Jetsons or advanced transportation à la Epcot, but they didn’t know the tangible steps that would get us there. And today, we don’t know the tangible steps that will get us to machines that are emotionally intelligent, sensitive, thoughtful, and genuinely introspective.
By no means does that render the task impossible — we just don’t know how to get there yet. And the fact that we haven’t settled the debate over where to actually place the finish line makes it all the more difficult.
“We still have a long way to go,” says Fulda. She suggests that the answer won’t be piecing together algorithms, as we often do to solve complex problems with artificial intelligence.
“You can’t solve one piece of humanity at a time,” Fulda says. “It’s a gestalt experience.” For example, she argues that we can’t understand cognition without understanding perception and locomotion. We can’t accurately model speech without knowing how to model empathy and social awareness. Trying to put these pieces together in a machine one at a time, Fulda says, is like recreating the Mona Lisa “by dumping the right amounts of paint into a can.”
Whether or not the masterpiece is out there, waiting to be painted, remains to be determined. But if it is, researchers like Fulda are vying to be the one to brush the strokes. Technology will march onward, so long as we continue to seek answers to questions like these. But as we compose new code that will make machines do things tomorrow that we couldn’t imagine yesterday, we still need to sort out where we want it all to lead.
Will we be da Vinci, painting a self-amused woman who will be admired for centuries, or will we be Uranus, creating gods who will overthrow us? Right now, AI will do exactly what we tell AI to do, for better or worse. But if we move towards algorithms that begin to, at the very least, present as sentient, we must figure out what that means.
Just about anyone can be a biohacker now. Gene-editing technology is cheaper and simpler than ever, and the Department of Defense (DoD) is paying close attention. After all, scientists have already used the tech to build a strain of horsepox, a virus not too genetically distant from smallpox, from scratch just for the fun of it. It may only be a matter of time before synthetic biology, in the form of weaponized pathogens, finds its way into a military’s or terrorist’s arsenal.
In order to stay prepared, the DoD commissioned the National Academy of Sciences to release a comprehensive report about the state of American biodefense. As the report suggests, it’s not great.
In the report, a team of scientists ranked the potential threats from gene editing and other bioengineering techniques, and examined what could be done to protect against them, taking into account how advances in bioengineering would make it more difficult to combat these threats. As MIT Technology Review reported, the DoD is most concerned that people might recreate known infectious viruses or enhance bacteria to become even more dangerous.
Many of the DoD’s official recommendations, such as investing in public health and advanced vaccines, run directly contrary to the priorities set by those currently running the federal government. Right now, the government pours money into defense while slashing public health infrastructure, even though investing in the latter would do a whole lot more to improve national security.
The portion of the report dedicated to “mitigating concerns” that synthetic bioweapons may be used against people in or out of the United States focuses on three key areas: deterring attacks, recognizing them, and minimizing the damage done after the fact.
The solution that stood out the most is also perhaps the least concrete: invest in and develop a robust public health system that could identify, prevent or counter, and respond to an epidemic or the use of a bioweapon. Because, as The Atlantic demonstrated at length, America is absolutely not prepared for a major outbreak or epidemic of any sort — the country’s healthcare system can barely handle a rough flu season. Preparing treatments for a major viral or bacterial outbreak like the flu or Ebola relies on a large and fragile international supply chain, and hospitals are mostly independent entities that may balk at the cost of preparing for such an epidemic. In short, not the best tactic.
With a solid, well-funded public health system in place, the DoD and National Academy of Sciences researchers argue, the country will be more resilient to an attack, even if there’s no immediate cure for whatever bioweapon its citizens may be struck with. Even though the U.S. leads the world in public health via funding to the Centers for Disease Control and Prevention, the Atlantic article argues, funding and readiness initiatives still mostly react to health problems that come up instead of preparing for them before they do. It’s hard to keep politicians interested in staying prepared when there’s no immediate threat.
Even so, a better public health system could help catch such an attack before it spreads out of control — if a doctor notes a bizarre case or symptom, then a strong national network would be able to flag that case and perhaps even identify a patient zero, creating treatments more quickly and quarantining that person to prevent others from getting sick.
This type of investment would improve many parts of American life, especially for the most vulnerable and underserved among us. If “national security” is the angle that makes it happen then, well, fine.
In the report, the DoD also called for increased research into vaccines, perhaps some that can prevent broad swaths of infections, and improved programs to help get them to people. No one, the new report argues, would attack a population that’s already impervious to the weapon. Once more, what should perhaps be an obvious solution seems groundbreaking in a country in which the dangerous, repeatedly-debunked rhetoric of the anti-vaxxer movement is going strong and supported by the president.
The report also listed some areas of concern made more pressing due to cheap, democratized gene editing and biohacking tools. For one, small DNA sequences under 200 base pairs long aren’t screened by DNA synthesis companies for potential misuse. Normally, these companies would check genetic sequences against those known to be derived from pathogens and those on the Federal Select Agent Program Select Agents and Toxins List maintained by the CDC. These short sequences can be purchased by researchers or hobbyist gene editors. If they were so inclined, they could build a pathogen from scratch or make one that already exists more dangerous.
In the past, these short strands haven’t been screened because there was no need. Anyone trying to create a dangerous virus or bacteria would need a huge number of these strands, which would, presumably, tip someone off. But still, they could feasibly gather them without raising any alarm, as long as they don’t purchase anything longer than 200 base pairs at a time and no one looks too closely.
The report suggests creating a separate screening process for short strands of DNA — by analyzing the likelihood that a particular genetic sequence might be used in a weapon and going from there, rather than catching well-meaning researchers in the crossfire. While no such algorithm exists, the authors suggest that a well-trained machine learning system would be well-suited to the task.
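To make the idea concrete, here is a minimal sketch of what such a screening heuristic could look like. This is purely illustrative — as the report notes, no such algorithm exists yet, and the pathogen fragments and threshold below are invented for the example: score a short DNA order by how heavily its subsequences overlap with known pathogen fragments.

```python
# Illustrative sketch only: a toy risk score for short DNA orders that
# flags sequences sharing many k-mers (length-k substrings) with known
# pathogen fragments. The "pathogen" fragments here are made up; a real
# system would use curated databases and a trained model.

def kmers(seq, k=8):
    """Return the set of all length-k substrings of a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def risk_score(order, pathogen_fragments, k=8):
    """Fraction of the order's k-mers found in any known pathogen
    fragment (0.0 = no overlap, 1.0 = complete overlap)."""
    flagged = set()
    for frag in pathogen_fragments:
        flagged |= kmers(frag, k)
    order_kmers = kmers(order, k)
    if not order_kmers:
        return 0.0
    return len(order_kmers & flagged) / len(order_kmers)

# Hypothetical reference fragments and a short (sub-200-base-pair) order:
pathogen_db = ["ATGCGTACGTTAGCCGATAA", "TTGACCGGTTAACCGGTAGC"]
order = "ATGCGTACGTTAGCCGATAACGT"  # mostly copied from the first fragment
print(risk_score(order, pathogen_db) > 0.5)  # heavy overlap: flag for review
```

A real screen would be far more sophisticated — this is where the report’s suggested machine learning system comes in, estimating how likely a sequence is to contribute to a weapon rather than matching substrings — but the basic shape of the problem is the same: score each order, and route only suspicious ones to human review.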
In fact, much of the existing deterrence to synthetic bioweapons is based on such good faith efforts — the researchers cite a long list of agreements among biologists, geneticists, and other people who hold the tools to create weapons, pledging that they won’t. If you’re looking to prevent an epidemic of our own making, relying on social norms seems weak. With data and privacy scandals coming out of Silicon Valley just about every day, one might begin to wonder whether “oh, those guys know what they’re doing” is a strong enough stance.
But the report cites decades of agreements and accords among researchers designed to prevent anyone who might want to synthesize a bioweapon from doing so. And, so far, it’s worked.
The problem, the report notes, comes from the recent and massive diversification of the genetics community. No longer are gene editing tools restricted to reputable scientists working in prestigious government or university labs; D.I.Y. CRISPR-Cas9 kits and other means of genetic manipulation are available to hobbyists, independent biotech companies, and citizen scientists. Proliferating the means of scientific research and experimentation could lead to amazing breakthroughs (In April, Gizmodo noted that a lab developed a CRISPR tool that could make for highly-specific cancer diagnoses) but it could also lead to catastrophic accidents or the deliberate weaponization of infectious agents by people who don’t adhere to long-standing (and perhaps outdated) codes of conduct.
As a result, the DoD calls for regulations and monitoring over synthetic biology research, whether it happens in a prestigious university or a hobbyist’s garage. But it specifically calls on such regulations to be penned in such a way that they “safeguard science.” Any new rules would need to allow scientists — and hobbyists — to conduct their research. If they need to get approval for the studies they want to conduct and genetic manipulations they want to attempt, then the bad actors can be weeded out. The key is to make sure any such regulations are motivated by safety, not the political leanings of whoever is in charge, now or in the future.
It’s not yet clear what effect this report will actually have. It’s not the kind of thing that will go to a vote, nor require that legislators hop to action. The report merely recommends certain strategies to address specific points that worry biologists and security experts. But at least it’s an indication that the DoD has begun to think about how to move science forward in a safe and responsible way, without hampering research in the process.
So, let’s just say it: we are not on track to meet the goals of the Paris Accord, the ambitious international agreement intended to limit global warming.
If we are to reach our goals — and perhaps to limit the seemingly inevitable devastation — we need to do something to reduce the greenhouse gases we’ve pumped (and are pumping) into the atmosphere.
For decades, carbon capture has seemed like a promising solution. Why not just take all the carbon dioxide that’s baking the planet and put it somewhere else? The short answer: the technology was way too expensive and energy-intensive to be practical at scale.
Now, though, that might no longer be the case. A new study published Thursday in the journal Joule found a way to suck carbon dioxide out of the atmosphere for the bargain price of $94 to $232 per ton. That’s a major improvement over the researchers’ previous estimate of $1,000 per ton.
While the technology still requires a great deal of energy (the researchers suggest using natural gas or electricity to satisfy it), it’s very feasible. All of the technology required to build the new carbon capture system already exists, according to MIT Technology Review.
Granted, it’s possible that, in practice, the new technology will be more expensive than those estimates predict, especially if it were to be implemented at any major scale. But that would still be way better than what experts had assumed it would cost to suck carbon dioxide out of the atmosphere and store it elsewhere. And anything that helps get greenhouse gases out of the way and helps mitigate climate change-related destruction is good news.
Currently, Carbon Engineering, the company behind the new research, plans to use its captured carbon to synthesize new carbon-neutral fuels. It’s already begun creating these carbon-neutral energy sources but, as MIT Technology Review reported, fossil fuels remain much cheaper. So if new fuels are going to actually become widespread, the government may need to provide some subsidies to drop the cost.
There are plenty of hurdles to overcome before we can see any benefit from this technology. The company will have to: prove there’s a market for the carbon-neutral synthetic fuel, ramp up operations for large-scale plants, and keep costs low enough to be a feasible solution for climate change. But if it all works out, it’s possible that we might be able to meet some of our goals for the future of the planet, after all.
Crossing into or out of the U.S. by land is already an Orwellian nightmare. Here’s some of the technology border agents already use:
Towers equipped with surveillance cameras
A radiation-monitoring system to identify any potentially radioactive cargo
Handheld metal detectors
RFID travel document scanners
X-ray-like imaging devices
Fiberoptic scopes to look into tight spaces
Radars to scan for distant movement
Now, there will soon be a new dystopian piece of technology to add to the list: advanced facial recognition.
That’s according to a new report by The Verge. Documents obtained through a Freedom of Information Act request reveal that, during a year-long test in 2016, researchers from Oak Ridge National Labs in Tennessee collected images and video of thousands of vehicles at two border crossings in Texas and Arizona. Because of the tests’ success, Customs and Border Protection will be rolling the technology out on a larger scale this August, with cool new tech to boot, The Verge reports.
In the 2016 tests, Oak Ridge researchers used multiple conventional DSLR cameras to capture different focal lengths (these photos, about 1,400 in total, were never used to identify individuals, The Verge reports; in a statement to Futurism, Customs and Border Protection stated that the images were deleted after they were analyzed).
Dubbed the “Vehicle Face System,” the project tests brand-new camera technology that can capture multiple depths of field at once — just like Lytro and other light-field cameras. That’s necessary since windshields and other hindrances make getting a reliable scan of a driver or passenger very difficult with a single depth of field.
By comparing these scans with pictures already on file (passport photos, visa applications, etc.), the Vehicle Face System could, theoretically, detect anomalies or identify persons of interest who may require further scrutiny.
The test is part of a larger initiative (and legal requirement) from Customs and Border Protection called the Biometric Exit program. Its aim: to verify visa holders’ identities as they make their way out of the country. While biometric scanning technology is far easier to implement at airports (there, travelers are stationary and not in vehicles), Customs and Border Protection is working to find new ways to apply it to land crossings as well, effectively scanning every face that enters and leaves the United States.
Advanced facial recognition tech at U.S. borders could be seen as a way to keep our borders safe. U.S. border patrol agents are increasingly short-staffed, and they’re only human (especially when asked to work 16-hour days). Facial recognition systems, then, could help make up for what they lack.
But here’s the thing: Facial recognition technology is still in its infancy, and it’s not only possible that false positives will set off the alarms and incriminate the innocent, it’s likely. And as far as detecting people who require further scrutiny, that’s not what most of the cases were in the test run, as The Verge points out — documents reveal that those who were scanned were simply leaving work, or picking up their kids at school.
Plus, border agents do not have to ask for any of travelers’ consent to use the system, and don’t have to have any evidence that someone poses a threat (in a statement, Customs and Border Protection told Futurism: “In general, CBP has the authority to capture scene images from all vehicles, which includes an image of the driver and any other vehicle occupants in the field of view.”).
Facial recognition technology is still in its early stages, and similar systems have shown highly imperfect results — a face-scanning system used by Welsh police produced false positives for 92 percent of its matches. Other systems have been swayed by simple identifiers like skin color, opening the door to racial discrimination that’s already a very real problem at U.S. borders. And border control agents already don’t have the best track record; a civil liberties group at Georgetown Law estimated that one in 25 travelers is flagged by the Department of Homeland Security to face further scrutiny at a growing number of airport terminals.
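A quick back-of-the-envelope calculation shows why false positives tend to dominate in this setting. The numbers below are assumptions for illustration only (none of them come from CBP or the report): when nearly everyone scanned is innocent, even a fairly accurate system flags far more innocent travelers than genuine persons of interest.

```python
# Base-rate arithmetic with assumed figures: a system that is right the
# vast majority of the time still buries its few real hits in false alarms
# when the population it scans is overwhelmingly innocent.

travelers_per_day = 100_000      # assumed crossings at one busy port
persons_of_interest = 10         # assumed genuine matches in that crowd
true_positive_rate = 0.90        # system catches 90% of real matches
false_positive_rate = 0.01      # 1% of innocent travelers wrongly flagged

true_hits = persons_of_interest * true_positive_rate
false_alarms = (travelers_per_day - persons_of_interest) * false_positive_rate

print(round(true_hits))     # prints 9: genuine matches caught
print(round(false_alarms))  # prints 1000: innocent travelers flagged
```

Under these (hypothetical) assumptions, more than a hundred innocent people get flagged for every real match — which is why human oversight of each alert matters so much.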
The year-long pilot program for Vehicle Face System will begin in August as part of a new test at the Anzalduas border in Texas.
No facial recognition system is perfect, especially not an experimental one. Hopefully, human oversight will limit the effect of those inevitable errors.
In the near future, there will be a global network of brain-computer interfaces that allow for a total distribution of information in addition to the pervasive surveillance of our every thought. This is the premise of a (fictional) film called The Moment that’s set to debut at the Sheffield International Documentary Festival in June. And just as the film explores a future where mind-reading technology dictates our society, the film’s music and pacing will be dictated in real time by reading an audience member’s brainwaves.
Richard Ramchurn, the film’s director, recently started screening the film for small groups. In each group, one person wore a headset that allegedly records their brainwaves. While the science surrounding the meanings of different brainwaves is somewhat dubious, different people created different versions of the film when it was tuned to their brain. When the readings indicated that the person was losing interest, the film would change its music, jump to a new scene centered on a different character, or make other real-time edits based on brain activity, as reported by MIT Technology Review.
Depending on whose brain is guiding the film, the audience may see different scenes and transition among them at different points in time. The central story is the same, but Ramchurn needed to film far more material than would be needed for a typical half-hour film in order to build up enough potential varieties.
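As a rough illustration of the mechanism — this is a toy sketch with an invented threshold and scene names, not Ramchurn’s actual software — the editing logic amounts to: hold the current scene while the headset’s attention reading stays high, and cut to an alternate scene when it dips.

```python
# Toy sketch of threshold-driven editing: when a (simulated) attention
# reading falls below a cutoff, the "film" cuts to the next alternate
# scene to try to win the viewer back. Threshold and scenes are made up.

ATTENTION_CUTOFF = 0.4  # assumed cutoff on a 0.0-1.0 attention signal

def choose_scene(current_scene, alternates, attention):
    """Stay on the current scene while attention holds; otherwise cut
    to the next alternate scene, cycling through the list."""
    if attention >= ATTENTION_CUTOFF:
        return current_scene
    idx = (alternates.index(current_scene) + 1) % len(alternates)
    return alternates[idx]

scenes = ["character_a", "character_b", "character_c"]
scene = "character_a"
for reading in [0.8, 0.6, 0.3, 0.7, 0.2]:  # simulated headset readings
    scene = choose_scene(scene, scenes, reading)
print(scene)  # the two attention dips each triggered a cut
```

Because each viewer’s readings dip at different moments, each produces a different sequence of cuts from the same pool of footage — which is exactly why Ramchurn had to shoot so much extra material.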
It’s a pretty rad gimmick, but not one that’s easily shared. It doesn’t make sense to watch The Moment in a crowded theatre, Ramchurn told MIT Technology Review. People focus on different things at different times, and a movie specially catered to one person’s neural activity would likely be jarring or boring for everyone else.
For now this type of film shares a similar problem with virtual reality films — you need to watch it on your own then chat with friends about it later on. And once the gimmick wears off, people might remember why the film industry spends a great deal of money on proper mixing and editing.
Outside of films, though, this mind-reading technology has been used in some games, and it can also help make neuroscience research more portable and applicable to the real world. For instance, the same company that sells the headset also sells apps that neuroscientists can use to collect and analyze the brain signals they record. Again, some of the science behind brainwaves isn’t fully established (one part of the app boils a brain’s readings down to levels of “attention” and “meditation”). But new, diverse applications for neuroscience-based technology could help us enhance our understanding of the brain — not just to create personalized film experiences, but to give us more ways of reading, analyzing, and immediately acting on our neural activity.