Month: June 2018

30 Jun 2018
Manahel Thabet

How Scalable Blockchain Technology Can Bring Us Closer to Earth 2.0

There’s more to blockchain than Bitcoin. The cryptocurrency may be the most famous name associated with the technology, but the possibilities for blockchain extend into industries far beyond finance. Just ask RChain, a third-generation blockchain.

Some of the most imaginative uses for blockchain technology sound like the gadgets and networks that exist only in the world of sci-fi. But the speed and scalability made possible by new blockchain providers are making a futuristic universe more than just a fantasy. The technology has the potential to drastically change the way we mobilize, design smarter communities, and help crack the code on creating social impact.

One of the industries that stands to gain the most from blockchain innovation is transportation. We’ve come a long way from the horse and buggy, and autonomous cars powered by a speedy and secure blockchain network could be part of a future where fewer people waste time in traffic and more people have access to a safe ride. These cars could improve safety by eliminating the human error that leads to road accidents, and autonomous car services could provide new ways for an elderly non-driver to get to a doctor’s appointment, for instance, or turn a tiring commute into extra nap time. An autonomous car owner could also rent the car out via a secure blockchain network, turning the vehicle into another revenue stream.

And the technology doesn’t have to be limited to cars. Amazon has dreamt of a future where autonomous drones deliver its packages, but a blockchain system powerful enough for such a feat could do more than enable same-day delivery. An NGO, for example, could use autonomous drones to send supplies from Australia to tsunami victims in Indonesia. As soon as word spread about the disaster, people from around the world could log into a blockchain-run app and donate instantly. More importantly, because of the speed and security possible on a peer-to-peer blockchain network, the organization could access those funds immediately and start its relief efforts, minimizing the transactional costs and red tape most organizations face in such moments.

Imagining and building the technology needed to run these kinds of civic applications is critical in today’s world, said Greg Meredith, president and co-founder of RChain, a company creating a blockchain platform that allows its developers to dream big.

“We have to coordinate at a level we have never coordinated at as a species,” he told Futurism. “We have to be able to form groups that are as agile as the best dancers, as clear-minded as the best poets and mathematicians and software engineers. The good news is there are all these coordination technologies that we’ve barely scratched the surface of. It’s a place where tech in the sense of bits and bytes and code meets tech in the sense of how we govern ourselves.”


The spirit and ability necessary to govern ourselves have gotten lost in industries plagued by bureaucratic slog, inefficiency, and paperwork. But blockchain technology seamlessly woven into our stodgy systems could make expensive, time-consuming, or confusing tasks suddenly easy and affordable.

“There are so many applications of blockchain technology, it’s hard to know where to start,” Dr. Mike Stay, the CTO of Pyrofex, the partnering company building out the RChain blockchain, told Futurism. “If you’re buying a house, you’ll be able to check a title claim on the blockchain instead of purchasing title insurance for thousands of dollars. You’ll be able to see the history of repairs on your car instead of paying Carfax or AutoCheck. If a company does all its transactions on blockchain and the auditors run some of the nodes, then the audit process is orders of magnitude cheaper.”

In addition to saving time and money thanks to these applications, citizens could also be rewarded via the blockchain for their efforts towards creating a smart community. For instance, a city looking to control water consumption to avoid drought could incentivize homeowners by offering them cryptocurrency via the blockchain that they could then use at local stores or restaurants. Similar rewards exist today, but companies or governments waste valuable time and money on the labor and transactional costs it takes to distribute those rewards. With the scale of a blockchain like the one RChain envisions, those incentives could be distributed immediately at a fraction of the cost.
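The water-conservation incentive described above can be sketched in miniature. This is purely a hypothetical illustration, assuming an invented token rate and invented account names; a real deployment would live in a smart contract on a blockchain rather than an in-memory ledger.

```python
from collections import defaultdict

# Invented rate for illustration only: tokens credited per gallon saved.
REWARD_PER_GALLON_SAVED = 0.1

class IncentiveLedger:
    """A toy stand-in for an on-chain token ledger."""

    def __init__(self):
        self.balances = defaultdict(float)

    def record_savings(self, household: str, gallons_saved: float):
        # Credit tokens immediately, with no claims process or mailed rebate.
        self.balances[household] += gallons_saved * REWARD_PER_GALLON_SAVED

    def spend(self, household: str, merchant: str, amount: float):
        # Tokens are redeemable at participating local businesses.
        if self.balances[household] < amount:
            raise ValueError("insufficient balance")
        self.balances[household] -= amount
        self.balances[merchant] += amount

ledger = IncentiveLedger()
ledger.record_savings("household-42", 500)      # 500 gallons saved
ledger.spend("household-42", "local-cafe", 20)  # redeemed at a local shop
```

The point of the sketch is the absence of intermediaries: crediting and spending are single ledger operations, which is where the labor and transaction savings would come from.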

Developing and maintaining these smarter communities could reinvigorate geographic areas that have been restructured as a result of industries dying or relocating, said Nash Foster, the CEO of Pyrofex.

“I became very interested in Greg’s [Meredith’s] ideas about decentralization of the global compute infrastructure,” Foster told Futurism. “I realized that Greg’s vision dovetailed with my own desire to re-energize American communities that had been left behind over the last few decades. I’ve seen many of the communities I love lose access to the productive economy as manufacturing and technology jobs were moved overseas. I have long been interested in finding a way to use the Internet to create more economic opportunities in such areas.”

Pyrofex, helmed by Foster and Stay, is one of the many companies investing in a future with RChain. Stay is a longtime colleague and collaborator of Meredith’s, and their research efforts formed the mathematical foundations of RChain’s technology. Like many of the collaborators behind RChain, they understood the possibilities of the blockchain platform but felt stifled by the limitations of its architecture. They knew that the massive centralized data centers that brought us search engines and social media lack the speed and efficiency to power autonomous vehicles and smarter communities. But they also knew that many current blockchain technologies were never designed with scale in mind. The original blockchain architecture relies on a single chain, which works for small endeavors but isn’t equipped to handle massive growth without consuming the energy supply of entire countries.

RChain approached blockchain with that massive growth in mind, setting out to create a network that would have the scale of Facebook and the speed of Visa credit card transactions. Their decentralized system is powered by what they call the Rho Virtual Machine, or RhoVM.

Rather than having to build up a single chain, bit by bit, the RhoVM allows for a network of parallel blockchains. That means that as the platform grows, it can manage the load by building out linearly on top of a solid and secure foundation. Any kid who has played with building blocks can understand the technique: a tower made of a single stack of blocks will crumble in under a minute, but a fortress built short and wide on a strong base can support several more hours of block building.
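The building-block analogy can be made concrete with a toy sketch. To be clear, this is not RChain’s RhoVM; it only illustrates, under simplified assumptions, how independent transactions split across parallel “shards” can be validated concurrently instead of queuing behind a single chain.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def validate(tx: str) -> str:
    """Stand-in for transaction validation: hash the transaction."""
    return hashlib.sha256(tx.encode()).hexdigest()

def single_chain(txs):
    # One chain: every transaction is processed in strict sequence.
    return [validate(tx) for tx in txs]

def parallel_shards(txs, shards=4):
    # Many chains: transactions that share no state are split across
    # shards and validated concurrently.
    buckets = [txs[i::shards] for i in range(shards)]
    with ThreadPoolExecutor(max_workers=shards) as pool:
        results = pool.map(single_chain, buckets)
    return [h for bucket in results for h in bucket]

txs = [f"tx-{i}" for i in range(100)]
# Both approaches validate the same set of transactions; the parallel
# version spreads the work wider rather than stacking it higher.
assert sorted(single_chain(txs)) == sorted(parallel_shards(txs))
```

In practice the hard part, which RChain attacks with its concurrency research, is deciding which transactions are truly independent; the sketch simply assumes they all are.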

It’s a foundation like RChain’s that could support the growth necessary for futuristic blockchain applications like autonomous medicine-delivering drones or smarter communities free of bureaucratic inefficiency. RChain pursues scaling and safety through fundamental mathematics and computer science, whereas other blockchain infrastructures attempt to engineer around the problem. That approach may manage such a feat in a lab, but to operate on a global scale, these burgeoning technologies need the speed, efficiency, and scale of RChain’s RhoVM. With the help of a growing network of partners, developers, and bright minds that won’t settle for the past, they’re welcoming a future full of sci-fi dreams made into reality.

Source: Futurism

28 Jun 2018

One of Saturn’s Moons Has Everything Needed to Host Life

THE EXCEPTIONAL ENCELADUS. On Wednesday, scientists from the Southwest Research Institute (SwRI) published a paper in Nature outlining their discovery of complex organic molecules on Enceladus, one of Saturn’s 53 moons.

These large, carbon-rich molecules emanate from the ocean beneath the moon’s icy surface, escaping as plumes through warm cracks. This emergence of complex organic molecules from a liquid ocean makes Enceladus the only body besides Earth to boast all the basic requirements for life as we know it, said co-author Christopher Glein in a news release.

HELP FROM THE DEPARTED. For their paper, the scientists relied on data from NASA’s Cassini spacecraft, which plunged into Saturn’s atmosphere in September 2017. During a flyby in 2015, the craft detected hydrogen within the materials emanating from the cracks in Enceladus’s surface. Hydrogen sometimes serves as an energy source for microbes living near hydrothermal vents in Earth’s oceans, so the researchers suspect that Enceladus’s hydrogen formed due to the moon’s own hydrothermal activity.

ANOTHER STEP FORWARD. This isn’t the first discovery of organic molecules on Enceladus. However, previous discoveries were of simple molecules with masses below 50 atomic mass units; these newly discovered molecules have masses greater than 200 atomic mass units. Still, a single atom of carbon-12 is 12 atomic mass units, so even these “complex” molecules are quite small by biological standards.
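A quick back-of-the-envelope check puts those numbers in perspective, using only the figures above (carbon-12 is defined as exactly 12 atomic mass units):

```python
# How many carbon atoms' worth of mass is a 200-amu molecule?
carbon_12_amu = 12
molecule_amu = 200

ratio = molecule_amu / carbon_12_amu
print(round(ratio, 1))  # 16.7
```

So even the heaviest molecules detected carry roughly the mass of only 17 carbon atoms: large by Enceladus standards, modest by the standards of biology.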

While this might not be the discovery of extraterrestrial life many are waiting for, these molecules do bring us one step closer to finding it. As Glein noted in the news release, future space missions could provide more in-depth analysis of Enceladus’s plumes, perhaps helping scientists figure out exactly how the moon’s complex molecules came to be and what sort of biological processes are happening beneath its icy surface.

Source: Futurism

27 Jun 2018

The Digest: This Floating Robot DRAGON Can Change Shape Mid-Flight

ONE CLUNKY ACRONYM. Roboticists from the University of Tokyo’s JSK Lab have created a flying robot they call DRAGON, an acronym for Dual-rotor embedded multilink Robot with the Ability of multi-deGree-of-freedom aerial transformatiON. A recent report by IEEE Spectrum includes a video highlighting the bot’s ability to change its shape mid-flight in order to navigate through tight spaces.

DRAGON comprises four modules, each boasting a pair of maneuverable fan thrusters. Battery-powered hinged joints link these modules. An Intel Euclid serves as both the eyes and the brain of DRAGON, letting the flying robot “see” the world around it and autonomously decide what shape it needs to assume to fit through a given area.

ONE IMPRESSIVE ROBOT. Indoor drone navigation comes with a variety of unique challenges, not least of which is the issue of having to fit through tight spaces. As noted in the IEEE Spectrum report, this has left developers with two options: make their drones smaller (in which case, they aren’t powerful enough to really do much of anything) or put them in protective cages (which also limits their abilities).

While DRAGON can only remain airborne for about three minutes at present, it’s both agile and fairly powerful. The JSK developers have big plans for its next stage of development, too. They want to increase the number of modules to 12 and add grippers on each end of the system, giving it the ability to pick up and move objects.

MAN’S NEW BEST FRIEND. It’s not hard to imagine using an advanced version of DRAGON to navigate dangerous indoor environments during rescue missions. It could search for survivors in collapsed buildings, removing rubble if necessary to free them. Ultimately, unlike its fictional counterparts, this DRAGON could save human lives.

Source: Futurism

26 Jun 2018
Manahel Thabet

Artificial Consciousness: How To Give A Robot A Soul

The Terminator was written to frighten us; WALL-E was written to make us cry. Robots can’t do the terrifying or heartbreaking things we see in movies, but still the question lingers: What if they could?

Granted, the technology we have today isn’t anywhere near sophisticated enough to do any of that. But people keep asking. At the heart of those discussions lies the question: can machines become conscious? Could they even develop — or be programmed to contain — a soul? At the very least, could an algorithm contain something resembling a soul?

The answers to these questions depend entirely on how you define these things. So far, we haven’t found satisfactory definitions in the more than 60 years since artificial intelligence first emerged as an academic pursuit.

Take, for example, an article recently published by the BBC, which tried to grapple with the idea of artificial intelligence with a soul. The authors defined what it means to have an immortal soul in a way that steered the conversation almost immediately away from the realm of theology. That is, of course, just fine, since it seems unlikely that an old robed man in the sky reached down to breathe life into Cortana. But it doesn’t answer the central question — could artificial intelligence ever be more than a mindless tool?


That BBC article set out the terms — whether an AI system acts as though it has a soul will be determined by the beholder. For the religious and spiritual among us, a sufficiently advanced algorithm may seem to present a soul. Those people may treat it as such, since they will view the AI system’s intelligence, emotional expression, behavior, and perhaps even a belief in a god as signs of an internal something that could be defined as a soul.

As a result, machines containing some sort of artificial intelligence could be seen either as entities in their own right or as research tools, depending on whom you ask. As with so many things, much of the debate over what would make a machine conscious comes down to what of ourselves we project onto the algorithms.

“I’m less interested in programming computers than in nurturing little proto-entities,” Nancy Fulda, a computer scientist at Brigham Young University, told Futurism. “It’s the discovery of patterns, the emergence of unique behaviors, that first drew me to computer science. And it’s the reason I’m still here.”

Fulda has trained AI algorithms to understand contextual language and is working to build a robotic theory of mind, a version of the principle in human (and some animal) psychology that lets us recognize others as beings with their own thoughts and intentions. But, you know, for robots.

“As to whether a computer could ever harbor a divinely created soul: I wouldn’t dare to speculate,” added Fulda.

There are two main problems that need resolving. The first is one of semantics: it is very hard to define what it truly means to be conscious or sentient, or what it might mean to have a soul or soul-function, as that BBC article describes it.

The second problem is one of technological advancement. Compared to the technology that would be required to create artificial sentience — whatever it may look like or however we may choose to define it — even our most advanced engineers are still huddled in caves, rubbing sticks together to make a fire and cook some woolly mammoth steaks.

At a panel last year, neuroscientist Christof Koch squared off with David Chalmers, a cognitive scientist, over what it means to be conscious. The conversation bounced between speculative thought experiments regarding machines and zombies (defined as beings who act indistinguishably from people but lack an internal mind). It frequently veered away from things that can be conclusively proven with scientific evidence. Chalmers argued that a machine more advanced than any we have today could become conscious, but Koch disagreed, based on the current state of neuroscience and artificial intelligence technology.

Neuroscience literature considers consciousness a narrative constructed by our brains that incorporates our senses, how we perceive the world, and our actions. But even within that definition, neuroscientists struggle to define why we are conscious and how best to define it in terms of neural activity. And for the religious, is this consciousness the same as that which would be granted by having a soul? And this doesn’t even approach the subject of technology.

“AI people are routinely confusing soul with mind or, more specifically, with the capacity to produce complicated patterns of behavior,” Ondřej Beran, a philosopher and ethicist at University of Pardubice, told Futurism.

“AI people are routinely confusing soul with mind”

“The role that the concept of soul plays in our culture is intertwined with contexts in which we say that someone’s soul is noble or depraved,” Beran added — that is, it comes with a value judgment. “[In] my opinion what is needed is not a breakthrough in AI science or engineering, but rather a general conceptual shift. A shift in the sensitivities and the imagination with which people use their language in relating to each other.”

Beran gave the example of works of art generated by artificial intelligence. Often, these works are presented for fun. But when we call something that an algorithm creates “art,” we often fail to consider whether the algorithm has merely generated some sort of image or melody or actually created something that is meaningful — not just to an audience, but to itself. Of course, human-created art often fails to clear that second bar as well. “It is very unclear what it would mean at all that something has significance for an artificial intelligence,” Beran added.

So would a machine achieve sentience when it is able to internally ponder rather than mindlessly churn inputs and outputs? Or would it truly need that internal something before we as a society consider machines to be conscious? Again, the answer is muddled by the way we choose to approach the question and the specific definitions at which we arrive.

“I believe that a soul is not something like a substance,” Vladimir Havlík, a philosopher at the Czech Academy of Sciences who has sought to define AI from an evolutionary perspective, told Futurism. “We can say that it is something like a coherent identity, which is constituted permanently during the flow of time and what represents a man,” he added.

Havlík suggested that rather than worrying about the theological aspect of a soul, we could define a soul as a sort of internal character that stands the test of time. And in that sense, he sees no reason why a machine or artificial intelligence system couldn’t develop a character — it just depends on the algorithm itself. In Havlík’s view, character emerges from consciousness, so the AI systems that develop such a character would need to be based on sufficiently advanced technology that they can make and reflect on decisions in a way that compares past outcomes with future expectations, much like how humans learn about the world.

But the question of whether we can build a souled or conscious machine only matters to those who consider such distinctions important. At its core, artificial intelligence is a tool. Even more sophisticated algorithms that may skirt the line and present as conscious entities are recreations of conscious beings, not a new species of thinking, self-aware creatures.

“My approach to AI is essentially pragmatic,” Peter Vamplew, an engineer at Federation University, told Futurism. “To me it doesn’t matter whether an AI system has real intelligence, or real emotions and empathy. All that matters is that it behaves in a manner that makes it beneficial to human society.”

“To me it doesn’t matter whether an AI system has real intelligence… All that matters is that it behaves in a manner that makes it beneficial to human society.”

To Vamplew, the question of whether a machine can have a soul or not is only meaningful when you believe in souls as a concept. He does not, so it is not. He feels that machines may someday be able to recreate convincing emotional responses and act as though they are human but sees no reason to introduce theology into the mix.

And he’s not the only one who feels true consciousness is impossible in machines. “I am very critical of the idea of artificial consciousness,” Bernardo Kastrup, a philosopher and AI researcher, told Futurism. “I think it’s nonsense. Artificial intelligence, on the other hand, is the future.”

Kastrup recently wrote an article for Scientific American in which he lays out his argument that consciousness is a fundamental aspect of the natural universe, and that people tap into dissociated fragments of consciousness to become distinct individuals. He clarified that he believes that even a general AI — the name given to the sort of all-encompassing AI that we see in science fiction — may someday come to be, but that even such an AI system could never have private, conscious inner thoughts as humans do.

“Siri, unfortunately, is ridiculous at best. And, what’s more important, we still relate to her as such,” said Beran.


Even more unfortunate, there’s a growing suspicion that our approach to developing advanced artificial intelligence could soon hit a wall. An article published last week in The New York Times cited multiple engineers who are growing increasingly skeptical that our machine learning technologies, even deep learning, will continue to advance as they have in recent years.

I hate to be a stick in the mud. I truly do. But even if we solve the semantic debate over what it means to be conscious, to be sentient, to have a soul, we may forever lack the technology that would bring an algorithm to that point.

But when artificial intelligence first started, no one could have predicted the things it can do today. Sure, people imagined robot helpers à la the Jetsons or advanced transportation à la Epcot, but they didn’t know the tangible steps that would get us there. And today, we don’t know the tangible steps that will get us to machines that are emotionally intelligent, sensitive, thoughtful, and genuinely introspective.

By no means does that render the task impossible — we just don’t know how to get there yet. And the fact that we haven’t settled the debate over where to actually place the finish line makes it all the more difficult.

“We still have a long way to go,” says Fulda. She suggests that the answer won’t be piecing together algorithms, as we often do to solve complex problems with artificial intelligence.

“You can’t solve one piece of humanity at a time,” Fulda says. “It’s a gestalt experience.” For example, she argues that we can’t understand cognition without understanding perception and locomotion. We can’t accurately model speech without knowing how to model empathy and social awareness. Trying to put these pieces together in a machine one at a time, Fulda says, is like recreating the Mona Lisa “by dumping the right amounts of paint into a can.”

Whether or not the masterpiece is out there, waiting to be painted, remains to be determined. But if it is, researchers like Fulda are vying to be the one to brush the strokes. Technology will march onward, so long as we continue to seek answers to questions like these. But as we compose new code that will make machines do things tomorrow that we couldn’t imagine yesterday, we still need to sort out where we want it all to lead.

Will we be da Vinci, painting a self-amused woman who will be admired for centuries, or will we be Uranus, creating gods who will overthrow us? Right now, AI will do exactly what we tell AI to do, for better or worse. But if we move towards algorithms that begin to, at the very least, present as sentient, we must figure out what that means.

Source: Futurism

25 Jun 2018
Manahel Thabet

Synthetic Biological Weapons May Be Coming. Here’s How To Fight Them.

Just about anyone can be a biohacker now. Gene editing technology is cheaper and simpler than ever, and the Department of Defense (DoD) is paying close attention. After all, scientists have already used the tech to build, from scratch, a strain of horsepox, a virus not too genetically distant from smallpox, just for the fun of it. It may only be a matter of time before the products of synthetic biology, such as weaponized pathogens, find their way into a military’s or terrorist’s arsenal.

In order to stay prepared, the DoD commissioned the National Academy of Sciences to release a comprehensive report about the state of American biodefense. As the report suggests, it’s not great.

In the report, a team of scientists ranked the potential threats from gene editing and other bioengineering techniques, and examined what could be done to protect against them, taking into account how advances in bioengineering would make it more difficult to combat these threats. As MIT Technology Review reported, the DoD is most concerned that people might recreate known infectious viruses or enhance bacteria to become even more dangerous.

Many of the DoD’s official recommendations, such as investing in public health and advanced vaccines, run directly contrary to the priorities set by those currently running the federal government. Right now, the government pours money into defense while slashing the public health infrastructure. Investing in that infrastructure would do a whole lot more to improve national security.

The portion of the report dedicated to “mitigating concerns” that synthetic bioweapons may be used against people in or out of the United States focuses on three key areas: deterring attacks, recognizing them, and minimizing the damage done after the fact.

The solution that stood out the most is also perhaps the least concrete: invest in and develop a robust public health system that could identify, prevent or counter, and respond to an epidemic or the use of a bioweapon. Because, as The Atlantic demonstrated at length, America is absolutely not prepared for a major outbreak or epidemic of any sort — the country’s healthcare system can barely handle a rough flu season. Preparing treatments for a major viral or bacterial outbreak like the flu or Ebola relies on a large and fragile international supply chain, and hospitals are mostly independent entities that may balk at the cost of preparing for such an epidemic. In short, not the best tactic.

With a solid, well-funded public health system in place, the DoD and National Academy of Sciences researchers argue, the country will be more resilient to an attack, even if there’s no immediate cure for whatever bioweapon its citizens may be struck with. Even though the U.S. leads the world in public health via funding to the Centers for Disease Control and Prevention, the Atlantic article argues, funding and readiness initiatives still mostly react to health problems that come up instead of preparing for them before they do. It’s hard to keep politicians interested in staying prepared when there’s no immediate threat.

Even so, a better public health system could help catch such an attack before it spreads out of control — if a doctor notes a bizarre case or symptom, then a strong national network would be able to flag that case and perhaps even identify a patient zero, creating treatments more quickly and quarantining that person to prevent others from getting sick.

This type of investment would improve many parts of American life, especially for the most vulnerable and underserved among us. If “national security” is the angle that makes it happen then, well, fine.

In the report, the DoD also called for increased research into vaccines, perhaps some that can prevent broad swaths of infections, and improved programs to help get them to people. No one, the new report argues, would attack a population that’s already impervious to the weapon. Once more, what should perhaps be an obvious solution seems groundbreaking in a country in which the dangerous, repeatedly-debunked rhetoric of the anti-vaxxer movement is going strong and supported by the president.

The report also listed some areas of concern made more pressing due to cheap, democratized gene editing and biohacking tools. For one, small DNA sequences under 200 base pairs long aren’t screened by DNA synthesis companies for potential misuse. Normally, these companies would check genetic sequences against those known to be derived from pathogens and those on the Federal Select Agent Program Select Agents and Toxins List maintained by the CDC. These short sequences can be purchased by researchers or hobbyist gene editors. If they were so inclined, they could build a pathogen from scratch or make one that already exists more dangerous.

In the past, these short strands haven’t been screened because there was no need. Anyone trying to create a dangerous virus or bacteria would need a huge number of these strands, which would, presumably, tip someone off. But still, they could feasibly gather them without raising any alarm, as long as they don’t purchase anything longer than 200 base pairs at a time and no one looks too closely.

The report suggests creating a separate screening process for short strands of DNA — by analyzing the likelihood that a particular genetic sequence might be used in a weapon and going from there, rather than catching well-meaning researchers in the crossfire. While no such algorithm exists, the authors suggest that a well-trained machine learning system would be well-suited to the task.
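As a sketch of what such screening might look like, here is a hypothetical k-mer matching approach, assuming an invented reference list of pathogen-derived sequences. This is not the algorithm the report proposes (none exists yet, as the authors note); real screening, machine-learning-based or otherwise, would be far more sophisticated.

```python
def kmers(seq: str, k: int = 20):
    """All overlapping subsequences of length k."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def build_index(pathogen_seqs, k: int = 20):
    """Index every k-mer appearing in any known pathogen sequence."""
    index = set()
    for seq in pathogen_seqs:
        index |= kmers(seq, k)
    return index

def flag_order(order_seq: str, index, k: int = 20, threshold: float = 0.5):
    """Flag a DNA order if most of its k-mers match known pathogen DNA."""
    order_kmers = kmers(order_seq, k)
    if not order_kmers:
        return False
    return len(order_kmers & index) / len(order_kmers) >= threshold

# Invented toy sequences for illustration; real reference data would come
# from curated sources like the CDC's select agents list.
pathogen_db = ["ATGGCGTACGTTAGCCGTATCGGATCCAGTCTAGGCATCG"]
index = build_index(pathogen_db)

print(flag_order("ATGGCGTACGTTAGCCGTATCGGATCC", index))  # True: overlaps pathogen DNA
print(flag_order("TTTTTTTTTTTTTTTTTTTTTTTTTTT", index))  # False: no overlap
```

A fixed-threshold rule like this is exactly the kind of scheme an attacker could probe, which is why the report leans toward a learned model that estimates how likely a sequence is to contribute to a weapon.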

In fact, much of the existing deterrence to synthetic bioweapons is based on such good faith efforts — the researchers cite a long list of agreements among biologists, geneticists, and other people who hold the tools to create such weapons, pledging that they won’t. If you’re looking to prevent an epidemic of our own making, relying on social norms seems weak. With data and privacy scandals coming out of Silicon Valley just about every day, one might begin to wonder whether “oh, those guys know what they’re doing” is a strong enough stance.

But the report cites decades of agreements and accords among researchers designed to prevent anyone who might want to synthesize a bioweapon from doing so. And, so far, it’s worked.

The problem, the report notes, comes from the recent and massive diversification of the genetics community. No longer are gene editing tools restricted to reputable scientists working in prestigious government or university labs; D.I.Y. CRISPR-Cas9 kits and other means of genetic manipulation are available to hobbyists, independent biotech companies, and citizen scientists. Proliferating the means of scientific research and experimentation could lead to amazing breakthroughs (In April, Gizmodo noted that a lab developed a CRISPR tool that could make for highly-specific cancer diagnoses) but it could also lead to catastrophic accidents or the deliberate weaponization of infectious agents by people who don’t adhere to long-standing (and perhaps outdated) codes of conduct.

As a result, the DoD calls for regulations and monitoring over synthetic biology research, whether it happens in a prestigious university or a hobbyist’s garage. But it specifically calls on such regulations to be penned in such a way that they “safeguard science.” Any new rules would need to allow scientists — and hobbyists — to conduct their research. If they need to get approval for the studies they want to conduct and genetic manipulations they want to attempt, then the bad actors can be weeded out. The key is to make sure any such regulations are motivated by safety, not the political leanings of whoever is in charge, now or in the future.

It’s not yet clear what effect this report will actually have. It’s not the kind of thing that will go to a vote, nor require that legislators hop to action. The report merely recommends certain strategies to address specific points that worry biologists and security experts. But at least it’s an indication that the DoD has begun to think about how to move science forward in a safe and responsible way, without hampering research in the process.


Source: Futurism

24 Jun 2018
Manahel Thabet

3 Reasons Why Bitcoin Market Remains Optimistic Despite $6,000

On June 23, CNBC's Fast Money hosted a segment dedicated to presenting the "Bitcoin Funeral," a witty introduction to BKCM founder and cryptocurrency investor Brian Kelly's discussion of the recent bitcoin correction and its future price trend.

Three Reasons Why Bitcoin Isn't Dead

On the show, Kelly outlined three major reasons bitcoin will recover in the mid-term back to its previous support levels at over $10,000:

  • Negative sentiment from investors signalling the imminence of a bottom
  • Positive developments within the Japanese cryptocurrency exchange market
  • Mt. Gox liquidation of bitcoin postponed to 2019

Referring back to a basic rule of investing, Kelly noted that when the market is extremely bullish and optimistic, it is better to sell and wait for a timely opportunity to re-enter; when the market is overly pessimistic, it is wise to look for a position to enter.

Given investors' negative sentiment toward the cryptocurrency market, Kelly explained that the major correction will likely bottom out within the next two to three months, setting the stage for a mid-term rally in the fourth quarter of this year.

More importantly, Kelly stated that the Japanese government’s move to tighten regulations, clean up the cryptocurrency market, and legitimize the Japanese cryptocurrency sector is a positive development for the long-term, as it would prevent major hacking attacks like the Coincheck security breach and allow investors in the public market to gain trust in local exchanges.

Kelly said:

“Japanese exchanges were ordered to improve business conditions [by the government]. It’s actually a good thing. Short run it’s going to be a little tough because they’re stopping new accounts from coming in but actually they’re cleaning up the system. They’re making sure it’s more robust. Making sure it’s better for people.”

South Korea, the third largest cryptocurrency exchange market behind the US and Japan, has also started to prepare a regulatory framework that would regulate cryptocurrency exchanges like banks, preventing hacking attacks and money laundering while legitimizing the sector, protecting investors, and setting an industry-wide standard for businesses.

No More Mt. Gox Liquidation

Throughout 2018, several Mt. Gox sell-offs of tens of thousands of bitcoin led the market to crash, preventing BTC from gaining momentum at certain periods.

Kelly emphasized that the delay of the Mt. Gox bitcoin sell-off until early 2019 is an optimistic sign, as a major source of potential sell-side pressure has been eliminated, at least in the mid-term.

“Mt. Gox is going into rehabilitation and they’re going to distribute the rest of the $1 billion worth of bitcoin. But here’s what is great about that, they’re not going to distribute it until quarter 1 of 2019. All of the sudden everyone thinks there is going to be a wave of selling. Not happening now,” explained Kelly.

The three factors outlined by Kelly could fuel the next mid-term rally of BTC, and the positive developments in the Japanese and South Korean cryptocurrency markets will enable the market to grow with stability and investor trust, which will be largely beneficial in the long run.

Source: CCN

23 Jun 2018
Manahel Thabet

Lab-Grown Neanderthal Minibrains Reveal How They’re Different From Humans’

So, you might have heard: Scientists figured out how to grow miniature brains out of stem cells. Cool, right? Well, now they managed to grow Neanderthal brains, too. As a result, we have more of an idea of why our populations flourished, helping us become the dominant species on Earth, while theirs faltered.

The short version: it comes down to the way the brain structures itself as it develops. Though the research has not yet found its way into a peer-reviewed publication, a presentation on the work from earlier this month (and reported by Science Magazine) noted that some key differences suggest that Neanderthals couldn't communicate quite as well as we can. Their brains simply weren't wired to handle it.

Humans aren’t very fast or strong. Our kneecaps are a cruel evolutionary joke and our hair makes no sense. Let’s not even get into those weird, saggy punching bags that hang off the front of some of us.

And despite all that, we conquered the world. The common understanding of how we did it, strengthened by these new Neanderoids (that's what the researchers call the lab-grown minibrains), is our ability to communicate and socialize. We developed massive tribes and communities that made us more powerful than any other animal out there.

When the Neanderthal minibrains self-assembled in the lab, they resembled a popcorn shape. The human minibrains, on the other hand, were much more spherical, according to Science Magazine. The scientists behind the project noted that the way the neurons developed and connected with one another resembled the way some neurons develop in people with Autism Spectrum Disorder. They weren't drawing parallels between Neanderthals and people with autism, they clarified — rather, the similarities in brain structure may suggest that the ability to communicate with others works differently in those brains than in brains with different neural structures.

Minibrains grown from pluripotent stem cells give scientists a chance to better understand the brain and how it develops. And they give researchers a chance to test new pharmaceuticals on a (simplified) human model, which yields better results than animal tests.

And while these minibrains are still considered laboratory tools, scientists are already working out ethical guidelines for how they should be treated, should we someday develop the ability to grow more advanced brains in a lab.

But we aren’t there yet. Scientists, to be clear, didn’t grow a living Neanderthal — they used stem cells to carry Neanderthal genes to grow a tiny, simplified version of a brain-like organ.

Still, we’ve taken steps toward better understanding our less-fortunate evolutionary uncles, and that could help us better understand how we came to be the species that we are today.

Source: Futurism

21 Jun 2018

Ontario Plans to Let Companies Access A Database of Patient Health Records

All the diseases we've ever had; the medications we took to treat them; our genetic information; the results of any test, scan, or swab to which we've ever been subjected. Our medical histories are packed with tremendous value.

In the right, thoughtful hands, these records could help researchers better understand the connections between genetics, diet, disease, and health. Pharmaceuticals could vastly improve.

In the wrong hands, these records hold a different type of value. Forbes reported last year that a medical record can be worth more than 100,000 times as much as a stolen social security number on the black market. These records can be misused even if they’re shared with the wrong people. Employers, for example, may want to know which job applicants are more likely to develop Alzheimer’s Disease; targeted advertisements could get a hell of a lot more personal.

Now, the government of Ontario — a hotbed of technological research — announced Project Spark, an initiative to make healthcare data more accessible to healthcare professionals, researchers, companies, and the people of Ontario themselves. So there’s reason to be excited, and a bit nervous.

Ontario, like all of Canada, provides a single-payer healthcare system, meaning doctor visits and other medical expenses are subsidized by the government. That means the government of Ontario has accumulated a vast, central database of its citizens' electronic health records that in other healthcare systems might be fragmented among various doctor's offices, health maintenance organizations, and medical labs.

With all of these records in the same place, the government of Ontario claims that it’s easier than ever for people to keep track of their own medical histories and stay better informed of their conditions and risks as they go about their lives. Doctors won’t need to track down elusive records or start piecing together patients’ medical histories from scratch, risking allergic reactions or ordering tests on patients who have been through it all before but weren’t able to bring their paper trail with them.

That’s one of the proposed benefits of Project Spark — a platform that lets people access and contribute to their own medical record in a way that could democratize medicine and healthcare. But the main purpose of Spark is to let innovators, researchers, and other companies “plug in” to the province’s treasure trove of healthcare data.

“This is an interesting initiative that has potential to improve health outcomes and reduce costs,” Avi Goldfarb, a tech economics researcher at University of Toronto, told Futurism.

The people of Ontario won't have to contribute additional data to Project Spark — the government isn't going to come knocking with cheek swabs for genetic tests — but the initiative does turn them and their medical histories into commodities.

Commodities that could bring about medical breakthroughs, but that could also expose more personal details than their owners may want to share.



Right now, Ontario’s health records are stored in secured databases with tight controls over who can access what. But if Project Spark, or any other holder of big data repositories, is about to open for business, it needs to take extra care in advance. Ontario only gets one shot to do this right.

If the government fails to properly protect patient privacy, or opens the doors to the wrong companies, Ontarians whose data falls into the wrong hands could face dire consequences. The team behind Project Spark has not responded to Futurism’s request for a statement on how it handles data privacy and how it will choose and prioritize among the companies and organizations vying for access to its health records (we will update this article if and when we hear back).

In the meantime, there are some ways that Ontario’s Project Spark (or any other organization that finds itself in this situation) can develop a healthy marketplace that promotes medical transparency and biomedical research without sacrificing data privacy.

“Making health data available for academic research is an important step in advancing our understanding of diseases and cures,” Christian Catalini, an associate professor of technological innovation at Massachusetts Institute of Technology and founder of MIT’s cryptoeconomics lab, told Futurism. “At the same time, when multiple entities, including for-profit ones, receive access, it becomes extremely important to ensure that the data cannot be de-anonymized, especially when used in conjunction with other private datasets,” Catalini added.

Any company or research institute that gains access to electronic health records must be barred from ever learning who it is actually studying. For instance, if a team of scientists wants to determine whether or not people with a certain genetic makeup are predisposed to develop certain conditions, the team could be required to request and receive only the pertinent data from each health record — information on the genes in question and whether or not those people developed the condition being studied. No names or identities at all.
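The field-level restriction described above can be sketched as a simple filter that strips identifiers and returns only study-approved fields. Everything here, from the record schema to the field names, is a hypothetical illustration, not anything Project Spark has published:

```python
# Illustrative sketch: release only study-approved, de-identified fields.
# The record schema and field names are hypothetical.

IDENTIFYING_FIELDS = {"name", "address", "health_card_number", "date_of_birth"}

def minimized_view(record: dict, approved_fields: set) -> dict:
    """Return only the approved, non-identifying fields of a health record."""
    # Identifiers can never be approved, even if a study requests them.
    allowed = approved_fields - IDENTIFYING_FIELDS
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "name": "Jane Doe",
    "health_card_number": "1234-567-890",
    "brca1_variant": True,
    "diagnosed_condition": "none",
}

# A study of gene/disease links requests three fields, one of them identifying.
view = minimized_view(record, {"brca1_variant", "diagnosed_condition", "name"})
print(view)  # the name is stripped even though it was requested
```

The point of the design is that the filter, not the researcher, decides what leaves the database; the research side never sees a field that could tie the data back to a person.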

“This information has potential to improve healthcare substantially overall. In the process, it is important that any individual-level data is only accessible to those who need it to improve health outcomes,” Goldfarb said. “The key will be to ensure that individuals are protected as the overall benefit accrues.” Goldfarb cited research that suggests mishandling health data, specifically by keeping information hidden when it’s most needed by practitioners, can have serious repercussions on vulnerable populations in particular.

Luckily, there are plenty of ways to make sure that a system gives researchers and private companies only the data relevant to a study (and nothing else) so that they can’t learn who has had what conditions but just that someone has.

“Digital information is easy to copy and reuse outside of its intended purpose, so I hope the initiative takes data security and privacy very seriously,” added Catalini.

Of course, once the data is out there, it’s very difficult to make sure people don’t misuse it. This is why the government of Ontario needs to be particularly careful as it moves forward. To signal to the world that it respects and values its people and their privacy, Ontario needs to very carefully vet who will have access to Project Spark. As Quartz mentioned, over 100 companies are currently in line.

To make sure that data only goes to those who will use it responsibly, like conducting medical research that could benefit those who unwittingly donated their medical records, the government of Ontario ought to vet every single application to access its health data. Not just once per company, but for every study that would analyze them.

Project Spark could set up its system such that relevant data is available, but then automatically deleted once the study or project is completed. That way, if that same data works its way into another study or some marketing company’s database, it would be easy to tell who broke the rules and cut them off down the road. A model for this already exists: journalists can sometimes access academic papers before they’re released to better prepare their articles as long as they agree not to publish their article until the paper actually comes out. Those who publish early risk losing access in the future.
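The release-then-delete model suggested above could look roughly like this: each released dataset is tied to a study-scoped grant whose copy is destroyed, and whose access is revoked, when the study completes. All names are hypothetical; this is a sketch of the idea, not any real Project Spark mechanism:

```python
# Minimal sketch of study-scoped data grants that expire with the study.
# Class and method names are illustrative assumptions.

class GrantExpired(Exception):
    """Raised when data is requested after its study has closed."""

class DataGrant:
    def __init__(self, study_id: str, data: list):
        self.study_id = study_id
        self._data = data          # the released copy, held only for the study
        self.active = True

    def read(self) -> list:
        if not self.active:
            raise GrantExpired(f"grant for study {self.study_id} was revoked")
        return list(self._data)

    def close_study(self):
        """Study complete: delete the released copy and revoke access."""
        self._data = None
        self.active = False

grant = DataGrant("study-42", [{"brca1_variant": True}])
rows = grant.read()    # works while the study is running
grant.close_study()    # data deleted once the study completes
# grant.read() would now raise GrantExpired, and the attempt could be logged
```

A failed read after expiry is itself useful evidence: it shows exactly which party tried to reuse data outside its approved study.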

Again, these are proposed solutions to the problem of gleaning valuable insight from data that ought to be kept safe and anonymous. And we don’t yet know how Project Spark plans to handle these issues.

We live in a world where large troves of data are leaked or stolen on a seemingly daily basis. Whether it’s the latest Facebook privacy scandal, the recent leak of 150 million MyFitnessPal accounts, or the Equifax leak that now feels like ancient history, evidence suggests that just about any data put online could end up stolen. If we want people to trust that their data will be used to help people and not used against them, especially where their personal medical records are concerned, programs like Project Spark will have to invest in the right kind of digital infrastructure before kicking into high gear.

Source: Futurism

20 Jun 2018
Manahel Thabet

Biohacker Who Implanted Transit Chip in Hand Evades Fine

Australian biohacker Meow-Ludo Disco Gamma Meow-Meow has made himself into a real-life cyborg.

In April 2017, Meow-Meow implanted a chip in his hand. The chip is from his Opal card, needed to ride public transportation in Sydney, and it essentially functions as a debit card — users add money to it, then whenever they use a bus, train, or other transportation service, they swipe their card to pay the fee.

Thanks to his new implant, Meow-Meow no longer had to worry about losing his card. The biohacker could just place his hand near the Opal card reader and be on his way.

All this was presumably working out OK for Meow-Meow until August 2017. That’s when the New South Wales transport authority, which issues the Opal cards, charged him with traveling without a ticket and failing to produce a ticket for transportation officials.

In March, Meow-Meow pled guilty to the charges, but argued that he shouldn’t have to pay a fine or have a conviction recorded. The court disagreed, recording his conviction and ordering him to pay a A$220 fine and A$1,000 in legal fees.

Not satisfied with that outcome, Meow-Meow appealed the decision in the District Court. This week, judge Dina Yehia issued her ruling, overturning the conviction and saving Meow-Meow from that A$220 fine. However, the biohacker will still need to cover the legal fees.

According to ABC News, Yehia based her decision on several factors:

  • Meow-Meow didn’t have any previous convictions.
  • He did pay for his ticket and wasn’t trying to skip out on paying Opal.
  • The crime wasn’t all that serious.

Meow-Meow appears satisfied with the outcome.  “I’ll have to pay costs…but won the moral victory,” he wrote in a Facebook post. “Cyborg justice has been served.”

It’s hard to say what sort of precedent this court ruling might set. Meow-Meow did violate Opal’s terms of use, which says users cannot “misuse, deface, alter, tamper with, or deliberately damage or destroy the Opal Card.” However, he insists the law is the problem — it needs to catch up with today’s technology.

Meow-Meow’s “moral victory” might speed up that process. NSW Transport Minister Andrew Constance told ABC News the government would continue to review its transport policies, and perhaps the next update will take into account the case of the chip-wielding cyborg with that really odd name.

19 Jun 2018
Manahel Thabet

Google’s AI Can Predict When A Patient Will Die

AI knows when you’re going to die. But unlike in sci-fi movies, that information could end up saving lives.

A new paper published in Nature suggests that feeding electronic health record data to a deep learning model could substantially improve the accuracy of projected outcomes. In trials using data from two U.S. hospitals, researchers showed that these algorithms could predict not only a patient's length of stay and time of discharge, but also the time of death.

The neural network described in the study uses an immense amount of data, such as a patient's vitals and medical history, to make its predictions. A new algorithm lines up the previous events of each patient's records into a timeline, which allows the deep learning model to pinpoint future outcomes, including time of death. The neural network even draws on handwritten notes, comments, and scribbles on old charts. And all of these calculations happen in record time, of course.
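The core preprocessing idea, lining up each patient's scattered records into one time-ordered sequence of events, can be sketched roughly as follows. The record format and field names are illustrative assumptions; the actual pipeline described in the paper works on records mapped to the standardized FHIR format and is far more involved:

```python
# Sketch: collapse each patient's scattered records into a time-ordered
# event sequence, the input shape a sequence model (e.g., an RNN) expects.
# Field names here are illustrative assumptions.

from collections import defaultdict

def build_timelines(records: list) -> dict:
    """Group raw records by patient and sort each group by timestamp."""
    timelines = defaultdict(list)
    for rec in records:
        timelines[rec["patient_id"]].append((rec["timestamp"], rec["event"]))
    # Sorting (timestamp, event) tuples orders each patient's history in time.
    return {pid: sorted(events) for pid, events in timelines.items()}

records = [
    {"patient_id": "p1", "timestamp": 3, "event": "note: chest pain"},
    {"patient_id": "p1", "timestamp": 1, "event": "vitals: bp 140/90"},
    {"patient_id": "p2", "timestamp": 2, "event": "lab: hba1c 7.1"},
]

timelines = build_timelines(records)
# timelines["p1"] is now ordered: the vitals reading first, then the note
```

Once every patient's history is a single ordered sequence, a model can learn from the progression of events rather than from isolated data points, which is what makes outcome prediction from raw records feasible.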

What can we do with this information, besides fear the inevitable? Hospitals could find new ways to prioritize patient care, adjust treatment plans, and catch medical emergencies before they even occur. It could also free up healthcare workers, who would no longer have to manipulate the data into a standardized, legible format.

AI, of course, already has a number of other applications in healthcare. A pair of recently developed algorithms could diagnose lung cancer and heart disease even more accurately than human doctors. Health researchers have also fed retinal images to AI algorithms to determine the chances a patient could develop one (or more) of three major eye diseases.

But those early trials operated on a much smaller scale than what Google is trying to do. More and more of our health data is being uploaded to centralized computer systems, but most of these databases exist independently, spread across various healthcare systems and government agencies.

Funneling all of this personal data into a single predictive model owned by one of the largest private corporations in the world is a solution, but it’s not an appealing one. Electronic health records of millions of patients in the hands of a small number of private companies could quickly allow the likes of Google to exploit health industries, and become a monopoly in healthcare.

Just last week, Alphabet-owned DeepMind Health came under scrutiny by the U.K. government over concerns it was able to “exert excessive monopoly power,” according to TechCrunch. And their relationship was already frayed over allegations that DeepMind Health broke U.K. laws by collecting patient data without proper consent in 2017.

Healthcare professionals are already concerned about the effect that AI will have on medicine once it’s truly embedded, and if we don’t take precautions for transparency before then. The American Medical Association admits in a statement that combining AI with human clinicians can bring significant benefits, but states that AI tools must “strive to meet several key criteria, including being transparent, standards-based, and free from bias.” The Health Insurance Portability and Accountability Act (HIPAA) passed by Congress in 1996 — 22 years is an eternity in technology terms — just won’t cut it.

Without an effective regulatory framework that encourages transparency in the U.S., it will be nearly impossible to hold these companies accountable. It may be up to private companies to ensure that AI technology will have an impact on healthcare that benefits patients, not just the companies themselves.

Source: Futurism