Month: November 2018

14 Nov 2018
Manahel Thabet

Brain changes found in self-injuring teen girls

The brains of teenage girls who engage in serious forms of self-harm, including cutting, show features similar to those seen in adults with borderline personality disorder, a severe and hard-to-treat mental illness, a new study has found.

The reduced brain volumes seen in these girls confirm biological – and not just behavioral – changes and should prompt additional efforts to prevent and treat self-inflicted injury, a known risk factor for suicide, said study lead author Theodore Beauchaine, a professor of psychology at The Ohio State University.

This research is the first to highlight physical changes in the brain in teenage girls who harm themselves.

The findings are especially important given recent increases in self-harm in the U.S., which now affects as many as 20 percent of adolescents and is being seen earlier in childhood, Beauchaine said.

“Girls are initiating self-injury at younger and younger ages, many before age 10,” he said.

Cutting and other forms of self-harm often precede suicide, which increased among 10- to 14-year-old girls by 300 percent from 1999 to 2014, according to data from the Centers for Disease Control and Prevention. During that same time, there was a 53 percent increase in suicide in older teen girls and young women. Self-injury also has been linked to later diagnosis of depression and borderline personality disorder.

In adults with borderline personality disorder, structural and functional abnormalities are well-documented in several areas of the brain that help regulate emotions.

But until this research, nobody had looked at the brains of adolescents who engage in self-harm to see if there are similar changes.

The new study, which appears in the journal Development and Psychopathology, included 20 teenage girls with a history of severe self-injury and 20 girls with no history of self-harm. Each girl underwent magnetic resonance imaging of her brain. When the researchers compared overall brain volumes of the 20 self-injuring girls with those in the control group, they found clear decreases in volume in parts of the brain called the insular cortex and inferior frontal gyrus.

These regions, which are next to one another, are two of several areas where brain volumes are smaller in adults with borderline personality disorder, or BPD, which, like cutting and other forms of self-harm, is more common among females. Brain volume losses are also well-documented in people who’ve undergone abuse, neglect and trauma, Beauchaine said.

The study also found a correlation between brain volume and the girls’ self-reported levels of emotion dysregulation, which were gathered during interviews prior to the brain scans.

13 Nov 2018
Manahel Thabet

AI should be a global public good

Efforts to develop artificial intelligence (AI) are increasingly being seen as a global race, even a new Great Game. Apart from the race between countries to become more competent and establish a competitive advantage in AI, enterprises are also in a contest to acquire AI talent, leverage data advantages, and offer unique services. In both cases, success would depend on whether AI solutions can be democratized and distributed across sectors.

The global AI race is unlike any other global competition, as the extent to which innovation is being driven by governments, the corporate sector or academia differs substantially from country to country. On average, though, the majority of innovations so far have emerged from academia, with governments contributing through procurement, rather than internal research and development.

While the share of commodities in global trade has fallen, the share of digital services has risen, such that digitalization now underwrites more than 60 percent of all trade. By 2025, half of all economic value is expected to be created in the digital sector. And as governments have searched for ways to claim a position in the value chain of the future, they have homed in on AI.

Accordingly, countries ranging from the United States, France, Finland and New Zealand to China and the United Arab Emirates all now have national AI strategies to boost domestic talent and prepare for the future effects of automation on labor markets and social programs.

Still, the true nature of the AI race remains to be seen. It most likely will not be restricted to any single area, and the most important factor determining outcomes will be how governments choose to regulate and monitor AI applications, both domestically and in an international context. China, the US and other participants not only have competing ideas about data, privacy and national sovereignty, but also divergent visions of what the 21st century world order should look like.

Thus, nationalized AI programs are a hedged bet. Until now, governments have assumed that the country that is first to the finish line will be the one that captures the bulk of AI’s potential value. That assumption may well be accurate. And yet the real issue is not whether it is true, but whether a nationalized approach is necessary, or even wise.

After all, to frame the matter in strictly national terms is to ignore how AI is developed. Whether data sets are shared internationally could determine whether machine-learning algorithms develop country-specific biases. And whether certain kinds of chips are treated as proprietary technology could determine the extent to which innovation can proceed at the global level. In light of these realities, there is reason to worry that a fragmentation of national strategies could hamper growth in the digital economy.

Moreover, in the current environment, national AI programs are competing for a limited talent pool. And though that pool will expand over time, the competencies needed for increasingly AI-driven economies will change. For example, there will be a greater demand for expertise in cybersecurity.

So far, AI developers working out of key research centers and universities have found a reliable exit strategy, and a large market of eager buyers. With corporations driving up the price for researchers, there is now a widening global talent gap between the top companies and everyone else. And because the major technology companies have access to massive, rich data stores that are unavailable to newcomers and smaller players, the market is already heavily concentrated.

Against this backdrop, it should be obvious that isolationist measures – not least trade and immigration restrictions – will be economically disadvantageous in the long run. As the changing composition of global trade suggests, most of the economic value in the future will come not from goods and services, but from the data attached to them. Thus, the companies and countries with access to global data flows will reap the largest gains.

At a fundamental level, the new global competition is for applications that can compile alternate choices and make optimal decisions. Eventually, the burden of adjusting to such technologies will fall on citizens. But before that moment arrives, it is crucial that key AI developers and governments coordinate to ensure that this technology is used safely and responsibly.

Back in the days when the countries with the best sailing and navigation technologies ruled the world, the mechanical clock was a technology available only to the few. This time is different. If we are to have superintelligence, then it should be a global public good.


12 Nov 2018

Einstein letter showed he was fearful before Nazis came to power

JERUSALEM (AP) — More than a decade before the Nazis seized power in Germany, Albert Einstein was on the run and already fearful for his country’s future, according to a newly revealed handwritten letter.

His longtime friend and fellow Jew, German Foreign Minister Walther Rathenau, had just been assassinated by right-wing extremists and police had warned the noted physicist that his life could be in danger too.

So Einstein fled Berlin and went into hiding in northern Germany. It was during this hiatus that he penned a handwritten letter to his beloved younger sister, Maja, warning of the dangers of growing nationalism and anti-Semitism years before the Nazis ultimately rose to power, forcing Einstein to flee his native Germany for good.

“Out here, nobody knows where I am, and I’m believed to be away on a trip,” he wrote in August 1922. “Here are brewing economically and politically dark times, so I’m happy to be able to get away from everything.”

The previously unknown letter, brought forward by an anonymous collector, is set to go on auction next week in Jerusalem with an opening asking price of $12,000.

Einstein was the most influential scientist of the 20th century, and his life and writings have been thoroughly researched. The Hebrew University in Jerusalem, of which Einstein was a founder, houses the world’s largest collection of Einstein material. Together with the California Institute of Technology, it runs the Einstein Papers Project. Individual auctions of his personal letters have brought in substantial sums in recent years.

The 1922 letter shows he was concerned about Germany’s future a full year before the Nazis even attempted their first coup — the failed Munich Beer Hall Putsch to seize power in Bavaria.

“This letter reveals to us the thoughts that were running through Einstein’s mind and heart at a very preliminary stage of Nazi terror,” said Meron Eren, co-owner of the Kedem Auction House in Jerusalem, which obtained the letter and offered The Associated Press a glimpse before the public sale. “The relationship between Albert and Maja was very special and close, which adds another dimension to Einstein the man and greater authenticity to his writings.”

The letter, which bears no return address, is presumed to have been written while he was staying in the port city of Kiel before embarking on a lengthy speaking tour across Asia.

“I’m doing pretty well, despite all the anti-Semites among the German colleagues. I’m very reclusive here, without noise and without unpleasant feelings, and am earning my money mainly independent of the state, so that I’m really a free man,” he wrote. “You see, I am about to become some kind of itinerant preacher. That is, firstly, pleasant and, secondly, necessary.”

Addressing his sister’s concerns, Einstein writes: “Don’t worry about me, I myself don’t worry either, even if it’s not quite kosher, people are very upset. In Italy, it seems to be at least as bad.”

Later in 1922, Einstein was awarded the Nobel Prize in physics.

Ze’ev Rosenkranz, the assistant director of the Einstein Papers Project at Caltech, said the letter wasn’t the first time Einstein warned about German anti-Semitism, but it captured his state of mind at this important juncture after Rathenau’s killing and the “internal exile” he imposed on himself shortly after it.

“Einstein’s initial reaction was one of panic and a desire to leave Germany for good. Within a week, he had changed his mind,” he said. “The letter reveals a mindset rather typical of Einstein in which he claims to be impervious to external pressures. One reason may be to assuage his sister’s concerns. Another is that he didn’t like to admit that he was stressed about external factors.”

When the Nazis came to power and began enacting legislation against Jews, they also aimed to purge Jewish scientists. The Nazis dismissed Einstein’s groundbreaking work, including his theory of relativity, as “Jewish physics.”

Einstein renounced his German citizenship in 1933 after Hitler became chancellor. The physicist settled in the United States, where he would remain until his death in 1955.

Einstein declined an invitation to serve as the first president of the newly established state of Israel but left behind his literary estate and personal papers to the Hebrew University.


10 Nov 2018
Manahel Thabet

Scientists hunt mysterious ‘dark force’ to explain hidden realm of the cosmos

Scientists are about to launch an ambitious search for a “dark force” of nature which, if found, would open the door to a realm of the universe that lies hidden from view.

The hunt will seek evidence for a new fundamental force that forms a bridge between the ordinary matter of the world around us and the invisible “dark sector” that is said to make up the vast majority of the cosmos.

The chances of success may be slim, but should such a force be found it would rank among the most dramatic discoveries in the history of physics. The best theory of reality that physicists have explains only 4% of the observable universe. The rest is a mystery made up of dark matter, the strange material that lurks around galaxies, and the even more baffling dark energy, a substance called upon to explain the ever-accelerating expansion of the universe.

“At the moment, we don’t know what more than 90% of the universe is made of,” said Mauro Raggi, a researcher at the Sapienza University of Rome. “If we find this force it will completely change the paradigm we have now. It would open up a new world and help us to understand the particles and forces that compose the dark sector.”

Physicists, to date, know of only four basic forces of nature. The electromagnetic force allows for vision and mobile phone calls, but also stops us falling through our chairs. Without the so-called strong force, the innards of atoms would fall apart. The weak force operates in radiation, and gravity – the most pervasive of nature’s forces – keeps our feet rooted to the ground.

But there may be other forces that have gone unnoticed. These would shape the behaviour of the so far unknown particles that constitute dark matter, and could potentially exert the most subtle effects on the forces we are more familiar with.

This month, Raggi and his colleagues will turn on an instrument at the National Institute of Nuclear Physics near Rome which is designed to hunt down a possible fifth force of nature. Known as Padme, for Positron Annihilation into Dark Matter Experiment, the machine will record what happens when a diamond wafer a tenth of a millimetre thick is blasted with a stream of antimatter particles called positrons.

When positrons slam into the diamond wafer, they immediately merge with electrons and vanish in a faint burst of energy. Normally, the energy released is in the form of two particles of light called photons. But if a fifth force exists in nature, something different will happen. Instead of producing two visible photons, the collisions will occasionally release only one, alongside a so-called “dark photon”. This curious, hypothetical particle is the dark sector’s equivalent of a particle of light. It carries the equivalent of a dark electromagnetic force.

Unlike normal particles of light, any dark photons produced in Padme will be invisible to the instrument’s detector. But by comparing the energy and direction of the positrons fired in, with whatever comes out, scientists can tell if an invisible particle has been created and work out its mass. Though normal photons are massless, dark photons are not, and Padme will search for those up to 50 times heavier than an electron.
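The reconstruction described above follows directly from energy-momentum conservation: subtracting the detected photon’s four-momentum from the known initial state (beam positron plus an electron at rest) leaves the four-momentum of whatever went undetected, and its invariant mass falls out of the relativistic relation m² = E² − |p|². A minimal sketch of that calculation is below, in natural units with energies in GeV; it is an illustration of the kinematics only, not PADME’s actual analysis code, and the 0.55 GeV beam energy used in the comment is an assumption for the example.

```python
import math

M_E = 0.000511  # electron mass in GeV (natural units, c = 1)

def missing_mass_sq(e_beam, e_gamma, theta_gamma):
    """Missing mass squared for e+ e- -> gamma + X, with the target
    electron at rest and one photon detected with energy e_gamma at
    polar angle theta_gamma (radians) relative to the beam axis."""
    # Beam positron travels along z with energy e_beam
    p_beam = math.sqrt(e_beam**2 - M_E**2)
    # Total initial four-momentum: beam positron + electron at rest
    e_tot, pz_tot = e_beam + M_E, p_beam
    # The detected photon is massless, so |p| = E
    pz_g = e_gamma * math.cos(theta_gamma)
    pt_g = e_gamma * math.sin(theta_gamma)
    # Whatever escaped detection carries the remaining energy/momentum
    e_miss = e_tot - e_gamma
    pz_miss = pz_tot - pz_g
    pt_miss = -pt_g  # transverse momentum must balance
    return e_miss**2 - pz_miss**2 - pt_miss**2

# With nothing detected, the missing mass squared equals the total
# invariant mass squared available: s = 2 * m_e * (E_beam + m_e).
# For an assumed 0.55 GeV beam, sqrt(s) is about 0.024 GeV -- roughly
# the "50 electron masses" reach quoted in the article.
```

If the second, undetected particle is an ordinary photon, the computed missing mass clusters at zero; a dark photon would instead show up as an excess at some fixed nonzero mass, which is exactly the signature the experiment looks for.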

The dark photon, if it exists, would have an imperceptible influence on what makes up the world we see. But knowing its mass, and the kinds of particles it can break down into, would provide the first glimpse of what makes up the bulk of the universe that is beyond our perception.

The Padme experiment will run until at least the end of the year, but there are tentative plans to move the instrument to Cornell University in 2021. There it would be hooked up to a more powerful particle accelerator than the one in Italy to broaden its search for dark photons.

Other laboratories around the world are also looking for dark photons. Bryan McKinnon, a research fellow at Glasgow University, is involved in the search for the particle at the Thomas Jefferson national accelerator facility in Virginia. “The dark photon, if it exists, is effectively a portal,” he said. “It lets us peer into the dark sector to see what is happening. It won’t open the floodgates, but it will allow us to have a little look.”

Physicists have little idea how complex the dark sector might be. There may be no new forces to discover. Dark matter itself may be shaped by gravity alone and made up of only one type of particle. But it may be a far richer realm, where new kinds of invisible particles and forces wait to be found.

07 Nov 2018

Digging deeper into AI: Why scratching the surface isn’t enough

There’s no question that artificial intelligence (AI) has become a game-changer for businesses today. For forward-looking leaders, AI is increasingly understood to be the key to transforming customer experiences, delivering new, digitally driven value propositions, and entering new markets. But most business leaders today aren’t capitalizing on AI’s full potential. That is largely because they’re limiting their focus to the technology itself — rather than focusing on the much bigger potential of AI as an engine for growth.

To reap the full benefits of AI, executives must dig deeper — they must evolve the technology from a hot new trend to a seamless enabler, woven into the fabric of the enterprise and truly working alongside and augmenting people. When done right, AI has the potential to allow companies to not only do different things, but also to do things differently. By more deeply understanding AI and the holistic value it can bring, companies can become more valuable to their ecosystem and maintain a competitive position in a world that will soon be powered by AI. But that also requires the ability to see beyond the hype and tackle the challenges and complexities to reach AI’s full potential.

A multi-pronged approach to AI

Firms across industries are starting to recognize the value of ingraining AI programs into all aspects of their business. While 57 percent of companies are still in the early investment or pilot stages of AI initiatives and have yet to deploy fully sustainable programs across their organization, some are starting the journey — and already reaping the benefits.

One large financial services firm, for example, is working to incorporate AI across all levels of its business by exploring areas such as intelligent automation that can automate analyst and operational workflows, as well as intelligent products that can deliver a new class of algorithm-driven funds for clients.

Data is the new currency

For the best learning and results, AI demands vast and diverse data. Accordingly, leadership teams are quickly realizing that data from their own company is highly valuable — and that data from a network of related companies is even more valuable.

AI will only be as good as the breadth of data used to “train it,” which is why it’s so important for organizations to leverage the data across an ecosystem rather than just within a company. According to Accenture Strategy research, 44 percent of executives across industries say the value of ecosystems lies in access to new customers, and with those customers comes data. (Note: The authors of this article both work for Accenture Strategy.)

The faster and more completely companies buy into ecosystems of data, the better able they will be to compete. AI-fueled insights will increasingly require vast data marketplaces to be truly targeted. It is these very insights that are key to business growth drivers — transforming experience, developing new digitally enabled products and services, and serving previously less attractive markets.

05 Nov 2018
Manahel Thabet

Daytime Naps Boost Brain Power in Mysterious Ways

Recent sleep research has unearthed some fascinating correlations between the duration of time someone spends sleeping and his or her cognitive functions. One of the most extensive studies ever conducted on the link between sleep duration and cognition recently reported that sleeping more or less than seven to eight hours per night impairs specific cognitive abilities. Surprisingly, the brain researchers from Western University in Canada found that oversleeping can be just as detrimental to cognition as sleeping too little.

This massive worldwide survey also identified that getting too much sleep isn’t a problem for most of us; on average people around the globe only sleep about 6.3 hours per night. Unfortunately, this creates a sleep deficit that can cause the body, brain, and mind to function at a subpar level.

The good news is that another study by researchers at the University of Bristol in the UK recently reported that taking a power nap can improve domains of cognitive function associated with processing information below conscious awareness. This study, “Nap‐Mediated Benefit to Implicit Information Processing Across Age Using an Affective Priming Paradigm,” was recently published in the Journal of Sleep Research. The primary goal of this study was to identify if a relatively short period of sleep helps people process unconscious information and how this might improve automatic reaction times.

For this pioneering research on how short bouts of sleep improve memory consolidation of implicit tasks, the researchers hid information by “masking” it and then presenting it to study participants without their conscious awareness. Although the “masked” information was hidden from conscious perception, this research shows that it was being absorbed on a subliminal level somewhere in the brain.

For this study, 16 healthy participants practiced a masked task (unconscious processing) and a control task that involved conscious information processing. One group stayed awake after practicing both tasks while the other group took a 90-minute nap. Then, participants performed both tasks again while researchers used an EEG to compare pre- and post-nap brain activity.

The group that stayed awake throughout the experiment did not show significant improvements on either task. Interestingly, the researchers found that taking a nap improved the processing speed of the masked task — which required learning on an unconscious level — but not the control task, which involved explicit memory and conscious awareness. According to the researchers, this suggests sleep-specific improvements in subconscious processing and that information acquired during wakefulness can be processed in deeper, qualitative ways during short bouts of sleep.

“The findings are remarkable in that they can occur in the absence of initial intentional, conscious awareness, by processing of implicitly presented cues beneath participants’ conscious awareness. Further research in a larger sample size is needed to compare if and how the findings differ between ages, and investigation of underlying neural mechanisms,” co-author Liz Coulthard of the University of Bristol Medical School: Translational Health Sciences said in a statement.

04 Nov 2018

Make sure you’re not investing in zombie AI

Ever notice how images of robots almost always accompany AI propaganda? It’s certainly an effective tactic. Robots conjure up images of highly intelligent solutions poised to create far more efficient and profitable businesses. Yet there are rarely details available about how these AI technologies work. And as a result, many AI solutions are a ‘black box’ to users.

What’s in the box?

Software developers and marketers often lead customers to believe there’s a robot in the box, when in fact it may be just an artificially intelligent zombie dressed up in a robot costume.

A zombie is a little like a robot. It’s self-sufficient. It doesn’t need a lot of guidance or direction to do the things it does. But it’s not truly intelligent.

And a zombie AI system operates independently, with no human interaction. On one hand, a lack of interactivity is positive as it can mean ease of use; on the other, there’s no way to train a zombie to do something else, or to do its job better. Users are not only unable to apply changes but are shielded from the decisions and logic used in creating the system. With no awareness or understanding, there can be no accountability, nor hope for progress.

While most organizations employ talented developers and technicians capable of ‘teaching’ AI systems to overcome weaknesses, the absence of transparency precludes their ability to do so.

True AI

Among the throngs of zombie AI systems, though, exist a few quality AI systems. These systems are highly intelligent, and though they have some minor human dependencies, they produce incredibly reliable results. The developers of these systems want customers to have a good grasp of the ‘magic’ behind the intelligence – ‘magic’ that really amounts to specific settings, mechanics, controls, even known limitations.

True AI can be recognized by its interactivity and trainability. These systems combine intuitive interfaces with algorithms, instructions that tell the robotic brain what logic to use. And with a little coaching along the way, true AI gets smarter and learns to differentiate right from wrong.

Compared to zombie systems, true AI systems require more time investment initially but are typically more sustainable in the long run because the coaching continually improves them over time.

Robots will reign

Any investment in a zombie system is a waste of resources. Businesses that venture down that path will eventually want more control. They will want the ability to coach. When variables require adjusting or errors occur, they’ll want to be able to make fixes and modifications. At the very least, they’ll want a basic understanding of how the system works.

What they’ll really want is the robot they thought they were getting the first time.

Many business leaders who are disappointed in the outcome of their zombie are focused on the exorbitant amount of time they invested into making the system ‘fit.’ Often, they’ve gone so far as to change their business processes to accommodate the zombie brain. And sadly, they continue to sink money and resources into a solution that will never do what they expected.

Don’t make that mistake. Bury your zombie AI system before it’s too late. Look for true AI. Seek out the system in the ‘glass box,’ or at least one with an access panel into its robotic brain. It’s the AI frameworks that effectively balance transparency and trainability with performance and ease of use that will deliver the highest ROI – and separate the zombies from the robots.


03 Nov 2018
Manahel Thabet

How a small change will reduce distortion in measuring innovation

When your child is diabetic, a few minutes can make a big difference, and it pays to have real-time access to their blood sugar numbers. But what if no one sells a product that can do that? You build one, like the open-source community that developed the wireless blood sugar monitor Nightscout did.

The product serves a public good—keeping diabetics safe—and the community offers the plans to build it for free online. But until now, such clear examples of innovation haven’t been counted as such under the Organisation for Economic Co-operation and Development’s definition. That changed on Oct. 24: The organization’s new edition of the Oslo Manual, the guidebook for collecting and using data on industrial innovation used by most nations, now includes a revised definition of an innovation, removing the requirement that one must first be commercialized to be counted.

MIT Sloan economist Eric von Hippel, in his book Free Innovation, argues that household or “free” innovations—those developed by people on their own time and dollar—aren’t receiving enough support. That’s because they aren’t being recognized for what they are by public policymakers, even though they’re responsible for a significant proportion of new ideas and innovation.

Under the previous OECD definition, an innovation only qualified as such if it had been introduced to the market. That “onto the market” requirement meant that innovators in the household sector—representing tens of millions of people spending tens of billions of dollars on product development and modification per year—didn’t get credit for their innovations, because 90 percent of them simply give their innovations away.

In mountain biking, for example, participants in the sport—who are now the consumers of the mountain bike industry—designed and built the first mountain bikes, but didn’t receive any credit in the statistics, von Hippel said.

“Basically, the end result is a distorted system where businesses get a lot of credit for a lot of innovations they didn’t do. This, in turn, biases public policy toward the needs of companies and their intellectual property rights,” von Hippel said. “Now, finally, with a better OECD definition and better data we’ll be in a position to allocate innovations to the people who actually develop them. That in turn will make household sector innovation visible to government policymakers, and induce people to make a more level playing field where both consumer innovators and producer innovators are acknowledged and supported.”

The new OECD definition reads:

An innovation is a new or improved product or process, or combination thereof, that differs significantly from the unit’s previous products or processes and that has been made available to potential users (product) or brought into use by the unit (process). (This general definition uses the generic term “unit” to describe the actor responsible for innovations. It refers to any institutional unit in any sector, including households and their individual members.)

Tinkering leads to tech breakthroughs

Research by von Hippel and colleagues into consumer innovation across 10 nations found that the phenomenon was both very general and very important—generating a research and development capital stock of about $250 billion in the U.S. alone. These consumer hacks addressed all product areas of interest to consumers from new medical devices to sports and software hacks.

“If there is nothing out there, consumers will build it for themselves,” he said. “Ninety percent of these people just give [innovations] away for free, and the other 10 percent is where a lot of entrepreneurship comes from. That means 90 percent of innovation occurring in the household sector hasn’t been counted.”

That means efforts to support household innovators, like building more maker spaces, aren’t being carried out, von Hippel said. “If the household sector is developing many generally valuable innovations, this increases social welfare just as producer innovation does, and society ought to level the playing field and support both household sector innovation and producer innovation,” he said.

01 Nov 2018
Manahel Thabet

AI under the spotlight

The ethical dilemmas inherent in artificial intelligence (AI) will be the focus of a seminar held at the State Library Victoria, in Melbourne, Australia on 13 November.

Professors Toby Walsh and Sharon Oviatt will sit down to discuss and answer questions about the future of this technology in a forum to be moderated by Kylie Ahern, co-founder of Cosmos magazine.

The event is billed as “The ethical dilemmas of AI – Are we sleepwalking into an AI future” and is open to the public.

Sharon Oviatt is a professor at Monash University, known for her work in human-computer interaction. Toby Walsh is a professor at the University of New South Wales in Australia, with a focus on limiting AI “to ensure the public good”.

Ahern says the event is open to “anyone with an interest in AI”.

“This talk will help people think bigger about AI and gain a better understanding of what it is and how it might impact us,” she adds.

The panellists are expected to discuss Australia’s investment in the field, the development of commercialised technology and use of this technology to support human needs, activities and values.

Ahern says she plans to ask the panellists about the most impactful and innovative research projects in the field today, as well as questions related to the timeframe for widespread AI in our daily lives.

“Outside of academia we don’t have a great understanding of AI, the history of AI research or where it’s headed,” she says.

“What should we be scared of and excited about? What are the safety measures we need to implement? How will it change us and our behaviours?”

The event is part of the Monash University Dean’s Seminar Series.