Author: Manahel Thabet

05 Nov 2019
THIS POWERFUL MORNING PROCESS WILL CHANGE YOUR LIFE – PRE-PAVE WITH INTENTION


The most important time… is the time you give to yourself.

And the most important time to do that is first thing in the morning… because in the morning you set the intention for the rest of your day.

If you jump out of bed late, rushed, stressed, and in your head – that is what you are pre-paving for the rest of your day, and, in the case of most people… the rest of your life: More rush.
More stress. Less of the quality life you deserve.

The key is to get up early enough to allow yourself some time alone. Time to get clear about how you want to feel this day.

Time for intention. Time to get into the energy space of gratitude. Time for meditation. It’s about pre-paving what you want for this day, and your LIFE.

Setting a clear intention and energy, so you attract those things into your experience.

Set the intention for what you want out of the day ahead and get grateful in advance.

This will make sure that you are an energy match to it, and it will soon be in your experience as you are setting the intention for it to be so.

Just go on a rampage of gratitude, of intention and appreciation… It might go something like this:

 

I am grateful today for every moment of calm, every moment of peace, every moment of real connection.
I am grateful for amazing conversations, grateful for every laugh and smile today.
I am grateful for every moment of happiness, especially when I can give that moment to someone else.
I am grateful for every hug. Every kiss. Every moment of real love.
I am grateful for every moment of true presence, when I really feel connected to everyone and everything around me.

As I write these words, I really feel each moment as if it is happening. That is perhaps the most important part… the feeling of it.

Putting yourself in that feeling state as if it is really happening. Raising your vibration to that feeling.

What that does is set the intention for the day… it puts those amazing things in your conscious mind, so your attention for this day is zeroed in on finding those things and making them a reality.

This is such a powerful process.

Everything in life is energy. How you show up each day is energy. Your energy is determined by your intention and how you feel.

So make it a priority to feel good. Make it a priority to give yourself time every morning.

Time to meditate, release stress, and increase calm. Time in gratitude and pre-paving intention to get in the right energy.

Use whatever words feel natural to you when setting your gratitude intention. Whatever you are really grateful for, and whatever you want to show up in your experience as a FEELING.

Source: https://iamfearlesssoul.com/pre-pave-with-intention/

 

04 Nov 2019
We Need AI That Is Explainable, Auditable, and Transparent


Every parent worries about the influences our children are exposed to. Who are their teachers? What movies are they watching? What video games are they playing? Are they hanging out with the right crowd? We scrutinize these influences because we know they can affect, for better or worse, the decisions our children make.

Just as we concern ourselves with who’s teaching our children, we also need to pay attention to who’s teaching our algorithms. Like humans, artificial intelligence systems learn from the environments they are exposed to and make decisions based on biases they develop. And like our children, we should expect our models to be able to explain their decisions as they develop.

As Cathy O’Neil explains in Weapons of Math Destruction, algorithms often determine what college we attend, if we get hired for a job, if we qualify for a loan to buy a house, and even who goes to prison and for how long. Unlike human decisions, these mathematical models are rarely questioned. They just show up on somebody’s computer screen and fates are determined.

In some cases, the errors of algorithms are obvious, such as when Dow Jones reported that Google was buying Apple for $9 billion and the bots fell for it or when Microsoft’s Tay chatbot went berserk on Twitter — but often they are not. What’s far more insidious and pervasive are the more subtle glitches that go unnoticed, but have very real effects on people’s lives.

Once you get on the wrong side of an algorithm, your life immediately becomes more difficult. Unable to get into a good school or to get a job, you earn less money and live in a worse neighborhood. Those facts get fed into new algorithms and your situation degrades even further. Each step of your descent is documented, measured, and evaluated.

Consider the case of Sarah Wysocki, a fifth grade teacher who — despite being lauded by parents, students, and administrators alike — was fired from the D.C. school district because an algorithm judged her performance to be sub-par. Why? It’s not exactly clear, because the system was too complex to be understood by those who fired her.

Make no mistake, as we increasingly outsource decisions to algorithms, the problem has the potential to become even more Kafkaesque. It is imperative that we begin to take the problem of AI bias seriously and take steps to mitigate its effects by making our systems more transparent, explainable, and auditable.

Sources of Bias

Bias in AI systems has two major sources: the data sets on which models are trained, and the design of the models themselves. Biases in training data can be subtle, such as when smartphone apps are used to monitor potholes and alert authorities to contact maintenance crews. That may be efficient, but it’s bound to undercount poorer areas where fewer people have smartphones.

In other cases, data that is not collected can affect results. Analysts suspect that’s what happened when Google Flu Trends predicted almost twice as many flu cases in 2013 as there actually were. What appears to have happened is that increased media coverage led to more searches by people who weren’t sick.

Yet another source of data bias arises when human biases carry over into AI systems. For example, biases in the judicial system affect who gets charged and sentenced for crimes. If that data is then used to predict who is likely to commit crimes, those biases will carry over. In other cases, humans are used to tag data and may directly input bias into the system.


This type of bias is pervasive and difficult to eliminate. In fact, Amazon was forced to scrap an AI-powered recruiting tool because it could not remove gender bias from the results. The tool unfairly favored men because the training data taught the system that most of the firm’s previously hired employees who were viewed as successful were male. Even when Amazon eliminated any specific mention of gender, certain words that appeared more often in male résumés than in female ones were identified by the system as proxies for gender.

A second major source of bias results from how decision-making models are designed. For example, if a teacher’s ability is evaluated based on test scores, then other aspects of performance, such as taking on children with learning differences or emotional problems, fail to register, and may even unfairly penalize the teacher. In other cases, models are constructed according to what data is easiest to acquire, or the model is overfit to a specific set of cases and is then applied too broadly.

Overcoming Bias

With so many diverse sources of bias, we do not think it is realistic to believe we can eliminate it entirely, or even substantially. However, what we can do is make our AI systems more explainable, auditable, and transparent. We suggest three practical steps leaders can take to mitigate the effects of bias.

First, AI systems must be subjected to vigorous human review. For example, one study cited by a White House report during the Obama administration found that while machines had a 7.5% error rate in reading radiology images, and humans had a 3.5% error rate, when humans combined their work with machines the error rate dropped to 0.5%.

Second, much like banks are required by law to “know their customer,” engineers who build systems need to know their algorithms. For example, Eric Haller, head of Datalabs at Experian, told us that unlike decades ago, when the models they used were fairly simple, in the AI era his data scientists need to be much more careful. “In the past, we just needed to keep accurate records so that, if a mistake was made, we could go back, find the problem and fix it,” he told us. “Now, when so many of our models are powered by artificial intelligence, it’s not so easy. We can’t just download open-source code and run it. We need to understand, on a very deep level, every line of code that goes into our algorithms and be able to explain it to external stakeholders.”

Third, AI systems, and the data sources used to train them, need to be transparent and available for audit. Legislative frameworks like GDPR in Europe have made some promising first steps, but clearly more work needs to be done. We wouldn’t find it acceptable for humans to be making decisions without any oversight, so there’s no reason why we should accept it when machines make decisions.

Perhaps most of all, we need to shift from a culture of automation to augmentation. Artificial intelligence works best not as some sort of magic box you use to replace humans and cut costs, but as a force multiplier that you use to create new value. By making AI more explainable, auditable and transparent, we can not only make our systems more fair, we can make them vastly more effective and more useful.

Source: https://hbr.org/2019/10/we-need-ai-that-is-explainable-auditable-and-transparent

03 Nov 2019
AI May Not Kill Your Job—Just Change It


Don’t fear the robots, according to a report from MIT and IBM. Worry about algorithms replacing any task that can be automated. 

Martin Fleming doesn’t think robots are coming to take your jobs. The chief economist at IBM, Fleming says those worries aren’t backed up by the data. “It’s really nonsense,” he says. A new paper from MIT and IBM’s Watson AI Lab shows that for most of us, the automation revolution probably won’t mean physical robots replacing human workers. Instead, it will come from algorithms. And while we won’t all lose our jobs, those jobs will change, thanks to artificial intelligence and machine learning.

Fleming and a team of researchers analyzed 170 million online US job listings, collected by the job analytics firm Burning Glass Technologies, that were posted between 2010 and 2017. They found that, on average, tasks such as scheduling or credential validation, which could be performed by AI, appeared less frequently in the listings in more recent years. The recent listings also included more “soft skills” requirements like creativity, common sense, and judgment. Fleming says this shows that work is being re-sorted: AI is taking over the more easily automated tasks, and workers are being asked to do things that machines can’t do.
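The study’s core measurement can be pictured with a toy frequency count over listings; the listings, task names, and numbers below are invented for illustration, not data from the MIT and IBM paper:

```python
# Toy sketch of the task-frequency analysis described above: measure how
# often a task appears in job listings, year by year. The listings and
# task names here are invented for illustration.

listings = [
    {"year": 2010, "tasks": ["scheduling", "credential validation"]},
    {"year": 2010, "tasks": ["scheduling", "creativity"]},
    {"year": 2017, "tasks": ["creativity", "judgment"]},
    {"year": 2017, "tasks": ["creativity", "common sense"]},
]

def task_share(year, task):
    """Fraction of the given year's listings that mention the task."""
    year_listings = [l for l in listings if l["year"] == year]
    hits = sum(task in l["tasks"] for l in year_listings)
    return hits / len(year_listings)

# In this toy data, an automatable task (scheduling) shrinks over time
# while a soft skill (creativity) grows.
scheduling_trend = task_share(2017, "scheduling") - task_share(2010, "scheduling")
creativity_trend = task_share(2017, "creativity") - task_share(2010, "creativity")
```

The real study works at vastly larger scale and controls for wages and occupations, but the underlying signal is this kind of year-over-year shift in task mentions.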

If you’re in sales, for example, you’ll spend less time figuring out the ideal price for your product, because an algorithm can determine the optimal price to maximize profits. Instead, you might spend more time managing customers or designing attractive marketing materials or websites.
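The pricing algorithm alluded to here can be sketched as a simple search for the profit-maximizing price; the linear demand curve, the unit cost, and all the numbers are illustrative assumptions, not anything from the article:

```python
# Minimal pricing sketch: pick the price that maximizes profit, assuming
# a hypothetical linear demand curve (all figures are illustrative).

def profit(price, unit_cost=4.0, base_demand=100.0, slope=5.0):
    """Profit at a price, with demand = base_demand - slope * price."""
    demand = max(base_demand - slope * price, 0.0)
    return (price - unit_cost) * demand

# Scan candidate prices from $0.00 to $20.00 in 10-cent steps.
candidates = [p / 10 for p in range(0, 201)]
best_price = max(candidates, key=profit)  # -> 12.0 under these assumptions
```

A real system would estimate the demand curve from sales data rather than assume it; the point is only that, once such a model exists, the price decision reduces to an optimization a machine can do.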

In the study, researchers divided the listings into three groups based on the advertised pay, then examined how different tasks were being valued. What they found is that how we value tasks may be starting to change.

Design skills, for example, were in particularly high demand and increased the most across wage brackets. Within personal care and services occupations—which generally are low-wage—pay for jobs that included design tasks, such as presentation design or digital design, increased by an average of $12,000 over the study period, after inflation. The same can be said of higher wage earners in business and finance who have deep industry expertise that can’t yet be matched by AI. Their wages went up more than $6,000 annually.

Some low-wage occupations like home health care, hairstyling, or fitness training are insulated from the impact of AI because those skills are hard to automate. But middle-wage earners are starting to feel the squeeze. Their wages are still rising, but after adjusting for the shifts in tasks for those jobs, the report found, those wages weren’t growing as quickly as low-wage and high-wage jobs. In some industries, like manufacturing and production, wages actually decreased. There are also fewer middle-wage jobs. Some are getting simpler and being replaced by low-wage jobs. Others now require more skills and are becoming high wage.

Fleming is optimistic about what AI tools can do for work and for workers. Just as automation made factories more efficient, AI can help white-collar workers be more productive. The more productive they are, the more value they add to their companies. And the better those companies do, the higher wages get. “There will be some jobs lost,” he says. “But on balance, more jobs will be created both in the US and worldwide.” While some middle-wage jobs are disappearing, others are popping up in industries like logistics and health care, he says.

As AI starts to take over more tasks, and the middle-wage jobs start to change, the skills we associate with those middle-class jobs have to change too. “I think that it’s rational to be optimistic,” says Richard Reeves, director of the Future of the Middle Class Initiative at the Brookings Institution. “But I don’t think that we should be complacent. It won’t just automatically be OK.”

The report says these changes are happening relatively slowly, giving workers time to adjust. But Reeves points out that while these changes may seem incremental now, they are happening faster than they used to. AI has been an academic project since the 1950s. It remained a niche concept until 2012, when tests showed neural networks could make speech and image recognition more accurate. Now we use it to complete emails, analyze surveillance footage, and decide prison sentencing. The IBM and MIT researchers used it to help sort through all the data they analyzed for this paper.

That fast adoption means that workers are watching their jobs change. We need a way to help people adjust from the jobs they used to have to the jobs that are now available. “Our optimism actually is rather contingent on our actions, on actually making good on our promise to reskill,” says Reeves. “We are rewiring our economy but we haven’t rewired our training and education programs.”

Read more: https://www.wired.com/story/ai-not-kill-job-change-it/

02 Nov 2019
AI Stats News: 64% Of Workers Trust A Robot More Than Their Manager


Recent surveys, studies, forecasts and other quantitative assessments of the progress of AI highlighted workers’ positive attitudes toward AI and robots, challenges in implementing enterprise AI, the perceived benefits of AI in financial services, and the impact of AI on the business of Big Tech.

AI business adoption, attitudes and expectations

50% of workers are currently using some form of AI at work, compared to only 32% last year.
Workers in China (77%) and India (78%) have adopted AI at more than twice the rate of those in France (32%) and Japan (29%).
65% of workers are optimistic, excited and grateful about having robot co-workers, and nearly a quarter report having a loving and gratifying relationship with AI at work.
64% of workers would trust a robot more than their manager, and half have turned to a robot instead of their manager for advice.
Workers in India (89%) and China (88%) are more trusting of robots over their managers; less so in the U.S. (57%), UK (54%) and France (56%).
82% think robots can do things better than their managers, including providing unbiased information (26%), maintaining work schedules (34%), problem solving (29%) and managing a budget (26%).
Managers are better than robots at understanding workers’ feelings (45%), coaching them (33%) and creating a work culture (29%).
[Oracle survey of 8,370 employees, managers and HR leaders in 10 countries]

The growth of AI applications in deployment was actually smaller this year than last: 19% of CIOs now say their company has deployed AI, up from 14% last year, and far lower than the 23% of companies that thought they would newly roll out AI in 2019 [Gartner]

74% of financial services institution (FI) executives said AI was extremely or very important to the success of their companies today, while 53% predicted it would be extremely important three years from now.
About 75% expected that over the next three years their organizations will gain major or significant benefits from AI in increased efficiency/lower costs.
While 61% of FI executives said they knew about an AI project at their companies, only 29% of these executives reported on a project that had been fully implemented.
Only 29% of AI projects are in the full implementation phase, with 46% still pilots, 35% in proof of concept and 24% in initial planning.
Challenges include securing senior management commitment (45%) and securing adequate budget (44%).
Technologies used in AI projects include virtual agents (72%) and natural language analysis (56%).
50% found it extremely or very challenging to secure talent, and 49% found it extremely or very challenging to attract and retain professionals with appropriate skills.
[Cognizant survey of FI executives in the US and Europe]

82% of CEOs say they have a digital initiative or transformation program, but only 23% think their organizations are very effective at harvesting the results of digital, and even fewer CIOs would say they are very strong at this [Gartner surveys of CEOs and CIOs]

Read more: https://www.forbes.com/sites/gilpress/2019/11/01/ai-stats-news-64-of-workers-trust-a-robot-more-than-their-manager/#777497912b21

31 Oct 2019
Employees Worldwide Welcome ‘AI Coworkers’ To The Office


Last year, many Americans worried that artificial intelligence (AI) might replace them at work. This year, employees around the world are wondering why their employers don’t provide them with the kind of AI-enabled technology they’re starting to use at home. 

That’s one way to think about the results of a second annual survey about AI in the workplace, conducted by Oracle and research firm Future Workplace. This year, 50% of survey respondents say they’re currently using some form of AI at work—a major leap compared to only 32% in last year’s survey.


Source: https://www.forbes.com/sites/oracle/2019/10/31/employees-worldwide-welcome-ai-coworkers-to-the-office/#47aa68266681

30 Oct 2019
Using AI to Eliminate Bias from Hiring


Like any new technology, artificial intelligence is capable of immensely good or bad outcomes. The public seems increasingly focused on the bad, especially when it comes to the potential for bias in AI. This concern is both well-founded and well-documented. But what is AI? It is the simulation of human processes by machines. The fear of biased AI ignores a critical fact: the deepest-rooted source of bias in AI is the human behavior it is simulating, embodied in the biased data set used to train the algorithm. If you don’t like what the AI is doing, you definitely won’t like what humans are doing, because AI is purely learning from humans.

Let’s focus on hiring. The status quo of hiring is deeply flawed and quite frankly dystopian for three primary reasons.

Unconscious human bias makes hiring unfair. The typical way of reviewing applicants prior to an interview is through recruiters reviewing résumés. Numerous studies have shown this process leads to significant unconscious bias against women, minorities and older workers.

Large pools of applicants are being ignored. LinkedIn and other sourcing platforms have been so successful that, on average, 250 applicants apply for any open role. This translates into millions of applicants for a few thousand open roles. This process obviously cannot be handled manually. So, recruiters limit their review of the applicant pool to the 10% to 20% they think will show most promise: those coming from Ivy League campuses, passive candidates from competitors of the companies seeking to fill positions, or employee-referral programs. But guess what? Top colleges and employee-referral programs are much less diverse than the broader pool of applicants submitting résumés.

Traditional hiring tools are already biased. This is permitted by a loophole in U.S. law: Federal regulations state that a hiring tool can be biased if it is job-related. “Job-related” means that the people who are successful in a role show certain characteristics. But if all “successful employees” are white men, due to a history of biased human hiring practices, then it is almost certain that your job-related hiring assessment will bias towards white men and against women and minorities. An African American woman from a non-Ivy League college who is lucky enough to become part of the pipeline, whose résumé is reviewed, and who passes the human recruiter evaluating her résumé may then be asked to take a biased assessment.

Is it any wonder we struggle to hire a diverse workforce? What has led to today’s chronic lack of diversity, and what will continue to stunt diversity, are the human paradigms in place today, not AI.

AI holds the greatest promise for eliminating bias in hiring for two primary reasons:

1. AI can eliminate unconscious human bias. Many current AI tools for recruiting have flaws, but they can be addressed. A beauty of AI is that we can design it to meet certain beneficial specifications. A movement among AI practitioners like OpenAI and the Future of Life Institute is already putting forth a set of design principles for making AI ethical and fair (i.e., beneficial to everyone). One key principle is that AI should be designed so it can be audited and the bias found in it can be removed. An AI audit should function just like the safety testing of a new car before someone drives it. If standards are not met, the defective technology must be fixed before it is allowed into production.

2. AI can assess the entire pipeline of candidates rather than forcing time-constrained humans to implement biased processes to shrink the pipeline from the start. Only by using a truly automated top-of-funnel process can we eliminate the bias that comes from shrinking the initial pipeline to a size a manual recruiter can handle. It is shocking that companies today unabashedly admit that only a small portion of the millions of applicants who apply are ever reviewed. Technologists and lawmakers should work together to create tools and policies that make it both possible and mandatory for the entire pipeline to be reviewed.

Additionally, this focus on AI fairness should have us evaluate existing pre-hire assessments with the same standards. The U.S. Equal Employment Opportunity Commission (EEOC) wrote the existing fair-hiring regulations in the 1970s — before the advent of the public internet and the explosion in the number of people applying for each job. The EEOC didn’t anticipate modern algorithms that are less biased than humans yet also able to evaluate a much larger, more diverse pipeline. We need to update and clarify these regulations to truly encourage equal opportunity in hiring and allow for the use of algorithmic recruiting systems that meet clear criteria. Some precedents for standards have already occurred. The California State Assembly passed a resolution to use unbiased technology to promote diversity in hiring, and the San Francisco DA is using “blind sentencing” AI in criminal justice proceedings.

The same standards should be applied to existing hiring tools. Amazon was nationally lambasted for months due to its male-biased hiring algorithm. Yet in the United States today, employers are legally allowed to use traditional, biased assessments that discriminate against women or minorities. How can this be? Probably because most people are unaware that biased assessments are prominently used (and legal). If we are going to call for unbiased AI — which we absolutely should — we should also call for the elimination of all biased traditional assessments.

It is impossible to correct human bias, but it is demonstrably possible to identify and correct bias in AI. If we take critical steps to address the concerns that are being raised, we can truly harness technology to diversify the workplace.

Source: https://hbr.org/2019/10/using-ai-to-eliminate-bias-from-hiring

29 Oct 2019
What's Blockchain Actually Good for, Anyway? For Now, Not Much


Not long ago, blockchain technology was touted as a way to track tuna, bypass banks, and preserve property records. Reality has proved a much tougher challenge.

In early 2018, Amos Meiri got the kind of windfall many startup founders only dream of. Meiri’s company, Colu, develops digital currencies for cities—coupons, essentially, that encourage people to spend their money locally. The company was having some success with pilot projects in the UK and Israel, but Meiri had an idea for something bigger. He envisioned a global network of city currencies, linked together using blockchain technology. So he turned to a then-popular way to fund his idea: the initial coin offering, or ICO. Colu raised nearly $20 million selling a digital token it called CLN.

Now, Meiri is doing something unusual: giving the money back. After a year of regulatory and technical headaches, he stopped trying to fit blockchain into his business plan. He thinks other blockchain projects will follow suit.

It’s not unusual for startup efforts to fail or pivot when the product doesn’t work or the funding runs out. But blockchain has offered a wilder ride than most new technologies. Two years ago, ICOs like Meiri’s lured billions of dollars into blockchain companies and spawned a cottage industry of pilot projects. For a while, a blockchain seemed a salve for just about any problem: Fraudulent tuna. Unreliable health records. Homelessness. Remember WhopperCoin? Burger King’s crypto-for-burgers scheme, along with thousands of other projects, has long lost its sizzle. Many were scams from the start. But even among the more legitimate enterprises, there are relatively few winners. Enter, as a recent report from Gartner put it, “blockchain fatigue.”

“What you’re seeing right now is lethargy,” says Emin Gun Sirer, a professor of computer science at Cornell and founder of Ava Labs. “The current technologies fall really short.”

Bitcoin appears to be here to stay, even if the price has recently slumped. An entire industry has been built around holding and trading digital assets like it. But attempts to build more complex applications using blockchain are hobbled by the underlying technology. Blockchains offer an immutable ledger of data without relying on a central authority—that’s core to the hype behind the technology. But the cryptographic machinery behind blockchains is notoriously slow. Early platforms, like Ethereum, which gave rise to the ICO frenzy, are far too sluggish to handle most commercial applications. For that reason, “decentralized” projects represent a tiny portion of corporate blockchain efforts, perhaps 3 percent, says Apolline Blandin, a researcher at the Cambridge Centre for Alternative Finance.

The rest take shortcuts. So-called permissioned blockchains borrow ideas and terms from Bitcoin, but cut corners in the name of speed and simplicity. They retain central entities that control the data, doing away with the central innovation of blockchains. Blandin has a name for those projects: “blockchain memes.” Hype and lavish funding fueled many such projects. But often, the same applications could be built with less-splashy technology. As the buzzwords wear off, some have begun asking, what’s the point?

When Donna Kinville, the city clerk in South Burlington, Vermont, was approached by a startup that wanted to put the city’s land records on a blockchain, she was willing to listen. “We had the reputation of being ahead of things,” she says. The company, called Propy, had raised $15 million through an ICO in 2017 and forged Vermont connections, including lobbying for blockchain-friendly state legislation.

Propy pitched blockchain as a more secure way to handle land records. “It didn’t take long for them to say that they were overzealous,” Kinville says. She worked with Propy for about a year as it designed its platform and recorded the city’s historical data on the Ethereum blockchain. Propy also recorded one sale for the city, for a parcel of empty land whose owners weren’t in much of a rush.

Last month, Propy pitched Kinville a nearly finished product. She was uninspired. The system lacked practical features she uses all the time, like a simple way to link documents. She liked the software she uses now. It was built by an established company that was just a call away, in case anything fritzed.

“I’m having a hard time understanding how blockchain is going to really positively affect my citizens,” Kinville says. “Is it the speed of the blockchain? The security? Between faxes and emails, things get done just as quickly.” The city’s data is backed up on three servers; Kinville keeps a print copy, just in case. “We Vermonters are cautious. We like paper; you can always go back to it.” She sent Propy notes on how to improve its product, but doesn’t expect to buy it.

Natalia Karayaneva, Propy’s founder, says the land records platform is being tested in another Vermont town that didn’t have a computer system. But she acknowledges that privacy issues, as well as local rules and legacy computer systems, mean blockchain isn’t always a good fit for government. Propy is now focusing on an automated platform for realtors. It also uses blockchain, but the company doesn’t always trumpet it.

“In 2017, it was enough to have blockchain technology and everyone reaches out to you,” says Karayaneva. “But now working with traditional investors, we actually avoid the word blockchain in many of our materials.”

For a while, blockchain was seen as a panacea, says Andrew Stevens, a Gartner analyst who coauthored the “blockchain fatigue” study. Stevens’ team focused on projects that touted blockchain as a way to identify fraudulent and tainted goods in supply chains. They predicted 90 percent of those projects would eventually stall. Blockchain evangelizers were finding that supply chains were more complex than expected, and that blockchain offered no ready-made solutions. When it comes to mission-critical blockchain projects, “there are no deployments across any supply chains,” he says.

Read more: https://www.wired.com/story/whats-blockchain-good-for-not-much/

28 Oct 2019
SCIENTISTS SAY THEY FINALLY FIGURED OUT HOW TO SPOT WORMHOLES

Thinking With Portals

Scientists think they’ve come up with a way to detect traversable wormholes, assuming they exist.

There’s never been any sort of evidence that traversable wormholes — portals between two distant parts of the universe, or two universes within a theoretical multiverse — are real. But if they are, a team of scientists think they know what that evidence might look like, breathing new life into a far-out theory that could finally achieve faster-than-light travel.

Telltale Wobbles

If a wormhole were to exist, then the gravitational pull of objects on one side, like black holes or stars, would influence the objects on the other side.

If a star wobbled or had otherwise inexplicable perturbations in its orbit around a black hole, researchers could hypothetically argue that it's being influenced by the gravity of something on the other end of a wormhole, according to research published this month in the journal Physical Review D.

“If you have two stars, one on each side of the wormhole, the star on our side should feel the gravitational influence of the star that’s on the other side,” said University at Buffalo physicist Dejan Stojkovic. “The gravitational flux will go through the wormhole.”

Occam’s Razor

While the researchers hope to look for wobbles in the orbits of stars orbiting near Sagittarius A*, the supermassive black hole at the center of the Milky Way, Stojkovic concedes that spotting some wouldn’t guarantee that a wormhole exists there.

He added, "We cannot say that, 'Yes, this is definitely a wormhole.' There could be some other explanation, something else on our side perturbing the motion of this star."
 
27 Oct 2019
Big Tech Is Making A Massive Bet On AI … Here’s How Investors Can, Too

Artificial intelligence is becoming the future of everything. Yet, only a few large companies have the talent and the technology to perfect it.

That's the gist of a New York Times story published late last week. Rising costs for AI research are locking out university researchers and garage entrepreneurs, two of the traditional — and historically best — founts of innovation.

But it’s not all bad news for investors.

In the past, software engineers used code to build platforms and new business models. A prime example is Netflix.

Managers there transformed the mail-order DVD business into a digital media behemoth. They revolutionized how we view and interact with media. They also shook up traditional Hollywood studios by giving new and independent voices a huge platform.

AI promises a similar transformation: the companies with the best algorithms will start to solve the medical, economic and social problems that have vexed researchers and scientists for decades.

Investors need to understand that winners and losers are being determined right now as the cost of AI research becomes prohibitive.

Think of the research process as a set of increasingly complex math problems. Researchers throw enormous amounts of data at custom algorithms that learn through trial and error. As the number of simulations mounts, so do costs.

Big problems like self-driving cars or finding the cause of disease at the cellular level require immense amounts of computing power.

An August research report from the Allen Institute for Artificial Intelligence determined that the number of calculations required to perform cutting-edge AI research soared 300,000x over the course of the past six years.

Only a handful of companies have the resources to compete at that level.

Long ago, executives at Amazon.com, Microsoft, Alphabet and Facebook had the foresight to begin building massive cloud computing scale. Their data centers, many the size of football fields, are strewn all over the globe. Millions of servers, connected with undersea cables and fiber optic lines, have replaced the mainframes of old.

If you want to do great things in AI research, you’ll probably need to deal with at least one of these four big firms.

It’s a pinch being felt even by large technology companies …

Adobe and SAP joined an open data alliance with Microsoft in September 2018. A day later, salesforce.com hooked up with Amazon Web Services, Amazon's cloud computing arm.

There has been some effort to break up the concentration of power. But critics are still mostly focused on the wrong things. In their view, data is the new oil, and it begs for regulation.

In the early 1900s, oil was the lifeblood of industry. It was central to the development of new game-changing chemicals. It powered the nascent automobile and steel complex.

The oil barons were the gatekeepers to innovation. In the process, they amassed fantastic wealth, as did many other industrialists. Income inequality soared.

Eventually, this led to calls for regulation, and trust-busters were brought in to break up (and control) the oil giants.

The parallels to today are convenient, and lazy.

Writers at The Economist in 2017 painted a dystopian picture of our future — one where the tech giants remain unregulated. The influential finance magazine concluded antitrust regulators must step in to control the flow of data, just as they did with oil companies in the early 1900s.

However, data is not oil. It’s not dear. It’s abundant.

Thanks to inexpensive sensors and lightweight software, there is a gusher of digital information everywhere. It comes from our wrists, cars and television sets. Soon it will shoot out of traffic lights, buses and trains; mining pits, farm fields and factories.

The limited resource is computing power. Enterprises, governments and researchers will need to pay up if they want to turn their data into something of value.

McKinsey, a global research and consulting firm, argues unlocking data should be a strategic priority at every enterprise. Analysts predict data will change business models in every industry going forward.

The most important takeaway is that all future key AI breakthroughs are likely to come out of the big four. They have the technological and financial resources to attract talent. They have the scale to push the envelope.

It’s not a surprise that Amazon is leading in advanced robotics and language processing, or that Alphabet started developing self-driving cars in 2009.

Microsoft is building the biggest connected car platform in the world: Its engineers in Redmond, Wash., imagine a world of vehicle synchronization and the end of traffic.

Across town, Facebook researchers are working on augmented reality and brain computer interfaces.

These are big ideas with huge potential payoffs.

Amazon, Microsoft, Alphabet and Facebook are as important today as Standard Oil, Royal Dutch Shell and British Petroleum were a century ago.

Their resource is not oil, or data for that matter. It’s computing power. They’re leveraging that position to dominate AI research, the most important technology of the future.

For their investors, this is a good thing.

Growth investors should consider buying the stocks into any significant weakness. The story of AI is only getting started.

Source: https://www.forbes.com/sites/jonmarkman/2019/10/26/big-tech-is-making-a-massive-bet-on-ai–heres-how-investors-can-too/#a3cfea856d73

26 Oct 2019
Expert: VR Headsets Should Have Brain Interfaces

Brain-computer interfaces could make VR gaming way more immersive.

Mind Control
Virtual reality headsets are already pretty good at fooling our eyes and ears into thinking we’re in another world. And soon, we might be able to navigate that world with our thoughts alone.

Speaking at this year's Game Developers Conference in San Francisco, Mike Ambinder, in-house psychologist and researcher for game developer and distributor Valve, gave a talk on the exciting possibilities of adding brain-computer interfaces to VR headsets.

Personalized Gaming
The idea is to add non-invasive electroencephalogram (EEG) sensors to the insides of existing VR headsets. EEG readers detect the electrical signals firing in the brain and turn them into data points. And by analyzing that data, according to Ambinder, game designers could make games that respond differently depending on whether you're excited, happy, sad or bored.

“So think about adaptive enemies. What kinds of animals do you like playing against in gaming?” Ambinder said, as quoted by VentureBeat. “If we knew the answers to these questions, you could have the game give you more of a challenging time and less of the boring time.”

Game design could become almost perfectly tailored to the person wearing the VR headset — or even recreate a perfect representation of you inside a virtual world. Your avatar could perfectly mimic your current state of mind or mood.

“All of a sudden, we start becoming able to assess how you’re responding to various elements in game,” Ambinder continued. “We can make small changes to make big changes.”

Brain Extensions
There are a handful of companies already trying to harness brain signals for enhancing gaming experiences. A startup called Neurable is already testing out BCIs built into off-the-shelf VR headsets “to create a natural extension of our brains, creating new possibilities for human empowerment,” according to its website.

Of course, Ambinder's vision of the future of gaming is mostly a fun thought experiment at this stage. Even hospital-grade EEGs have to deal with a huge amount of noise — and that's especially the case for consumer-grade, non-invasive scanners that are neither affixed to the scalp nor surgically implanted.

Source: https://futurism.com/brain-computer-interface-vr-headsets