We love augmented reality, but let’s fix things that could become big problems
Augmented reality (AR) is still in its infancy, with a promising youth and adulthood ahead. It has already become one of the most exciting and dynamic technologies in development, and every day someone finds a novel way to reshape the real world with a new digital layer.
Over the past couple of decades, the Internet and smartphone revolutions have transformed our lives, and AR has the potential to be that big. We’re already seeing AR act as a catalyst for major change, driving advances in everything from industrial machines to consumer electronics. It’s also pushing new frontiers in education, entertainment, and health care.
But as with any new technology, there are inherent risks we should acknowledge, anticipate, and deal with as soon as possible. If we do so, these technologies are likely to continue to thrive. Some industry watchers are forecasting a combined AR/VR market value of $108 billion by 2021, as businesses of all sizes take advantage of AR to change the way their customers interact with the world around them in ways previously only possible in science fiction.
As wonderful as AR is and will continue to be, there are serious privacy and security pitfalls, including dangers to physical safety, that we as an industry need to collectively avoid. There are also ongoing threats from cybercriminals and nation-states bent on political chaos and worse, to say nothing of easily distracted teenagers who fail to exercise judgment, all of which create virtual landmines that could slow or even derail the success of AR. We love AR, and that’s why we’re calling out these issues now to raise awareness.

Without widespread familiarity with the potential pitfalls, as well as robust self-regulation, AR will not only suffer from systemic security issues but may also invite stringent government oversight that slows innovation, or even threatens existing First Amendment rights. In a climate where technology has come under attack on many fronts for unintended consequences and vulnerabilities, including Russian interference in the 2016 election and ever-growing incidents of hacking and malware, we should work together to make sure this doesn’t happen.
If anything causes government overreach in this area, it’ll likely be safety and privacy issues. An example of these concerns is shown in this dystopian video, in which a fictional engineer is able to manipulate both his own reality and that of others via retinal AR implants. Because AR by design blurs the divide between the digital and real worlds, threats to physical safety, job security, and digital identity can emerge in ways that were simply inconceivable in a world populated solely by traditional computers.
While far from exhaustive, the lists below present some of AR’s pitfalls, along with possible remedies. Think of these as a starting point, beginning with the pitfalls:
- AR can cause big identity and property problems: Catching Pokémon on a sidewalk or receiving a Valentine on a coffee cup at Starbucks is really just scratching the surface of AR’s capabilities. On a fundamental level, we could lose the power to control how people see us. Imagine a virtual, 21st-century equivalent of a sticky note with the words “kick me” stuck to some poor victim’s back. What if that note were digital, and the person couldn’t remove it? Even more seriously, AR could be used to create a digital doppelganger of someone doing something compromising or illegal. AR might also be used to add indelible graffiti to a house, business, sign, product, or art exhibit, raising serious property concerns.
- AR can threaten our privacy: Remember Google Glass and “Glassholes”? If a woman was physically confronted in a San Francisco dive bar just for wearing Google Glass (reportedly, other patrons did not appreciate her ability to capture the bar’s happenings on video), imagine what might happen with true AR. We may soon see the emergence of virtual dressing rooms, which would allow customers to try on clothing before purchasing online; similar technology could be used to overlay virtual nudity onto someone without their permission. With AR wearables, someone could also surreptitiously take pictures of another person and publish them in real time, along with geotagged metadata. There are clear points at which the problem moves from creepiness to harassment, and potentially to a safety concern.
- AR can cause physical harm: Although hacking bank accounts and IoT devices can wreak havoc, those events don’t often lead to physical harm. With AR, which by design is superimposed on the real world, this changes drastically. AR can increase distractions and make travel more hazardous, and as it becomes more common, over-reliance on AR navigation will leave consumers vulnerable to buggy or hacked GPS overlays that can manipulate drivers or pilots. For example, if a bus driver’s AR headset or heads-up display starts showing illusory deer on the road, that’s a clear physical danger to pedestrians, passengers, and other drivers.
- AR could launch disturbing career arms races: As AR advances, it can improve everything from individual productivity to worker data access, significantly impacting job performance. Eventually, workers with AR training and experience might be preferred over those without it, widening the gap between so-called digital elites and everyone else. More disturbingly, we might see an arms race in which a worker with eye implants, like those depicted in the video mentioned above, performs with higher productivity, creating a competitive advantage over those who haven’t had the surgery. The person in the next cubicle could then feel pressure to do the same just to remain competitive in the job market.
How can we address and resolve these challenges? Here are some initial suggestions and guidelines to help get the conversation started:
- Industry standards: Establish an AR governing body that would evaluate, debate, and then publish standards for developers to follow. Along with this, develop a centralized digital service, akin to air traffic control for AR, that classifies public, private, and commercial spaces and designates public areas as either safe or dangerous for AR use.
- A comprehensive feedback system: Communities should feel empowered to voice their concerns. A strong, responsive channel for reporting vendors that don’t comply with AR safety, privacy, and security standards will go a long way toward building consumer trust in next-gen AR products.
- Responsible AR development and investment: Entrepreneurs and investors need to care about these issues when developing and backing AR products. They should follow a basic moral compass and not simply chase dollars and market share.
- Guardrails for real-time AR screenshots: Rather than disallowing real-time AR screenshots entirely, control them through mechanisms such as geofencing. For example, an establishment such as a nightclub would set and publish its own rules, which hardware or software would then enforce; a rough sketch of how that enforcement might look appears after this list.
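To make the geofencing guardrail concrete, here is a minimal sketch of what venue-level enforcement might look like: venues publish a boundary polygon and a capture rule, and the device checks its location against those rules before permitting a real-time AR screenshot. This is purely illustrative; the names (VenuePolicy, capture_allowed) and the data model are assumptions for the sake of the example, not an existing AR platform API.

```python
# Hypothetical sketch only: VenuePolicy, point_in_polygon, and capture_allowed
# are illustrative names, not part of any real AR SDK or standard.

from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]  # (latitude, longitude)

@dataclass
class VenuePolicy:
    name: str
    boundary: List[Point]  # polygon vertices published by the venue
    allow_capture: bool    # the venue's rule for real-time AR screenshots

def point_in_polygon(p: Point, polygon: List[Point]) -> bool:
    """Standard ray-casting test: an odd number of edge crossings means inside."""
    x, y = p
    inside = False
    for i in range(len(polygon)):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % len(polygon)]
        # Does this edge cross the horizontal line through p, to the right of p?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def capture_allowed(location: Point, policies: List[VenuePolicy]) -> bool:
    """Deny AR capture inside any venue that has opted out; allow elsewhere."""
    return not any(
        point_in_polygon(location, policy.boundary) and not policy.allow_capture
        for policy in policies
    )

# Example: a nightclub publishes a no-capture geofence around its block.
nightclub = VenuePolicy(
    name="Example Nightclub",
    boundary=[(37.7750, -122.4195), (37.7750, -122.4185),
              (37.7742, -122.4185), (37.7742, -122.4195)],
    allow_capture=False,
)

print(capture_allowed((37.7746, -122.4190), [nightclub]))  # False: inside the geofence
print(capture_allowed((37.7760, -122.4190), [nightclub]))  # True: outside it
```

In practice, a check like this would have to live in the headset’s operating system or firmware rather than in an app, since an app-level check could be trivially bypassed, and venue policies would need to be signed and distributed by the kind of central registry described above.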
While ambitious companies focus on innovation, they must also be vigilant about the potential hazards of those breakthroughs. In the case of AR, working to proactively wrestle with the challenges around identity, privacy and security will help mitigate the biggest hurdles to the success of this exciting new technology.
Recognizing risks to consumer safety and privacy is only the first step toward resolving the long-term vulnerabilities that rapidly emerging technologies like AR create. Since AR blurs the line between the real world and the digital one, it’s imperative that we consider the repercussions of this technology alongside its compelling possibilities. As innovators, we have a duty to usher in new technologies responsibly and thoughtfully so that they improve society in ways that can’t also be abused; we need to anticipate problems and police ourselves. If we don’t safeguard our breakthroughs and the consumers who use them, someone else will.
Technical ignorance is not leadership
There is a peculiar pattern I have noticed among elites in the United States outside Silicon Valley: an almost boastful ignorance of technology. As my colleague Jon Shieber pointed out today, you can see that ignorance among congressmen throughout the whole Facebook/Cambridge Analytica saga. Our president has rarely sent an email and seems to confine his mobile phone activities to Twitter. One senior policymaker told me a few months ago that she doesn’t know how to turn on her computer.
Such a pattern is hardly unique to politics, though. Hang out with enough business executives, lawyers, doctors, or consultants, and you will hear the inevitable “I don’t really do the computer,” delivered with an air of detached disdain.
Yet it isn’t just the technical challenges that this class avoids, but anything to do with implementation in general. In the policy world, wonks spend decades debating the finer points of health care and social spending, only to be wholly ignorant of how their decisions are actually implemented in code. There is an elitism in policy between those who make the decisions and those who implement them, just as there is a social distinction between corporate executives and the people who have to carry out their directives.
In many ways, this disdain for the technical mirrors the disdain for math, where the phrase “I’m not a math person” has become sufficiently ubiquitous in the U.S. as to be covered regularly in the press. Being bad at math is a way to signal that someone isn’t one of the worker bees who actually have to care about calculations — they just read the reports prepared by others.
Yet that ignorance of technology is increasingly untenable. Decisions are only as good as the implementation that results. Marketing isn’t a plan; it’s a system of feedback loops from the market that need to be adjusted in real time. It’s one thing for politicians to sign a bill into law, but another to ensure that the bill’s intentions are actually encoded into the software that powers government.
The gap between decisions and their implementation was at the core of a conversation I had this past week with Jennifer Pahlka, who founded and heads Code for America, a nonprofit whose mission is to bridge the divide between government and technologists.
To show how far apart a policy and its implementation can be, she pointed me to Proposition 47 in California. That initiative, passed by voters in 2014, was designed to allow individuals to retroactively expunge or reclassify certain nonviolent felonies as misdemeanors, restoring their eligibility to work, vote, and receive some government benefits.
Yet, several years after the approval of Prop 47, only a single-digit percentage of eligible people have taken advantage of the program. The reason is classic government: incredibly convoluted paperwork, made worse by the fact that every one of California’s 58 counties has to implement the program independently. “If you are a voter and you voted for a specific referendum,” Pahlka explained, then you expect a certain outcome. But “if none of the benefits that you expected to change” materialize, then cynicism mounts quickly.
To help bridge the gap, Code for America launched Clear My Record, a service designed to automate many of the steps in the Prop 47 process and make it more accessible. It’s just one of several services the group has launched to improve government programs, from food assistance through GetCalFresh to case manager communication through ClientComm.
Pahlka’s mission isn’t just to offer point solutions for specific government programs, but to completely overhaul the latent anti-tech culture of government officials. “Digital competence is core to successful government,” she explained, and yet, “If you are a powerful person, you don’t have to understand how the digital world works … but what we are saying is that you do have to care.” Her goal is straightforward: “how do you get policy, operations, and tech to all work together?”
While Pahlka and her organization focus on the public sector, their framework is perhaps even more important to the private sector. There isn’t a company today that can survive without technical leadership in the C-suite, and yet we still see an astonishing lack of awareness about the internet and its potential among corporate executives. Software increasingly intermediates all relationships with customers, whether through digital commerce or enterprise services. If the software is bad, no amount of decision-making in a mahogany-paneled boardroom is going to change it.
The good news is that ignorance has an easy solution: education. The computer is not some mystery box. It is well documented, and all kinds of resources are available for learning how computers work and how to think about their capabilities and nuances. If someone can run a multinational company, they can probably ask smart questions about algorithms or machine learning even if they never implement the linear algebra themselves.
CEOs, senators, and other leaders are synthesizers: they rely on staff to handle the details so they can focus on strategy. We would never trust a CEO who brushed off an accountant by saying “I don’t do cash flows,” and we shouldn’t trust one who doesn’t understand how the internet works. Changing times require adaptable leaders, and today those leaders need tech literacy just as much as our grade-school children do. It’s the only way leadership can move forward.
Daniel Jones is said to have left Global Founders Capital to ‘raise his own fund’
Global Founders Capital, the venture capital arm of Rocket Internet, has seen a number of its London investment team leave over the last couple of years, but the most significant departure may have only just happened.
According to multiple sources, Daniel Jones, General Partner at GFC, has left the VC firm and is thought to be planning to raise his own fund. A spokesperson for GFC declined to comment on the record when asked to confirm he is no longer at GFC. Jones couldn’t be reached for comment at the time of publication.
Rumours of Jones’ departure began circulating in late March, and sometime in April portfolio companies were informed by GFC about changes in the U.K. team, specifically that he was leaving. Separately, I understand from several sources that the reason being given by GFC is that Jones has decided to take up the challenge of raising a fund of his own.
Perhaps as a sign of how depleted the GFC London team is right now (in the last two years, the firm has lost associate Julien Bek to Accel, associate Julia Morrongiello to Point Nine Capital, and principal Nicholas Stocks to White Star Capital), a number of portfolio companies are being told that Rocket Internet co-founder and CEO Oli Samwer is to be their main contact going forward. He is primarily being supported by GFC Partner Levin Bunz, according to a person familiar with the matter.
Meanwhile, Jones’ exit from GFC is bound to be a loss to the U.K. tech startup scene, even if he does eventually go on to raise his own fund. He was and remains a popular figure amongst GFC portfolio companies, and as a General Partner he was thought to have the ear of Samwer, making him an influential figure at both GFC and Rocket Internet.
As one source with knowledge of how GFC operated in the U.K. put it: “Daniel Jones was the most important non-Samwer at Global Founders. He constituted at least half of the decision-making and the majority of the legwork on every term sheet GFC issued.”
According to his LinkedIn profile, Jones’ U.K. GFC investments include Goodlord, Echo and Nested. In the last few years, the stage-agnostic VC firm has also backed U.K. startups Quiqup, OpenRent, and HomeTouch, amongst others.