Report: Police are now asking Google for data about all mobile devices close to certain crimes

According to a new report from Raleigh, N.C. television affiliate WRAL, Google might have quietly helped local detectives in their pursuit of two gunmen who committed separate crimes roughly one and a half years apart. How? According to the story, Raleigh police presented the company with warrants not for information about specific suspects but rather for data from all the mobile devices that were within a certain distance of the respective crime scenes at the time the crimes were committed.

In one of its homicide cases, Raleigh police reportedly asked Google to provide unique data for anyone within a 17-acre area that includes both homes and businesses. In the other, it asked for user data across “dozens” of apartment units at a particular complex.

As the outlet notes, most modern phones, tablets and laptops have built-in location tracking that pings some combination of GPS, Wi-Fi and mobile networks to determine each device’s position. Users can switch off location tracking, but if they’re using a cellular network or relying on Wi-Fi to connect, their devices are still transmitting their coordinates to third parties.
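To make concrete what an area-based request implies on the data side: once a provider has a log of timestamped device locations, pulling every device that was inside a given radius of a point during a time window is a simple great-circle-distance filter. The sketch below is purely illustrative; the record layout and function names are invented, not Google’s.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def devices_near(pings, lat, lon, radius_m, t_start, t_end):
    """Return IDs of devices that pinged inside the area during the window.

    `pings` is a list of (device_id, timestamp, lat, lon) records.
    """
    return {
        dev for dev, t, plat, plon in pings
        if t_start <= t <= t_end and haversine_m(lat, lon, plat, plon) <= radius_m
    }
```

The point of the sketch is how coarse the selection is: anyone whose device reported a position inside the circle during the window is swept in, regardless of any connection to the crime.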

Google hasn’t responded to a request for more information we sent earlier today. But in response to WRAL’s investigation, a company spokesman declined to comment on specific cases or discuss whether Google has fought requests from the Raleigh investigators, saying only: “We have a long-established process that determines how law enforcement may request data about our users. We carefully review each request and always push back when they are overly broad.”

According to a Raleigh Police Department spokesperson, the requested account data was not limited to devices running Google’s Android operating system but rather all devices running any kind of location-enabled Google app. The department began using the tactic after learning about a similar search warrant in California’s Orange County, said this spokesperson.

Meanwhile, a Wake County district attorney tells WRAL that the data investigators have sought from Google contains only anonymized account numbers without any content included, though it sounds from her comments as though Google has been compliant in supplying further information when compelled to do so.

“We’re not getting text messages or emails or phone calls without having to go through a different process and having additional information that might lead us to a specific individual,” she tells WRAL.

Google says that in recent years, it’s been receiving disclosure requests for between 75,000 and 80,000 users every six months. As of January 2017, which is the last time it publicly updated its transparency report about such things, it says it produced data roughly 65 percent of the time that it was asked to do so.

Google doesn’t publicly disclose what kind of data it provides to governmental and other authorities. Further, in cases where it does hand over data, it may be under court order not to identify the individuals impacted.

Either way, the area-based search warrants that Raleigh detectives have sought seem to be a newer trend — one that will undoubtedly concern Fourth Amendment advocates anew. For one thing, in addition to potentially violating the privacy of Google users and subjecting them to unreasonable searches, one can imagine people being wrongly accused by sheer dint of being tied to a murder scene via cell phone location records.

In fact, it has happened already.

It’s also easy to imagine that someone with nefarious designs might simply leave his or her cell phone behind. Indeed, according to WRAL’s investigation, in two separate cases where Raleigh investigators presented Google with area-based search warrants — one involving a fire and another sexual battery — there was no evidence that either the arsonist or attacker had a cell phone.

Facebook and the endless string of worst-case scenarios

Facebook has naively put its faith in humanity and repeatedly been abused, exploited, and proven either negligent or complicit. The company routinely ignores or downplays the worst-case scenarios, idealistically building products without the necessary safeguards, then dragging its feet when it comes to admitting the extent of the problems.

This approach, willful or not, has led to its latest scandal, where a previously available API for app developers was harnessed by Trump and Brexit Leave campaign technology provider Cambridge Analytica to pull not just the profile data of 270,000 app users who gave express permission, but of 50 million of those people’s unwitting friends.

Facebook famously changed its motto in 2014 from “Move fast and break things” to “Move fast with stable infra” (“infra” as in infrastructure). But all that’s meant is that Facebook’s products function as coded even at enormous scale, not that they’re built any slower or with more caution for how they could be weaponized. Facebook’s platform iconography captures how it only sees the wrench, then gets shocked by the lightning on the other end.

Sometimes the abuse is natural and emergent, as when people grow envious and insecure from following the highlights of their peers’ lives through the News Feed that was meant to bring people together. Sometimes the abuse is malicious and opportunistic, as it was when Cambridge Analytica used an API designed to help people recommend relevant job openings to friends to purposefully harvest data that populated psychographic profiles of voters so they could be swayed with targeted messaging.

NEW YORK, NY – SEPTEMBER 19: CEO of Cambridge Analytica Alexander Nix speaks at the 2016 Concordia Summit – Day 1 at Grand Hyatt New York on September 19, 2016 in New York City. (Photo by Bryan Bedder/Getty Images for Concordia Summit)

Whether it doesn’t see the disasters coming, makes a calculated gamble that the growth or mission benefits of something will far outweigh the risks, or purposefully makes a dangerous decision while obscuring the consequences, Facebook is responsible for its significant shortcomings. The company has historically cut corners in pursuit of ubiquity that left it, potentially knowingly, vulnerable to exploitation.

And increasingly, Facebook is going to lengths to fight the news cycle surrounding its controversies instead of owning up early and getting to work. Facebook had known about Cambridge Analytica’s data policy violations since at least August 2016, but did nothing but send a legal notice to delete the information. It only suspended the Facebook accounts of Cambridge Analytica and other guilty parties and announced the move this week in hopes of muting forthcoming New York Times and Guardian articles about the issue (articles it also tried to prevent from running via legal threats). And since, representatives of the company have quibbled with reporters over Twitter, describing the data misuse as a “breach” instead of explaining why it didn’t inform the public about it for years.

“I have more fear in my life that we aren’t going to maximize the opportunity that we have than that we mess something up,” Zuckerberg said at Facebook’s Social Good Forum event in November. Perhaps it’s time for that fear to shift towards ‘what could go wrong’, not just for Zuck, but for the leaders of all of today’s tech titans.

Facebook CEO Mark Zuckerberg

An Abridged List Of Facebook’s Unforeseen Consequences

Here’s an incomplete list of the massive negative consequences and specific abuses that stem from Facebook’s idealistic product development process:

  • Engagement Ranked Feed = Sensationalized Fake News – Facebook built the News Feed to show the most relevant content first so we’d see the most interesting things going on with our closest friends, but measured that relevance largely based on what people commented on, liked, clicked, shared, and watched. All of those activities are stoked by sensationalist fake news stories, allowing slews of them to go viral while their authors earned ad revenue and financed their operations with ad views delivered by Facebook referral traffic. Facebook downplayed the problem until it finally fessed up and is now scrambling to fight fake news.
  • Engagement Priced Ad Auctions = Polarizing Ads – Facebook gives a discount to ads that are engaging so as to incentivize businesses to produce marketing materials that don’t bore or annoy users such that they close the social network. But the Trump campaign designed purposefully divisive and polarizing ads that would engage a niche base of his supporters to try to score cheaper ad clicks and more free viral sharing of those ads.
  • Academic Research = Emotion Tampering – Facebook allows teams of internal and external researchers to conduct studies on its users in hopes of producing academic breakthroughs in sociology. But in some cases these studies have moved from observation into quietly interfering with the mental conditions of Facebookers. In 2012, Facebook data science team members manipulated the number of emotionally positive or negative posts in the feeds of 689,000 users and then studied their subsequent status updates to see if emotion was contagious. Facebook published the research, failing to foresee the huge uproar that ensued when the public learned that some users, including emotionally vulnerable teenagers who could have been suffering from depression, were deliberately shown sadder posts.
  • Ethnic Affinity Ad Targeting = Racist Exclusion – Facebook’s ad system previously let businesses target users in “ethnic affinity” groups such as “African-American” or “Hispanic” based on their in-app behavior as a stand-in for racial targeting. The idea was likely to help businesses find customers interested in their products, but the tool was shown to allow exclusion of certain ethnic affinity groups in ways that could be used to shut them out of legally protected opportunities such as housing, employment, and loans. Facebook has since disabled this kind of targeting while it investigates the situation.

    Exclusionary ethnic affinity ad targeting, as spotted by ProPublica

  • App Platform = Game Spam – One of Facebook’s earliest encounters with unforeseen consequences came in 2009 and 2010 after it launched its app platform. The company expected developers to build helpful utilities that could go viral thanks to special, sometimes automatic posts to the News Feed. But game developers seized on the platform and its viral growth channels, spawning companies like Zynga that turned optimizing News Feed game spam into a science. The constant invites to join games in order to help a friend win overwhelmed the feed, threatening to drown out legitimate communication and ruin the experience for non-gamers until Facebook shut down the viral growth channels, cratering many of the game developers.
  • Real Name Policy = Enabling Stalkers – For years, Facebook strictly required users to use their real names in order to reduce the incivility and bullying facilitated by hiding behind anonymity. But victims of stalking, domestic violence, and hate crimes argued that their abusers could use Facebook to track them down and harass them. Only after mounting criticism from the transgender community and others did Facebook slightly relax the policy in 2015, though some still find it onerous to set up a pseudonym on Facebook and dangerous to network without one.
  • Self-Serve Ads = Objectionable Ads – To earn money efficiently, Facebook lets people buy ads through its apps without ever talking to a sales representative. But the self-serve ads interface has repeatedly been shown to be used nefariously. ProPublica found businesses could target those who followed objectionable user-generated Pages and interests such as “jew haters” and other disturbing keywords on Facebook. And Russian political operatives famously used Facebook ads to spread divisive memes in the United States, pitting people against each other and promoting distrust between citizens. Facebook is only now shutting down long-tail user-generated ad targeting parameters, hiring more ad moderators, and requiring more thorough political ad buyer documentation.
  • Developer Data Access = Data Abuse – Most recently, Facebook has found its trust in app developers misplaced. For years it offered an API that allowed app makers to pull robust profile data on their users and somewhat limited info about their friends to make personalized products. For example, one could show which bands your friends Like so you’d know who to invite to a concert. But Facebook lacked strong enforcement mechanisms for its policy that prevented developers from sharing or selling that data to others. Now the public is learning that Cambridge Analytica’s trick of turning 270,000 users of Dr. Aleksandr Kogan’s personality quiz app into info about 50 million people illicitly powered psychographic profiles that helped Trump and Brexit pinpoint their campaign messages. It’s quite likely that other developers have violated Facebook’s flimsy policies against storing, selling, or sharing user data they’ve collected, and more reports of misuse will emerge.
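To make the first item on that list concrete, here’s a toy version of an engagement-ranked feed. The weights and field names are invented for illustration; the structural point is that a scorer built purely from reactions has no term that rewards accuracy, so sensational material wins by construction.

```python
def engagement_score(post):
    """Toy relevance score in the spirit of an engagement-ranked feed.

    Weights and field names are made up; note that nothing here
    measures whether a post is true, only how people reacted to it.
    """
    return (
        3.0 * post["shares"]
        + 2.0 * post["comments"]
        + 1.0 * post["likes"]
        + 0.5 * post["clicks"]
    )

def rank_feed(posts):
    """Order posts by engagement, highest first."""
    return sorted(posts, key=engagement_score, reverse=True)
```

Under this kind of objective, a fabricated story that provokes shares and comments will reliably outrank a sober, accurate one.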

Each time, Facebook built tools with rosy expectations, only to negligently leave the safety off and see worst-case scenarios arise. In October, Zuckerberg already asked for forgiveness, but the public wants change.

Trading Kool-Aid For Contrarians

The desire to avoid censorship or partisanship or inefficiency is no excuse. Perhaps people are so addicted to Facebook that no backlash will pry them from their feeds. But Facebook can’t treat this as merely a PR problem, a distraction from the fun work of building new social features, unless its employees are ready to shoulder the blame for the erosion of society. Each scandal further proves it can’t police itself, inviting government regulation that could gum up its business. Members of Congress are already calling on Zuckerberg to testify.

Yet even with all of the public backlash and calls for regulation, Facebook still seems to lack or ignore the cynics and diverse voices who might foresee how its products could be perverted or were conceptualized foolishly in the first place. Having more minorities and contrarians on the teams that conceive its products could nip troubles in the bud before they blossom.

“The saying goes that optimists tend to be successful and pessimists tend to be right,” Zuckerberg explained at the November forum. “If you think something is going to be terrible and it is going to fail, then you are going to look for the data points that prove you right and you will find them. That is what pessimists do. But if you think that something is possible, then you are going to try to find a way to make it work. And even when you make mistakes along the way and even when people doubt you, you are going to keep pushing until you find a way to make it happen.”

Zuckerberg speaks at Facebook’s Social Good Forum

That quote takes on new light given Facebook’s history. The company must promote a culture where pessimists can speak up without reprisal, where seeking a raise, reaching milestones, avoiding culpability, or a desire to avoid rocking the Kool-Aid boat doesn’t stifle discussion of a product’s potential hazards. Facebook’s can-do hacker culture, which throws caution to the wind and asks for forgiveness instead of permission, is failing to scale to the responsibility of being a two-billion-user communications institution.

And our species is failing to scale to that level of digital congregation too, stymied by our insecurity and greed. Whether someone is putting themselves down for not having as glamorous a vacation as their acquaintances, or seizing the world’s megaphone to spew lies in hopes of impeding democracy, we’ve proven incapable of safe social networking.

That’s why we’re relying on Facebook and the other social networks to change, and why it’s so catastrophic when they miss the festering problems, ignore the calls for reform, or try to hide their complicity. To connect the world, Facebook must foresee its ugliness and proactively rise against it.

For more on Facebook’s non-stop scandals, check out these TechCrunch feature pieces:

Facebook has suspended the account of the whistleblower who exposed Cambridge Analytica

Tech hath no fury like a multi-billion dollar social media giant scorned.

In the latest turn of the developing scandal around how Facebook’s user data wound up in the hands of Cambridge Analytica — for use in the development of psychographic profiles that may or may not have played a part in the election victory of Donald Trump — the company has taken the unusual step of suspending the account of the whistleblower who helped expose the issues.

Suspended by @facebook. For blowing the whistle. On something they have known privately for 2 years.

— Christopher Wylie (@chrisinsilico) March 18, 2018

In a fantastic profile in The Guardian, Wylie revealed himself to be the architect of the technology that Cambridge Analytica used to develop targeted advertising strategies that arguably helped sway the U.S. presidential election.

A self-described gay, Canadian vegan, Wylie eventually became — as he told The Guardian — the developer of “Steve Bannon’s psychological warfare mindfuck tool.”

The goal, as The Guardian reported, was to combine social media’s reach with big data analytical tools to create psychographic profiles that could then be manipulated in what Bannon and Cambridge Analytica investor Robert Mercer allegedly referred to as a military-style psychological operations campaign — targeting U.S. voters.

In a series of tweets late Saturday, Wylie’s former employer, Cambridge Analytica, took issue with Wylie’s characterization of events (and much of the reporting around the stories from The Times and The Guardian).

We told @nytimes & @guardian that Mr. Wylie was a contractor for CA. He was not a founder.

— Cambridge Analytica (@CamAnalytica) March 17, 2018

Meanwhile, Cadwalladr noted on Twitter earlier today that she’d received a phone call from the aggrieved whistleblower.

Plaintive phone call from Chris: he's also banned from WhatsApp.
And – outraged voice! – Instagram.
"But how am I going to curate my online identity?" he says.
The Millennials' first great whistleblower? And @facebook hitting him where it hurts

— Carole Cadwalladr (@carolecadwalla) March 18, 2018

Facebook has since weighed in with a statement of its own, telling media outlets:

“Mr. Wylie has refused to cooperate with us until we lift the suspension on his account. Given he said he ‘exploited Facebook to harvest millions of people’s profiles,’ we cannot do this at this time.

“We are in the process of conducting a comprehensive internal and external review as we work to determine the accuracy of the claims that the Facebook data in question still exists. That is where our focus lies as we remain committed to vigorously enforcing our policies to protect people’s information.”

Wing It is a Facebook Messenger bot meant to get you out of the house

“I should go on a weekend trip,” you think to yourself. “I’ll go to the mountains!”

And then the weekend comes and all the hotels are booked and you’re tired and the mountains are far and hey look, Netflix!

Wing It is a Facebook Messenger bot that tries to get you out of that rut. You punch in your criteria, and it’ll pop up every once in a while when it finds trips that fit the bill, recommending accommodations and an activity or two in the area.

Wing It asks just a few questions off the bat: Where do you live? How far do you want to go? Is it just you and a partner, or a big group of friends? How much is each person looking to spend?

A few minutes later, it’ll respond with a short list: a few places to stay and some things you might want to do while there. Right now, that’s mostly hikes and trails; eventually, the Wing It team hopes to expand their knowledge base out to things like kayaking trips, rock climbing, or nearby surf spots.

Wing It focuses on things that are far enough away to feel like a vacation, but close enough to do on a whim. While early iterations of the bot tried to offer up trips involving last minute flights, they’ve since learned to focus on things within driving distance. “People would say they’d go on a last minute flight… 98% of people would say ‘I’m in!’. Then you send’em that, and give them the option to book it, and… nothing.”
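Mechanically, matching a user’s onboarding answers against candidate trips is, at its simplest, a filter over drive time and budget. The sketch below uses invented field names and is not Wing It’s actual code; it just illustrates the kind of narrowing the bot does before surfacing its short list.

```python
def matching_trips(trips, max_drive_hours, budget_per_person):
    """Filter candidate trips against a user's stated criteria.

    `trips` is a list of dicts with hypothetical fields:
    `drive_hours` from the user's home and `cost_per_person`
    for lodging. Only trips within reach and budget survive.
    """
    return [
        t for t in trips
        if t["drive_hours"] <= max_drive_hours
        and t["cost_per_person"] <= budget_per_person
    ]
```

Hard filters like these are what let the bot come back with a handful of options instead of the million-result wall you’d face on a booking site.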

The underlying concept of a trip planning search engine is by no means new, but there’s something nice about the way it all fits together here. Too many weekend trips die in the planning phases — that moment when you dive into Airbnb and drown under a million options and decide to do it later. Wing It boils it all down to a handful of choices based on what it already knows you’re looking for.

That slimmed down and curated offering is what Wing It’s co-founders, Luis De Pombo and Gabriel Ascanio, are going for. After meeting in school, they started working together on side projects on the weekends. They’d try to mix up the scenery by traveling to new locales on the weekends, only to spend half their time just searching for the right place.

So why a Messenger bot? “Because of the ease of reaching people,” Gabriel tells me. There’s no app to download – you just start a conversation with the bot. Meanwhile, the team can iterate on their concept almost instantly. If the data shows users are liking a feature, they can play it up with no need for a downloaded update; if they change something and usage tanks, they can reverse course on the fly.

Wing It is part of Y Combinator’s Winter 2018 class, and has yet to raise funds beyond that.

Regulators in the UK are also calling for more hearings into Facebook and Cambridge Analytica

As more details emerge about Cambridge Analytica’s use of Facebook data in the U.S. presidential election, members of Parliament in the UK are joining congressional leadership in the U.S. to call for a deeper investigation and potential regulatory action.

The chair of the parliamentary committee investigating “fake news,” the Conservative MP Damian Collins, accused both Cambridge Analytica and Facebook of misleading his committee’s investigation in a statement early Sunday morning, indicating that both companies would be called in for more questioning.

“Alexander Nix denied to the Committee last month that his company had received any data from the Global Science Research company (GSR). From the evidence that has been published by The Guardian and The Observer this weekend, it seems clear that he has deliberately misled the Committee and Parliament by giving false statements,” Collins wrote in a statement to the press. “We will be contacting Alexander Nix next week asking him to explain his comments, and answer further questions relating to the links between GSR and Cambridge Analytica, and its associate companies.”

On Friday, Facebook announced that it had suspended the account of Cambridge Analytica for violating the social media company’s terms and conditions by obtaining user data from a third party source without users’ permissions.

The announcement, made late Friday night, was designed to preempt reports published by The New York Times and The Guardian that would have exposed the fact that Cambridge Analytica had obtained information on 50 million Facebook users — and that Facebook had known about the improper availability of that user data for two years.

The use or abuse of that data by Cambridge Analytica, in work that it had done with Donald Trump’s campaign for President in 2016 and potentially for other businesses in the run-up to the election, is at the heart of the scandal.

Before essentially verifying the accuracy of the story, Facebook had threatened both The Times and The Guardian with legal action to try to kill it.

So. One day ahead of publication, Squire Patton & Boggs, lawyers for Cambridge Analytica, drop @guardian a line….

— Carole Cadwalladr (@carolecadwalla) May 14, 2017

The company’s response to the reports isn’t impressing anyone — and could land more than just its chief counsel in the hot seat.

Facebook Chief Legal Officer Colin Stretch

“We have repeatedly asked Facebook about how companies acquire and hold on to user data from their site, and in particular whether data had been taken from people without their consent. Their answers have consistently understated this risk, and have also been misleading to the Committee,” Collins wrote.

He went on to accuse Facebook of deliberately avoiding “answering straight questions from the committee” and of failing to supply the Committee with evidence relating to “the relationship between Facebook and Cambridge Analytica,” evidence that had been promised when members of Parliament went to Washington to quiz Facebook about its role in various political campaigns in the UK.

“I will be writing to Mark Zuckerberg asking that either he, or another senior executive from the company, appear to give evidence in front of the Committee as part of our inquiry. It is not acceptable that they have previously sent witnesses who seek to avoid answering difficult questions by claiming not to know the answers. This also creates a false reassurance that Facebook’s stated policies are always robust and effectively policed,” Collins wrote.

“We need to hear from people who can speak about Facebook from a position of authority that requires them to know the truth. The reputation of this company is being damaged by stealth, because of their constant failure to respond with clarity and authority to the questions of genuine public interest that are being directed to them. Someone has to take responsibility for this. It’s time for Mark Zuckerberg to stop hiding behind his Facebook page.”

Aalo is do-it-yourself, customizable, re-purposable furniture

Buying furniture sucks. Getting rid of it later is worse.

Aalo, part of the Y Combinator Winter 2018 class, is trying to fix both sides of that equation. They want you to design and build your own furniture… and when you’re done with it, turn it into something else. They’ve built a system of interlocking, interchangeable parts which you can use to build their designs or create your own.

“Furniture,” here, mostly means things to sit your stuff on — not, at this point, stuff you sit on. Think bookshelves, tables, and shoe racks — not couches, beds, and chairs just yet (though people have built benches with it).

Some examples:

The system is currently made up of around ten different components, from different lengths of beams to different types of connectors and mounts. Furniture can be ordered in pre-arranged kits — but if you’re feeling creative, each component can be ordered individually.

Each piece is powder-coated aluminum in white or black, super strong, and snaps together with just an Allen wrench. Here’s a quick GIF of the company’s founder, Sejun Park, building a headphone stand at our office (sped up for the sake of file size – actual assembly time was ~20 seconds):

Aalo was born out of a good ol’ failed DIY attempt. Sejun bought a shelf for his new apartment from a nearby big box store, but the one he liked best was a bit too long for his wall. He busted out his hacksaw and started cutting away at the wood to slim it down… only to realize that it wasn’t really wood at all, but a thin wood veneer wrapped around a cardboard core. Previously a manufacturing engineer for Toyota/Lexus, he realized there had to be a better way.

The price of anything you build varies based on the components involved. It tends to work out a bit pricier than stuff you’d find at Ikea or Target, but less than what you might find in a designer store. That headphone stand above would cost about $35, for example; their design for a shoe rack, meanwhile, goes for $80.

I love the idea of taking an old piece of furniture and turning it into something new, and that it’s built into the core of this whole system. Tired of your TV stand? Break it down, turn it into a bike rack. Don’t want that table anymore? Tear it apart, order a few small pieces, and turn it into a couple plant stands.

Sejun tells me that in time, the system should be able to look at the components you own and recommend other things you might build from them — tapping a community-driven catalog of creations, perhaps — before shipping you only the parts you need.

Phlur, a fragrance startup launched by a former Ralph Lauren exec, is raising fresh funding

There’s no shortage of ideas being backed when it comes to direct-to-consumer e-commerce companies that are cultivating their own brands. We’ve seen everything from slippers to toothbrushes to, perhaps most famously, razor blades.

Among the newer frontiers being funded right now: ingredient-conscious perfumes. For example, the New York-based, venture-backed cosmetics company Glossier began marketing a proprietary perfume called You last October that’s designed to change in character depending on the wearer. (“You” complete the product, it says.)

Late last year, an L.A.-based company called Skylar, which uses only natural ingredients, also attracted venture funding: $3 million from Upfront Ventures and serial entrepreneur Brian Lee, who also founded The Honest Company. (Skylar’s founder previously worked at Honest.)

Now another new entrant, Austin, Tex.-based Phlur, appears to be shaking the trees for venture capital. The company — which was launched publicly less than two years ago by Eric Korman, a former president of global e-commerce for Ralph Lauren — is targeting up to $8 million in venture funding, according to an SEC filing that shows it has raised at least $2.4 million toward that end. Among its backers is local venture firm Next Coast Ventures.

The money follows $6 million that Phlur has already raised, including from Next Coast, for what it describes as scents for both men and women that are made with “responsibly sourced” ingredients.

Its packaging is also environmentally friendly, it says; it’s made with 20 percent recycled glass.

We reached out to Korman yesterday to learn more and we’ll update this post if we hear back. But certainly, it’s easy to understand why consumers might appreciate companies that promise that they needn’t visit a fragrance counter ever again.

It’s easy to appreciate investor enthusiasm for perfumes, too. Three giants — L’Oréal Groupe, Coty, and Estée Lauder — still make up the bulk of fragrance sales, and millennials are looking for new options that don’t necessarily remind them of their parents. There’s been a spate of M&A in the beauty sector — though not yet in the fragrance sector, meaning there’s still opportunity there. Not least, the beauty industry is a very big business, with one estimate projecting that the global fragrance market alone will be worth about $92 billion by 2024.

These new brands are simply playing into a years-long trend of consumers caring much more about everything that touches them, from their food to their house-cleaning products. As startups provide them with more transparency into how fragrances are made — and at far less cost than companies that pay for counter space at retail stores — expect to see many more next-gen fragrances, as well.

What it’s like using the Owl car security camera

When you get a new car, and you’re feeling like a star, the first thing you’re probably going to do is ghost ride it. This is where the Owl camera can come in.

For the last couple of weeks, I’ve been testing Owl, an always-on, two-way camera that records everything happening inside and outside of your car, all day, every day.

The Owl camera is designed to monitor your car for break-ins, collisions and police stops. Owl can also be used to capture fun moments (see above) on the road or beautiful scenery, simply by saying, ‘Ok, presto.’

If Owl senses a car accident, it automatically saves the video to your phone, including the 10 seconds before and after the accident. Also, if someone is attracted to your car because of the camera and its blinking green light, and proceeds to steal the device, Owl will give you another one.
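Saving the 10 seconds *before* a crash implies the camera keeps a continuous rolling buffer and snapshots it when a trigger fires. Owl hasn’t published its implementation; the sketch below just shows the general pattern such a feature relies on, with invented names throughout.

```python
from collections import deque

class RollingClip:
    """Keep the last `pre_s` seconds of frames; when a trigger fires,
    freeze them and keep recording `post_s` more seconds of footage
    to form one saved clip."""

    def __init__(self, fps, pre_s=10, post_s=10):
        self.pre = deque(maxlen=pre_s * fps)  # rolling pre-event window
        self.post_frames = post_s * fps
        self.clip = None       # clip under construction after a trigger
        self.remaining = 0     # post-trigger frames still to record

    def add_frame(self, frame):
        """Feed one frame; returns a finished clip, or None."""
        self.pre.append(frame)
        if self.remaining > 0:
            self.clip.append(frame)
            self.remaining -= 1
            if self.remaining == 0:
                done, self.clip = self.clip, None
                return done    # clip complete: hand it off for saving
        return None

    def trigger(self):
        """Call when, say, an accelerometer detects a collision."""
        self.clip = list(self.pre)          # the seconds before the event
        self.remaining = self.post_frames   # plus the seconds after it
```

The `deque` with a `maxlen` silently discards the oldest frames, which is what lets an always-on camera run indefinitely without filling its storage.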

For 24 hours, you can view your driving and any other incidents that happened during the day. You can also, of course, save footage to your phone so you can watch it after 24 hours.

Setting it up

The two-way camera plugs into your car’s on-board diagnostics (OBD) port (every car built after 1996 has one) and takes just a few minutes to set up. The camera tucks right in between the dashboard and windshield. Once it’s hooked up, you can access your car’s camera anytime via the Owl mobile app.

I was a bit skeptical about the ease with which I’d be able to install the camera, but it was actually pretty easy. From opening the box to getting the camera up and running, it took less than ten minutes.

Accessing the footage

This is where it can get a little tricky. If you want to save footage after the fact, Owl requires that you be physically near the camera. That meant I had to put on real clothes and walk outside to my car to connect to the Owl’s Wi-Fi and access the footage from the past 24 hours. Eventually, however, Owl says it will be possible to access that footage over LTE.

But that wasn’t my only qualm with footage access. Once I tried to download the footage, the app would often crash or only download a portion of the footage I requested. This, however, should be easily fixable, given Owl is set up for over-the-air updates. In fact, Owl told me the company is aware of that issue and is releasing a fix this week. If I want to see the live footage, though, that’s easy to access.


Owl is set up to let you know if and when something happens to your car while you’re not there. My Owl’s out-of-the-box settings were set to high sensitivity, which meant I received notifications if a car simply drove by. Changing the settings to a lower sensitivity fixed the annoyance of too many notifications.

Since installing the Owl camera, there hasn’t been a situation in which I was notified of any nefarious behavior happening in or around my car. But I do rest assured knowing that if something does happen, I’ll be notified right away and will be able to see live footage of whatever it is that’s happening.

My understanding is that most of the dash cams on the market aren’t set up to give you 24/7 video access, nor are they designed to be updatable over the air. The best-selling dash cam on Amazon, for example, is a one-way facing camera with collision detection, but it’s not always on. That one retails for about $100 while Amazon’s Choice is one that costs just $47.99, and comes with Wi-Fi to enable real-time viewing and video playback.

Owl is much more expensive than its competition, retailing at $299, with LTE service offered at $10 per month. Currently, Owl is only available as a bundle for $349, which includes one year of the LTE service.

Unlike Owl’s competition, however, the device is always on, due to the fact it plugs into your car’s OBD port. That’s the main, most attractive differentiator for me. To be clear, while the Owl does draw energy from your car’s battery, it’s smart enough to know when it needs to shut down. Last weekend, I didn’t drive my car for over 24 hours, so Owl shut itself down to make sure my battery wouldn’t be dead once I came back.

Owl, which launched last month, has $18 million in funding from Defy Ventures, Khosla Ventures, Menlo Ventures, Sherpa Capital and others. The company was founded by Andy Hodge, a former product lead at Apple and executive at Dropcam, and Nathan Ackerman, who formerly led development for Microsoft’s HoloLens.

P.S. I was listening to “Finesse” by Bruno Mars and Cardi B in the GIF above.