Hidden Amazon page drops hints about a ‘Fire TV Cube’

Rumors have been floating around for a few months now of a new device from Amazon that would mash up the media streaming capabilities of its Fire TV line with the voice assistant abilities of the Echo. After leaked images turned up showing a cube-shaped device that seemed to fit the bill, people started referring to the as-yet-unannounced device as the “Fire TV Cube.”

Sure enough: a seemingly official page has been found tucked away on Amazon.com that mentions a Fire TV Cube, and promises “details coming soon.”

As found by AFTVNews, the placeholder splash page offers up little beyond the promise of eventual details. It’s got a big ol’ header that says “What is Fire TV Cube?”, a button to let you sign up for more details and… well, that’s about it.

There’s also a mention of a “Fire TV Cube” elsewhere on Amazon.com, tucked away in the account management backend that lets folks toggle their subscriptions to any of the dozens of newsletters and email campaigns Amazon sends out.

According to the original leaks, the Fire TV Cube would have the speaker, far-field microphones and LED light bar of an Echo and the 4K video-capable guts of a Fire TV, allowing you to hook it up to your TV and have one device doing double the duties.

In other words: While there’s still no official word on when (or if!) this thing will actually ship, it definitely looks like Amazon is prepping something behind the scenes.

Facebook and the perils of a personalized choice architecture

Yafit Lev-Aretz
Contributor

Yafit Lev-Aretz is a Research Fellow at the Information Law Institute, New York University Law School.

The recent Facebook-Cambridge Analytica chaos has ignited a fire of awareness, bringing the risks of today’s data surveillance culture to the forefront of mainstream conversations.

This episode and the many disturbing prospects it has emphasized have forcefully awakened a sleeping giant: people are seeking information about their privacy settings and updating their app permissions, a “Delete Facebook” movement has taken off, and the FTC has launched an investigation into Facebook, sending Facebook’s stock down. A perfect storm.

The Facebook-Cambridge Analytica debacle is composed of pretty simple facts: Users allowed Facebook to collect personal information, and Facebook facilitated third-party access to the information. Facebook was authorized to do that pursuant to its terms of service, which users formally agreed to but rarely truly understood. The Cambridge Analytica access was clearly outside the scope of what Facebook, and most of its users, authorized. Still, this story has turned into an iconic illustration of the harms generated by massive data collection.

While it is important to discuss safeguards for minimizing the prospects of unauthorized access, the lack of consent is the wrong target. Consent is essential, but its artificial quality has been long-established. We already know that our consent is, more often than not, meaningless beyond its formal purpose. Are people really raging over Facebook failing to detect the uninvited guest who crashed our personal information feast when we’ve never paid attention to the guest list? Yes, it is annoying. Yes, it is wrong. But it is not why we feel that this time things went too far.

In their 2008 book, “Nudge,” Cass Sunstein and Richard Thaler coined the term “choice architecture.”  The idea is simple and pretty straightforward: the design of the environments in which people make decisions influences their choices. Kids’ happy encounters with candies in the supermarket are not serendipitous: candies are commonly located where children can see and reach them.

Tipping options in restaurants usually come in threes because individuals tend to go with the middle choice, and you must exit through the gift shop because you might be tempted to buy something on your way out. But you probably knew that already, because choice architecture has been around since the dawn of humanity and is present in any human interaction, design and structure. The term choice architecture is 10 years old, but choice architecture itself is far older.

The Facebook-Cambridge Analytica mess, together with many indications before it, heralds a new type of choice architecture: personalized, uniquely tailored to your own individual preferences and optimized to influence your decisions.

We are no longer in the familiar zone of choice architecture that equally applies to all. It is no longer about general weaknesses in human cognition. It is also not about biases that are endemic to human inferences. It is not about what makes humans human. It is about what makes you yourself.

When the information from various sources coalesces, the different segments of our personality come together to present a comprehensive picture of who we are. Personalized choice architecture is then applied to our datafied curated self to subconsciously nudge us to choose one course of action over another.

The soft spot at which personalized choice architecture hits is that of our most intimate self. It plays on the dwindling line between legitimate persuasion and coercion disguised as voluntary decision. This is where the Facebook-Cambridge Analytica story catches us — in the realization that the right to make autonomous choices, the basic prerogative of any human being, might soon be gone, and we won’t even notice.

Some people are quick to note that Cambridge Analytica did not use the Facebook data in the Trump campaign, and many others question the effectiveness of the psychological profiling strategy. However, none of this matters. Personalized choice architecture through microtargeting is on the rise, and Cambridge Analytica is neither the first nor the last to make successful use of it.

Jigsaw, for example, a Google-owned think tank, is using similar methods to identify potential ISIS recruits and redirect them to YouTube videos that present a counter-narrative to ISIS propaganda. Facebook itself was accused of targeting at-risk youth in Australia based on their emotional state. The Facebook-Cambridge Analytica story may have been the first high-profile incident to survive numerous news cycles, but many more are sure to come.

We must start thinking about the limits of choice architecture in the age of microtargeting. Like any technology, personalized choice architecture can be used for good and evil: It may identify individuals at risk and lead them to get help. It could motivate us into reading more, exercising more and developing healthy habits. It could increase voter turnout. But when misused or abused, personalized choice architecture can turn into a destructive manipulative force.

Personalized choice architecture can frustrate the entire premise behind democratic elections — that it is we, the people, and not a choice architect, who elect our own representatives. But even outside the democratic process, unconstrained personalized choice architecture can turn our personal autonomy into a myth.

Systemic risks such as those induced by personalized choice architecture will not be solved by people quitting Facebook or dismissing Cambridge Analytica’s strategies.

Personalized choice architecture calls for systematic solutions that involve a variety of social, economic, technical, legal and ethical considerations. We cannot let individual choice die out in the hands of microtargeting. Personalized choice architecture must not turn into nullification of choice.


‘Avengers: Infinity War’ is an overstuffed adventure with a terrific villain

When I saw the first trailer for Avengers: Infinity War, I was really excited and really worried.

Excited because, holy crap, there were so many characters. Iron Man! Captain America! Thor! Black Panther! Black Widow! The Vision! The Guardians of the Galaxy! And they were all going to be in a movie together!

Worried because, holy crap, there were so many characters. How could you squeeze all of them into a single film?

The answer is, with great difficulty. To be fair, Infinity War isn’t the giant mess that it could have been — in fact, it’s a lot of fun. But there’s simply not enough movie to do justice to the enormous cast.

Marvel Studios’ AVENGERS: INFINITY WAR. L to R: Spider-Man/Peter Parker (Tom Holland), Iron Man/Tony Stark (Robert Downey Jr.), Drax (Dave Bautista), Star-Lord/Peter Quill (Chris Pratt) and Mantis (Pom Klementieff). Photo: Film Frame. ©Marvel Studios 2018

Some of those characters fare better than others. For most of Infinity War, the “cosmic” side of the Marvel Cinematic Universe is well-represented by Thor (Chris Hemsworth) and the Guardians of the Galaxy (Chris Pratt, Zoe Saldana and team), who end up working together. Screenwriters Christopher Markus and Stephen McFeely are more willing to spend time with them, even when they’re not involved in a giant battle, and that pays off with the movie’s funniest moments — as well as scenes with real weight and melancholy.

Meanwhile, Iron Man (Robert Downey Jr.) and Spider-Man (Tom Holland) also get some good jokes in, recapturing the fun of their relationship in Spider-Man: Homecoming.

Everyone else? Well, they’re usually introduced with a nice quip or a bad-ass moment, designed to remind you of how much you liked them in their own movies. But afterwards, they tend to fade into the background, becoming just another moving part in the big action set pieces (and yes, this includes Marvel’s new MVP Black Panther). That’s probably about as good as any filmmaker could do when trying to stuff the entire Marvel Universe into a single movie, but it’s still a little disappointing after the first Avengers film managed to give us five distinct and memorable heroes (sorry, Hawkeye), and it got so much mileage out of throwing those heroes together.

Marvel Studios’ AVENGERS: INFINITY WAR. Thanos (Josh Brolin). Photo: Film Frame. ©Marvel Studios 2018

Luckily, the film’s real strength isn’t on the heroic side. Instead, as in Black Panther (and virtually no other Marvel movie), Infinity War‘s most memorable character is actually the villain, Thanos.

Previous films have reduced Thanos to a purple guy who utters a few threatening lines while sitting in his silly-looking space throne. In Infinity War, Thanos is at the center of the action. His quest to acquire the super-powered Infinity Stones drives the story, as all of Marvel’s heroes scramble to stop him, giving the film a constant feeling of crisis and leading fairly quickly to spectacular fights on Earth and in space. He even gets to kill off a surprisingly large number of those heroes (though I don’t expect all of those deaths to stick).

Over the course of the film, Thanos emerges as a dangerous and powerful alien who’s absolutely devoted to his mission of destroying half the life in the universe — kind of a weird goal, but as Walter Sobchak once said, at least it’s an ethos. And as portrayed by Josh Brolin (via voice acting and motion capture), he doesn’t come off as a cackling villain. Instead, he’s a weary soldier at the end of a long quest.

I shouldn’t say too much about where that quest leads, but I will note that Infinity War feels very much like the first half of a two-part film, with an ending that sets up the still-untitled Avengers 4 (due May 3, 2019).

I do think Infinity War falls a little short of Marvel’s best movies, like Black Panther and Captain America: The Winter Soldier (which, like Infinity War, was directed by Anthony and Joe Russo). But here’s one simple measure of the film’s success: Despite my reservations, that cliffhanger worked, and I really, really want to know what happens next.

It’s going to be a long wait till 2019.

Vacation rental management service Guesty raises $19.75M

As the vacation rental sector heats up — with Airbnb making even more moves to expand its portfolio of services to include multiple tiers of rentals — there’s going to be more and more of a need for people who manage a large number of properties.

Guesty is one service that aims to do exactly that, and today a filing with the Securities and Exchange Commission shows that it’s raised $19.75 million in a new Series B round of financing. While Airbnb may be the dominant home vacation rental service, there are others like VRBO, and managing properties across multiple platforms could otherwise mean juggling all of that information in something more analog, like an Excel sheet. Guesty is a kind of CRM tool for property management, covering everything from tracking guest check-ins to the amount of revenue a property generates. Guesty also helps property owners by providing tools to manage operations beyond just the tracking.

Airbnb earlier this year started rolling out more tiers of home categories that are geared toward different kinds of travelers. That included high-end tiers called Airbnb Plus and Beyond by Airbnb. While these new categories potentially offer a more granular set of choices for consumers, it might make managing those properties a little more difficult — especially if it’s across multiple different services like Airbnb and VRBO, or even more analog channels. Tools like Guesty can help owners of multiple different properties (that might span multiple tiers) turn those homes into an actual business.

There are also plenty of platforms that are looking for additional services for people managing multiple properties on vacation rental sites. There are startups like Beyond Pricing, which look to help property managers figure out how to best price their homes. Airbnb has its own pricing algorithms, but there’s clear demand for tools that cross multiple platforms. Guesty was part of Y Combinator’s winter 2014 class, and raised $3 million in May last year.

While Airbnb continues to try to expand into new categories and offer home owners a way to rent out their homes — or for owners of multiple properties to run a side business — it’s not the only approach to vacation rentals. One startup, Selina, is looking to convert existing properties into campuses of sorts that cater to different tiers of travelers, from those looking to stay in a hostel to those willing to pay for their own rooms. Selina earlier this month said it raised $95 million. Selina follows more of a hotel-ish model as it expands from geography to geography, but it also shows that there’s demand for an experience that can cater to a wide variety of guests.

IoT ‘conversation’ and ambient contextuality

A few years back, I wrote about the way we communicate with our technology. It was obvious even then that a big game-changer would be enabling a reliable conversational interaction with technology in order to overcome the friction humans experience when we use our modern tools, be they apps, phones, cars or semi-autonomous coffee makers. Too much typing and swiping and app management crowds our experiences with our connected “things.”

To some degree, this game-changer has come to pass.

Voice interaction is now a big part of technology interface in everything from smartphones to virtual assistant/smart speaker products to connected home and vehicle solutions — and so it will be going forward. While this is marked progress, it is not really “conversation.”

For the most part, the state of voice interaction is more akin to commanding a four-year-old to do your bidding than having a useful, rich conversation with a friend or assistant. As we continue to minimize friction and advance usability of technology via voice, it is clear that more is needed. I’ll predict right here that the next big game-changer in technology interface is ambient contextuality.

Ambient contextuality hinges on the idea that there is information hidden all around us that helps clarify our intent in any given conversation. Answering the simple questions of who, what, where and when is now easier than ever as IoT continues to mine and mind the data of our lives. I once sketched out a derivative needs pyramid for IoT devices using the example of Maslow’s hierarchy of needs pyramid to chart a course for “thing-actualization,” whereby our technology could use analytics, learned logic and predictive behavior to establish groups and networks of things and enable other more “complex” things. The voice interfaces and natural-language processing technology on display in interactive speakers such as Amazon’s Echo or Apple’s HomePod are examples of this actualization in action — predictive analytics and machine learning imbued into objects and interfaces to technology that collect data and collectively power progressively complex functions, often in real time.

But it is still not conversation. There is a new, nascent communications triangle between people, processes and things that fuels usability, and it still has a bit of its own growing up to do.

Deeper questions like how and why are also key to conversation for humans. To achieve truly conversational interactions, one or many of the answers to these questions not only need to be captured, but also learned and retained. Recently, Google has made some good strides into this for targeted types of online search. But we have to do much more before something akin to natural conversation emerges.


Most human conversation is abridged. Known quantities may not even be discussed, but they are deeply factored into interaction. A simple example is shifting from nouns and proper names to pronouns. “I asked about Dave’s vacation and Jen said she’d take him to the airport to kick it off right.” This may seem like a small thing, but think about how unnatural a conversation is when you cannot use human “shorthand.” Referring to every subject in every sentence by its proper name quickly becomes as uncomfortable as it is unnatural.
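
To make that concrete, here is a toy sketch of the idea that established context lets later pronouns stand in for names. This does not reflect any real assistant’s implementation; `mention` and `resolve` are made-up helpers for illustration only.

```python
# Toy sketch: once referents are established in a shared context,
# later pronouns can be resolved without repeating proper names.
context: dict[str, str] = {}

def mention(name: str, gender: str) -> None:
    """Record a referent so later pronouns can point back to it."""
    context[gender] = name

def resolve(pronoun: str) -> str:
    """Map a pronoun to the most recently mentioned matching referent."""
    pronoun_gender = {"he": "m", "him": "m", "she": "f", "her": "f"}
    return context.get(pronoun_gender[pronoun], pronoun)

# "I asked about Dave's vacation and Jen said she'd take him to the airport."
mention("Dave", "m")
mention("Jen", "f")
print(resolve("she"), "will take", resolve("him"), "to the airport")
# prints: Jen will take Dave to the airport
```

The machine version of the sentence has to carry the full referent every time; the human version relies on context that was established once and then left implicit.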

A simple definition of a conversation is an informal exchange of sentiment and ideas, and it’s the way people naturally communicate with each other. Informal conversation is contextual, cohesive and comprehensive. It involves a lot of storytelling. It ebbs and flows, jumps around in time and tense, references shared experience or knowledge to exchange new experiences and knowledge. It is inference infused and doesn’t require adherence to strict conventions. But this is pretty much the exact opposite of the way “things” are designed to communicate. Machine communication is specific to whatever technology drives it and is based on code. It is binary, resource-constrained, inflexible, standalone, purely informational and lacks context. It is rigid and formal. It is very much not storytelling.

This elemental difference in communication creates a usability gap, which we have traditionally bridged by forcing people to learn to “speak” machine — download a new app to control every new device, use this set of wake words or language constructs for one device and an entirely different set for another, update, update, update, and if-this-then-that for everything. It’s why so many “things” end up thrown in a drawer after two weeks, never to be used again. This is not the kind of conversation humans want to have.

Putting aside the creepiness factor and important privacy issues surrounding devices that constantly collect information about us, establishing ambient contextuality to enable the kinds of conversations we do want to have is the actual end goal of all this connected stuff. The aim is to smooth our experiences with our technology throughout the day and blur the seams enough to feel natural to us.

The challenge now is to make our machines “speak” human — to imbue them with context and inference and informality so that conversation flows naturally. DARPA has been working on it. So, too, have Amazon and Google. In fact, most technology efforts are concerned with reducing interface friction. Improving the quality of our conversation is key to achieving that goal.

Development on IoT, augmented and mixed reality, Assistive Intelligence (my term for AI, but that’s an entirely different conversation) and even the miniaturization and extension properties on display in mobility and power advancements are all examples of the quest for that quality. Responsibly developed ambient contextuality, and ultimately natural conversation, will be better enabled by these technologies, and our lives will become much more conversational soon. Once we experience reliable and useful conversations with our technological world, I think we will all be hooked.

Digit’s first move beyond saving money is a feature to pay down credit card debt

Digit, the developer of a wildly popular automatic savings mobile app, is moving beyond its core business with a new feature enabling users to pay down credit card debt from their Digit account.

Announced earlier today, the new Digit Pay service uses savings in a Digit account to pay off credit card debt for any registered account.

The new feature works by enabling users to create a “credit card debt” goal in their Digit settings and activate the Digit Pay service. Digit automatically will begin to save money from a linked checking account — and use those funds to pay off credit cards. Credit card payments can even be prioritized through Digit’s boost feature.

So far, the Digit app has been used to save roughly $1 billion for its customers, according to chief executive officer Ethan Bloch.

Bloch says that Digit has been focused on solving the biggest financial pain points for the most customers it can reach in the U.S. For the company, that meant starting with savings… and moving on to the next biggest threat to customers’ financial health in the U.S. — debt.

Roughly 75 percent of the company’s customers have credit card debt (hi, my name is Jon and I’m a Digit customer).

In the U.S. there’s about $1 trillion of credit card debt outstanding — a stat that’s very no bueno for the U.S. economy. On top of that, the average U.S. household owes about $16,883 in credit card debt and pays about $1,292 in interest each year (credit card companies thank you).
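
As a rough sanity check (my own arithmetic, not a figure from the article’s sources), those two averages imply an effective annual interest rate:

```python
# Back-of-the-envelope arithmetic on the household averages cited above.
avg_household_debt = 16_883   # dollars owed on credit cards
annual_interest_paid = 1_292  # dollars of interest paid per year

implied_rate = annual_interest_paid / avg_household_debt
print(f"Implied effective annual rate: {implied_rate:.1%}")
# prints roughly 7.7%
```

That blended rate is well below typical card APRs, which suggests the averages mix households that carry balances with those that pay in full each month.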

For folks who need a refresher on how Digit works, the company’s app provides a service that connects to checking accounts from almost any bank. Digit’s software analyzes income and spending and then sets aside small amounts of money at intervals that won’t impact the account. The company offers a 1 percent annualized savings bonus for people who save with Digit for three months, and the service costs $2.99 per month after a free 100-day trial period.
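
To put those numbers together, here is a hypothetical illustration of the subscription economics. It assumes the 1 percent annualized bonus accrues on a constant average balance; the actual payout mechanics are Digit’s own and may differ.

```python
# Hypothetical economics of the subscription described above.
MONTHLY_FEE = 2.99        # dollars per month after the trial
ANNUAL_BONUS_RATE = 0.01  # 1 percent annualized savings bonus

def net_annual_cost(avg_balance: float) -> float:
    """Fees paid minus bonus earned over one year of use."""
    return 12 * MONTHLY_FEE - ANNUAL_BONUS_RATE * avg_balance

# Average balance at which the bonus fully offsets the fee
break_even = 12 * MONTHLY_FEE / ANNUAL_BONUS_RATE
print(f"Break-even average balance: about ${break_even:,.0f}")
```

Under those assumptions, a user would need to keep several thousand dollars saved, on average, before the bonus covered the fee.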

Those savings are placed in a rainy day fund or toward any other financial goals that a user sets in the app. They can be customized, and the latest customization is this Digit Pay option.

It’s the first time that Digit is linking back out to other vendors and it paves the way for other services using the Digit balance.

One thing that users shouldn’t expect to see anytime soon is an investment feature in Digit, according to Bloch. “Digit was founded to make financial health effortless,” Bloch said. While investment tools are good for helping their users make more money, Bloch said they weren’t core to his view of financial health.

“We’ll be focused on those two… savings and credit card debt,” he said.

New numbers illustrate how fast fundraising has changed for young startups

Fundraising is never easy, but it’s even harder when the goal posts are being moved around. Such is the challenge facing today’s youngest startups, which are looking at very different fundraising metrics than new startups did just six or seven years ago.

We explored the issue yesterday with Peter Wagner, who spent more than 14 years with Accel as a managing partner before co-founding the early-stage firm Wing Venture Capital in 2013 with another veteran investor, Gaurav Garg, formerly of Sequoia Capital.

Wagner has an obvious interest in how rounds are changing. Wing has to know how much is reasonable to expect to invest in a company, even while it prefers to invest in companies that don’t yet have revenue or customers. In a competitive funding landscape, its now four-person investing team is also looking to raise the firm’s profile by publishing smart industry research, including, not so long ago, on the state of IoT.

Whatever Wing’s motivations, its findings are worth tracking if you’re a founder who is thinking about raising either a seed or Series A round any time soon. More from our chat with Wagner, along with Wing’s data, follows.

TC: Your second fund, $300 million, was nearly twice the size of your $160 million debut fund. Do you expect your third fund will be even larger? Is this going to be an Accel-size firm some day?

PW: No, we’re actually working hard to keep a lid on our fund size. Early-stage investing doesn’t scale. For us to grow, we’d have to change our investing strategy.

TC: So many firms are doing exactly that, with the notable exception of Benchmark, which has maintained its fund size for roughly the last 18 years.

PW: I was at Accel when we were [expanding into] having a later-stage practice. We sought out different skills [from potential hires] because it’s a different process. In fact, the more we learned about it, the more we realized how different a discipline it is.

TC: Given that you’re so focused on early-stage financing dynamics, tell us what you’ve learned. How did you put together this new report?

PW: We looked at companies that were funded by the 20 or so leading venture firms between 2010 and 2017. It’s 2,700 companies altogether, and 5,800 financings. If a company raised a seed round from another firm, but Sequoia led its Series A, all of its financing rounds, including that seed round, were incorporated into our research. We also focused on these companies’ downstream financings [no matter the investors].

TC: So some of these companies are pretty new. Others are eight years old. What should founders know about the numbers?

PW: Today’s cumulative seed capital — because it often comes in in multiple rounds — is larger than the average Series A round was in 2010, which wasn’t all that long ago. The average Series A in 2010 was $4.9 million; by last year, it had reached $12.1 million. The average amount of seed funding a startup raised in 2010 was $1.4 million; as of last year, it was $6.3 million.

TC: That’s a big uptick. Do you find it concerning at all?

PW: Not necessarily. It’s a reflection of the changing strategies of major venture firms. Those defined as Series A investors have mostly adopted a later-stage posture and at scale. And when you’re scaling a venture firm, you’ll do more later-stage investing because you can invest more money. That’s one of the things pulling up Series A sizes.

TC: Looking at another of your charts, it looks like the companies raising A rounds have to be a lot further along than was formerly the case. That’s not exactly a news flash, but it’s still interesting. Perhaps more telling is that 67 percent of them were already generating revenue, compared with just 11 percent of their peers in 2010. The same is playing out for seed investments.

PW: Yes, just 9 percent of seed-funded companies were generating revenue back in 2010; last year, more than half of them were.

TC: So much for “venture” investing. Since everyone is taking so much less risk on these companies at the seed and Series A stage, are early-stage VCs getting less in terms of their ownership of these startups?

PW: Ownership percentages [outside of Wing] are hard to get, other than in IPO prospectuses. Based on anecdotal data and what I’ve observed, major firms are still looking for the same ownership percentages. They’re just paying a lot more for it.

TC: You have other interesting data, including around the number of financings that startups are sealing up before they get to the Series A. It used to be A was the second round. Now, companies have raised nearly three rounds before they get to that point.

That seems not great for founders, who are giving away part of their company with every financing.

PW: As you know, “pre-seed” is a thing now, as are “seed plus” financings. So you have this segmentation within the world of seed before you get to post-adoption, where you have some evidence that things are working and investors can see how rapidly. Seed is the new A.

As for whether founders own less because of this trend, that’s a hard one to track, again because ownership stats are the last ones you’ll find.

TC: Well, you’re investing very early on, at the pre-seed or pre-adoption phase in many cases. Are you still taking the 20 percent that you looked to own when you were doing Series A deals that looked more like seed deals?

PW: Ideally. Other times, we’ll start with a smaller position and build up to that. We play the role of go-to partner, so we want to be in that ownership position.

TC: With things shifting around so much, where is the Valley of Death these days? You obviously have to have a strong startup to land Series A funding.

PW: It’s interesting. Major firms have adopted these scaled-up strategies and they’ve outsourced a lot of the adoption work to investors and incubators and angel investors, who are launching a fleet of a thousand ships. That enables the firms to hang around and see which startups look the best and pick and choose.

What’s notable is they don’t have as much vested interest in companies at the Series A because it’s very different when you make a new investment versus a follow-on investment. It used to be that individuals at these venture firms were involved much earlier. 

I’m not sure if that’s a healthy or unhealthy development. But it does mean that seed firms have been presented with this expanded territory from which these other firms have backed away. Somebody has to do the foundation building. It’s a great opportunity for seed investors to play a bigger role, but it can certainly be a confusing time for founders, with investors changing, along with the criteria for who you let into your inner circle.

TC: You’ve been in venture for more than 20 years. Is there a correction coming or has something fundamentally changed?

PW: There will be a correction. There will always be a correction. Every time we’ve ever thought the cycle has been broken, we’ve been proven wrong. VC is cyclical. What I don’t know is the date of that correction or how deep it will be.

TC: Do you think venture firms should be raising such gigantic funds right now, given this likelihood? 

PW: The last time around [in the late ’90s], a bunch of people raised really big funds and wound up releasing half the capital or more back to their limited partners when the market changed. Returns on big funds have always disappointed. Things do change and tech is a much more important ingredient. But I do think this is still a boom-bust business.

MobileCoin, a cryptocurrency from the creator of Signal, just raised $30M for mobile payments

A new privacy-centric cryptocurrency project with some big names on board just raised a round worth noting. On Tuesday, the team at MobileCoin announced that Binance Labs, the major blockchain incubator associated with the Binance exchange, led a $30 million round denominated in bitcoin and ether for the new cryptocurrency. MobileCoin will enjoy “priority consideration” for being listed on Binance as part of the relationship.

New cryptocurrency projects are a dime (or less) a dozen, but the legitimacy of an established name can make all the difference. Moxie Marlinspike, the founder of end-to-end encrypted messaging app Signal and Open Whisper Systems, is one such name. As Wired reported in December, Marlinspike began working with MobileCoin as a technical advisor in August of 2017.

Marlinspike is joined by Joshua Goldbard, a general partner at hedge fund Crypto Lotus and MobileCoin technologist, and Shane Glynn, legal counsel, to help the company navigate the choppy waters of cryptocurrency regulation. Glynn has served since 2010 as senior product counsel at Google, though it’s not clear if he is leaving his longtime role for the new project.

In the MobileCoin whitepaper, published in December, the project’s creators describe its mission:

…Most attempts at building a compelling crypto-currency user experience unfortunately resort to trusting a third party service to manage keys and validate transactions. This largely sacrifices the primary benefits offered by crypto-currency to begin with.

MobileCoin is an effort to develop a fast, private, and easy-to-use cryptocurrency that can be deployed in resource constrained environments to users who aren’t equipped to reliably maintain secret keys over a long period of time, all without giving up control of funds to a payment processing service.

MobileCoin transactions will synchronize to the coin’s network using the Stellar Consensus Protocol for scalability and speed. The end product will emphasize user privacy and integration into mobile messaging apps, including WhatsApp and Signal — two apps that use Marlinspike’s end-to-end encrypted Signal Protocol.

“MobileCoin is designed so that a mobile messaging application like WhatsApp, Facebook Messenger, or Signal could integrate with a MobileCoin wallet,” the team described in its whitepaper.

Marlinspike is a rare sort of reverse tech celebrity, a figure who eschews both spotlight and Silicon Valley-style excess and has instead cultivated quiet respect in digital privacy and cryptography circles. That makes him an odd fit for the fraud-laden universe of empty multi-million-dollar ICOs with no product to speak of, but it also means that MobileCoin is probably worth paying attention to. At the very least, the prominent cryptographer’s new project should amuse anyone who’s complained about the digital currency world’s habit of using the term “crypto” as shorthand for “cryptocurrency.”

MobileCoin has funding and talent, but it’s still very early days for the nascent cryptocurrency. As an incubator, Binance Labs concentrates on pre-ICO projects and MobileCoin will use the funding to “build out [its] team and processes” as it develops its product.

“A mobile-first, user-friendly cryptocurrency, like MobileCoin, plays a critical role in driving mainstream cryptocurrency adoption,” Binance Labs said of the funding. “The MobileCoin team and Binance Labs share a common vision and we are proud to be a supporter of what they are doing.”

Along with the news, MobileCoin announced that it is recruiting a “core team” of engineers:

“Specifically, we are looking for those who have worked on large systems (greater than 10,000,000 daily active users) in a senior role who enjoy working on low-level code. Direct memory access is a critical part of our problem set.”

Given the legitimacy of Marlinspike’s best-known project and his reluctance to attach his name to things, it’s not unreasonable to give MobileCoin the benefit of the doubt, even if aspects of its raison d’être remain unarticulated. Beyond the core question of why a new cryptocurrency needs to exist at all, MobileCoin will need to position itself as a compelling alternative to existing mainstream mobile payment services like Venmo and PayPal for normal users.

MobileCoin will also face the full slate of regulatory challenges, including fraud prevention, that plague other digital currency projects, though given its stealthy behavior and the fact that one-third of the three-member team listed on its website represents legal counsel, its founders don’t appear to be charging in recklessly.

“This is a journey and we are excited to build a simple system for trusted payments,” Goldbard wrote in the announcement.

In the digital currency realm, too much style — think celeb-endorsed ICOs and endless press release hype cycles — can signal a lack of substance. The reverse can be true too, and in MobileCoin’s case, a modest mission could be a strong signal for a compelling product a bit further down the blockchain.

Kogan: ‘I don’t think Facebook has a developer policy that is valid’

A Cambridge University academic at the center of a data misuse scandal involving Facebook user data and political ad targeting faced questions from the UK parliament this morning.

The two-hour evidence session in front of the DCMS committee’s fake news enquiry raised rather more questions than it answered, however, with professor Aleksandr Kogan citing an NDA he said he had signed with Facebook to decline to answer some of the committee’s questions (including why and when exactly the NDA was signed).

TechCrunch understands the NDA relates to standard confidentiality provisions regarding deletion certifications and other commitments made by Kogan to Facebook not to misuse user data — after the company learned he had passed user data to SCL in contravention of its developer terms.

Asked why he had a non-disclosure agreement with Facebook, Kogan told the committee it would have to ask Facebook. He also declined to say whether any of his company co-directors (one of whom now works for Facebook) had been asked to sign an NDA. Nor would he specify whether the NDA had been signed in the US.

Asked whether he had deleted all the Facebook data and derivatives he had been able to acquire Kogan said yes “to the best of his knowledge”, though he also said he’s currently conducting a review to make sure nothing has been overlooked.

A few times during the session Kogan made a point of arguing that data audits are essentially useless for catching bad actors — claiming that anyone who wants to misuse data can simply put a copy on a hard drive and “store it under the mattress”.

(Incidentally, the UK’s data protection watchdog is conducting just such an audit of Cambridge Analytica right now, after obtaining a warrant to enter its London offices last month — as part of an ongoing, year-long investigation into social media data being used for political ad targeting.)

Your company didn’t hide any data in that way, did it, a committee member asked. “We didn’t,” Kogan rejoined.

“This has been a very painful experience because when I entered into all of this Facebook was a close ally. And I was thinking this would be helpful to my academic career. And my relationship with Facebook. It has, very clearly, done the complete opposite,” Kogan continued.  “I had no interest in becoming an enemy or being antagonized by one of the biggest companies in the world that could — even if it’s frivolous — sue me into oblivion. So we acted entirely as they requested.”

Despite apparently lamenting the breakdown in his relations with Facebook — telling the committee how he had worked with the company, in an academic capacity, prior to setting up a company to work with SCL/CA — Kogan refused to accept that he had broken Facebook’s terms of service — instead asserting: “I don’t think they have a developer policy that is valid… For you to break a policy it has to exist. And really be their policy. The reality is Facebook’s policy is unlikely to be their policy.”

“I just don’t believe that’s their policy,” he repeated when pressed on whether he had broken Facebook’s ToS. “If somebody has a document that isn’t their policy you can’t break something that isn’t really your policy. I would agree my actions were inconsistent with the language of this document — but that’s slightly different from what I think you’re asking.”

“You should be a professor of semantics,” quipped the committee member who had been asking the questions.

A Facebook spokesperson told us it had no public comment to make on Kogan’s testimony. But last month CEO Mark Zuckerberg couched the academic’s actions as a “breach of trust” — describing the behavior of his app as “abusive”.

In evidence to the committee today, Kogan told it he had only become aware of an “inconsistency” between Facebook’s developer terms of service and what his company did in March 2015 — when he said he began to suspect the veracity of the advice he had received from SCL. At that point Kogan said GSR reached out to an IP lawyer “and got some guidance”.

(More specifically he said he became suspicious because former SCL employee Chris Wylie did not honor a contract between GSR and Eunoia, a company Wylie set up after leaving SCL, to exchange data-sets; Kogan said GSR gave Wylie the full raw Facebook data-set but Wylie did not provide any data to GSR.)

“Up to that point I don’t believe I was even aware or looked at the developer policy. Because prior to that point — and I know that seems shocking and surprising… the experience of a developer in Facebook is very much like the experience of a user in Facebook. When you sign up there’s this small print that’s easy to miss,” he claimed.

“When I made my app initially I was just an academic researcher. There was no company involved yet. And then when we commercialized it — so we changed the app — it was just something I completely missed. I didn’t have any legal resources, I relied on SCL [to provide me with guidance on what was appropriate]. That was my mistake.”

“Why I think this is still not Facebook’s policy is that we were advised [by an IP lawyer] that Facebook’s terms for users and developers are inconsistent. And that it’s not actually a defensible position for Facebook that this is their policy,” Kogan continued. “This is the remarkable thing about the experience of an app developer on Facebook. You can change the name, you can change the description, you can change the terms of service — and you just save changes. There’s no obvious review process.

“We had a terms of service linked to the Facebook platform that said we could transfer and sell data for at least a year and a half — nothing was ever mentioned. It was only in the wake of the Guardian article [in December 2015] that they came knocking.”

Kogan also described the work he and his company had done for SCL Elections as essentially worthless — arguing that using psychometrically modeled Facebook data for political ad targeting in the way SCL/CA had apparently sought to do was “incompetent” because they could have used Facebook’s own ad targeting platform to achieve greater reach and with more granular targeting.

“It’s all about the use-case. I was very surprised to learn that what they wanted to do is run Facebook ads,” he said. “This was not mentioned, they just wanted a way to measure personality for many people. But if the use-case you have is Facebook ads it’s just incompetent to do it this way.

“Taking this data-set you’re going to be able to target 15% of the population. And use a very small segment of the Facebook data — page likes — to try to build personality models. Why do this when you could very easily go target 100% and use much more of the data. It just doesn’t make sense.”

Asked what, then, was the value of the project he undertook for SCL, Kogan responded: “Given what we know now, nothing. Literally nothing.”

He repeated his prior claim that he was not aware that work he was providing for SCL Elections would be used for targeting political ads, though he confirmed he knew the project was focused on the US and related to elections.

He also said he knew the work was being done for the Republican party — but claimed not to know which specific candidates were involved.

Pressed by one committee member on why he didn’t care to know which politicians he was indirectly working for, Kogan responded by saying he doesn’t have strong personal views on US politics or politicians generally — beyond believing that most US politicians are at least reasonable in their policy positions.

“My personal position on life is unless I have a lot of evidence I don’t know. Is the answer. It’s a good lesson to learn from science — where typically we just don’t know. In terms of politics in particular I rarely have a strong position on a candidate,” said Kogan, adding that therefore he “didn’t bother” to make the effort to find out who would ultimately be the beneficiary of his psychometric modeling.

Kogan told the committee his initial intention had not been to set up a business at all but to conduct not-for-profit big data research — via an institute he wanted to establish — claiming it was Wylie who had advised him to also set up the for-profit entity, GSR, through which he went on to engage with SCL Elections/CA.

“The initial plan was we collect the data, I fulfill my obligations to SCL, and then I would go and use the data for research,” he said.

And while Kogan maintained he had never drawn a salary from the work he did for SCL — saying his reward was “to keep the data”, and get to use it for academic research — he confirmed SCL did pay GSR £230,000 at one point during the project; a portion of which he also said eventually went to pay lawyers he engaged “in the wake” of Facebook becoming aware that data had been passed to SCL/CA by Kogan — when it contacted him to ask him to delete the data (and presumably also to get him to sign the NDA).

In one curious moment, Kogan claimed not to know his own company had been registered at 29 Harley Street in London — which the committee noted is “used by a lot of shell companies some of which have been used for money laundering by Russian oligarchs”.

Seeming a little flustered he said initially he had registered the company at his apartment in Cambridge, and later “I think we moved it to an innovation center in Cambridge and then later Manchester”.

“I’m actually surprised. I’m totally surprised by this,” he added.

Did you use an agent to set it up, asked one committee member. “We used Formations House,” replied Kogan, referring to a company whose website states it can locate a business’ trading address “in the heart of central London” — in exchange for a small fee.

“I’m legitimately surprised by that,” added Kogan of the Harley Street address. “I’m unfortunately not a Russian oligarch.”

Later in the session another odd moment came when he was being asked about his relationship with Saint Petersburg University in Russia — where he confirmed he had given talks and workshops, after traveling to the country with friends and proactively getting in touch with the university “to say hi” — and specifically about some Russian government-funded research being conducted by researchers there into cyberbullying.

Committee chair Collins implied to Kogan the Russian state could have had a specific malicious interest in such a piece of research, and wondered whether Kogan had thought about that in relation to the interactions he’d had with the university and the researchers.

Kogan described it as a “big leap” to connect the piece of research to Kremlin efforts to use online platforms to interfere in foreign elections — before essentially going on to repeat a Kremlin talking point by saying the US and the UK engage in much the same types of behavior.

“You can make the same argument about the UK government funding anything or the US government funding anything,” he told the committee. “Both countries are very famous for their spies.

“There’s a long history of the US interfering with foreign elections and doing the exact same thing [creating bot networks and using trolls for online intimidation].”

“Are you saying it’s equivalent?” pressed Collins. “That the work of the Russian government is equivalent to the US government and you couldn’t really distinguish between the two?”

“In general I would say the governments that are most high profile I am dubious about the moral scruples of their activities through the long history of UK, US and Russia,” responded Kogan. “Trying to equate them I think is a bit of a silly process. But I think certainly all these countries have engaged in activities that people feel uncomfortable with or are covert. And then to try to link academic work that’s basic science to that — if you’re going to go down the Russia line I think we have to go down the UK line and the US line in the same way.

“I understand Russia is a hot-button topic right now but outside of that… Most people in Russia are like most people in the UK. They’re not involved in spycraft, they’re just living lives.”

“I’m not aware of UK government agencies that have been interfering in foreign elections,” added Collins.

“Doesn’t mean it’s not happened,” replied Kogan. “Could be just better at it.”

During Wylie’s evidence to the committee last month the former SCL data scientist had implied there could have been a risk of the Facebook data falling into the hands of the Russian state as a result of Kogan’s back and forth travel to the region. But Kogan rebutted this idea — saying the data had never been in his physical possession when he traveled to Russia, pointing out it was stored in a cloud hosting service in the US.

“If you want to try to hack Amazon Web Services good luck,” he added.

He also claimed not to have read the piece of research in question, even though he said he thought the researcher had emailed the paper to him — claiming he can’t read Russian well.

Kogan seemed most comfortable during the session when he was laying into Facebook’s platform policies — perhaps unsurprisingly, given how the company has sought to paint him as a rogue actor who abused its systems by creating an app that harvested data on up to 87 million Facebook users and then handing information on its users off to third parties.

Asked whether he thought a prior answer given to the committee by Facebook — when it claimed it had not provided any user data to third parties — was correct, Kogan said no, given that the company provides academics with “macro level” user data (including providing him with this type of data, in 2013).

He was also asked why he thinks Facebook lets its employees collaborate with external researchers — and Kogan suggested this is “tolerated” by management as a strategy to keep employees stimulated.

Committee chair Collins asked whether he thought it was odd that Facebook now employs his former co-director at GSR, Joseph Chancellor — who works in its research division — despite Chancellor having worked for a company Facebook has said it regards as having violated its platform policies.

“Honestly I don’t think it’s odd,” said Kogan. “The reason I don’t think it’s odd is because in my view Facebook’s comments are PR crisis mode. I don’t believe they actually think these things — because I think they realize that their platform has been mined, left and right, by thousands of others.

“And I was just the unlucky person that ended up somehow linked to the Trump campaign. And we are where we are. I think they realize all this but PR is PR and they were trying to manage the crisis and it’s convenient to point the finger at a single entity and try to paint the picture this is a rogue agent.”

At another moment during the evidence session Kogan was also asked to respond to denials previously given to the committee by former CEO of Cambridge Analytica Alexander Nix — who had claimed that none of the data it used came from GSR and — even more specifically — that GSR had never supplied it with “data-sets or information”.

“Fabrication,” responded Kogan. “Total fabrication.”

“We certainly gave them [SCL/CA] data. That’s indisputable,” he added.

In written testimony to the committee he also explained that he in fact created three apps for gathering Facebook user data. The first one — called the CPW Lab app — was developed after he had begun a collaboration with Facebook in early 2013, as part of his academic studies. Kogan says Facebook provided him with user data at this time for his research — although he said these datasets were “macro-level datasets on friendship connections and emoticon usage” rather than information on individual users.

The CPW Lab app was used to gather individual level data to supplement those datasets, according to Kogan’s account. He specifies that data collected via this app was housed at the university, used for academic purposes only, and “not provided to the SCL Group”.

Later, once Kogan had set up GSR and was intending to work on gathering and modeling data for SCL/Cambridge Analytica, the CPW Lab app was renamed to the GSR App and its terms were changed (with the new terms provided by Wylie).

Thousands of people were then recruited to take this survey via a third company — Qualtrics — with Kogan saying SCL directly paid ~$800,000 to it to recruit survey participants, at a cost of around $3-$4 per head (he says between 200,000 and 300,000 people took the survey as a result in the summer of 2014; NB: Facebook doesn’t appear to be able to break out separate downloads for the different apps Kogan ran on its platform — it told us about 305,000 people downloaded “the app”).

In the final part of that year, after data collection had finished for SCL, Kogan said his company revised the GSR App to become an interactive personality quiz — renaming it “thisisyourdigitallife” and leaving the commercial portions of the terms intact.

“The thisisyourdigitallife App was used by only a few hundred individuals and, like the two prior iterations of the application, collected demographic information and data about “likes” for survey participants and their friends whose Facebook privacy settings gave participants access to “likes” and demographic information. Data collected by the thisisyourdigitallife App was not provided to SCL,” he claims in the written testimony.

During the oral hearing, Kogan was pressed on misleading T&Cs in his two commercial apps. Asked by a committee member about the terms of the GSR App not specifying that the data would be used for political targeting, he said he didn’t write the terms himself but added: “If we had to do it again I think I would have insisted to Mr Wylie that we do add politics as a use-case in that doc.”

“It’s misleading,” argued the committee member. “It’s a misrepresentation.”

“I think it’s broad,” Kogan responded. “I think it’s not specific enough. So you’re asking for why didn’t we go outline specific use-cases — because the politics is a specific use-case. I would argue that the politics does fall under there but it’s a specific use-case. I think we should have.”

The committee member also noted how, “in longer, denser paragraphs” within the app’s T&Cs, the legalese does also state that “whatever that primary purpose is you can sell this data for any purposes whatsoever” — making the point that such sweeping terms are unfair.

“Yes,” responded Kogan. “In terms of speaking the truth, the reality is — as you’ve pointed out — very few if any people have read this, just like very few if any people read terms of service. I think that’s a major flaw we have right now. That people just do not read these things. And these things are written this way.”

“Look — fundamentally I made a mistake by not being critical about this. And trusting the advice of another company [SCL]. As you pointed out GSR is my company and I should have gotten better advice, and better guidance on what is and isn’t appropriate,” he added.

“Quite frankly my understanding was this was business as usual and normal practice for companies to write broad terms of service that didn’t provide specific examples,” he said after being pressed on the point again.

“I doubt in Facebook’s user policy it says that users can be advertised for political purposes — it just has broad language to provide for whatever use cases they want. I agree with you this doesn’t seem right, and those changes need to be made.”

At another point, he was asked about the Cambridge University Psychometrics Centre — which he said had initially been involved in discussions between him and SCL to be part of the project but fell out of the arrangement. According to his version of events the Centre had asked for £500,000 for their piece of proposed work, and specifically for modeling the data — which he said SCL didn’t want to pay. So SCL had asked him to take that work on too and remove the Centre from the negotiations.

As a result of that, Kogan said the Centre had complained about him to the university — and SCL had written a letter to it on his behalf defending his actions.

“The mistake the Psychometrics Centre made in the negotiation is that they believed that models are useful, rather than data,” he said. “And actually just not the same. Data’s far more valuable than models because if you have the data it’s very easy to build models — because models use just a few well understood statistical techniques to make them. I was able to go from not doing machine learning to knowing what I need to know in one week. That’s all it took.”

In another exchange during the session, Kogan denied he had been in contact with Facebook in 2014. Wylie previously told the committee he thought Kogan had run into problems with the rate at which the GSR App was able to pull data off Facebook’s platform — and had contacted engineers at the company at the time (though Wylie also caveated his evidence by saying he did not know whether what he’d been told was true).

“This never happened,” said Kogan, adding that there was no dialogue between him and Facebook at that time. “I don’t know any engineers at Facebook.”

Voyage open-sources autonomous driving safety practices

Voyage, the self-driving car spin-out from Udacity, is open-sourcing its approach to autonomous driving safety. This comes at a time when autonomous driving programs are under intense scrutiny following two fatal crashes — one involving Tesla’s Autopilot and the other involving one of Uber’s self-driving cars in Tempe, Arizona. Meanwhile, Voyage has successfully deployed five Level 4 self-driving vehicles in retirement communities in California and Florida.

Dubbed Open Autonomous Safety, the initiative aims to help autonomous driving startups implement better safety-testing practices. Companies looking to access the documents, safety procedures and test code can do so via a GitHub repository.

“Each and every autonomous vehicle startup today has to define their own safety programs, and we think that is dangerous,” Voyage CEO Oliver Cameron tweeted earlier today.

Version one includes scenario testing, functional safety, autonomy assessment and a testing toolkit. Later this year, OAS will release driver training material, additional scenarios and fault injection code and tests.

Here’s a quick breakdown of what the above currently entails:

  • Scenario testing: Looks at fundamental questions, like how self-driving cars behave around pedestrians and when cars back out of driveways.
  • Functional safety: Helps to ensure safety without a driver present.
  • Autonomy assessment: Validates whether or not the car is moving in the right direction “and how we know that we are solving the right problems,” Cameron wrote in a blog post.
  • Testing toolkit: A library of traffic, roadway and vehicle assets.

“When it comes to safety, we believe open is better. At Voyage, we welcome contributions to improve OAS, like any other open source project,” Cameron wrote in a blog post. “The purpose of this effort is to promote an elevated standard of safety in the autonomous vehicle industry, increasing public trust through transparency.”

Facebook shuts down custom feed-sharing prompts and 12 other APIs

Facebook is making good on Mark Zuckerberg’s promise to prioritize user safety and data privacy over its developer platform. Today Facebook and Instagram announced a slew of API shutdowns and changes designed to stop developers from being able to pull your data or your friends’ data without express permission, drag in public content or trick you into sharing. Some changes go into effect today, and others roll out on August 1 so developers have more than 90 days to fix their apps. They follow the big changes announced two weeks ago.

Most notably, app developers will have to start using the standardized Facebook sharing dialog to request the ability to publish to the News Feed on a user’s behalf. They’ll no longer be able to use the publish_actions API that let them design a custom sharing prompt. A Facebook spokesperson says this change was planned for the future because the consistency helps users feel in control, but the company moved the deadline up to August 1 as part of today’s updates because it didn’t want to have to make multiple separate announcements of app-breaking changes.

Facebook app developers will now have to use this standard Facebook sharing prompt since the publish_actions API for creating custom prompts is shutting down
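In practical terms, instead of posting on a user’s behalf via publish_actions and a custom prompt, an app now sends the user into Facebook’s standard Share Dialog. Here is a minimal sketch of building that dialog URL; the app ID and URLs are placeholders, and the endpoint and parameters follow Facebook’s publicly documented Share Dialog:

```python
from urllib.parse import urlencode

def share_dialog_url(app_id: str, href: str, redirect_uri: str) -> str:
    """Build the URL for Facebook's standard Share Dialog, which the
    user completes (or cancels) inside Facebook's own UI."""
    params = {
        "app_id": app_id,              # your app's ID (placeholder here)
        "display": "popup",
        "href": href,                  # the link being shared
        "redirect_uri": redirect_uri,  # where Facebook sends the user after
    }
    return "https://www.facebook.com/dialog/share?" + urlencode(params)

url = share_dialog_url("123456", "https://example.com/story", "https://example.com/done")
print(url)
```

Because the sharing UI is Facebook’s own, apps can no longer style or word the prompt themselves, which is the consistency the spokesperson describes.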

One significant Instagram Graph API change is going into effect today, which removes the ability to pull the name and bio of users who leave comments on your content, though commenters’ usernames and comment text are still available.
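Concretely, a comments request after today’s change would ask only for the fields that remain available. A hedged sketch of building such a Graph API request URL (the media ID, access token and API version below are placeholders, not real values):

```python
from urllib.parse import urlencode

GRAPH = "https://graph.facebook.com/v3.0"  # version is illustrative

def comments_request_url(media_id: str, access_token: str) -> str:
    # Ask only for the commenter's username and the comment text;
    # the commenter's name and bio can no longer be pulled.
    params = {"fields": "username,text", "access_token": access_token}
    return f"{GRAPH}/{media_id}/comments?" + urlencode(params)

url = comments_request_url("MEDIA_ID", "ACCESS_TOKEN")
print(url)
```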

Facebook’s willingness to put user safety over platform utility indicates a maturation of the company’s “Hacker Way,” which played fast and loose with people’s data to attract developers who would, in turn, create functionality that soaked up more attention.

For more on Facebook’s API changes, check out our breakdown of the major updates:

Popular crypto wallet MEW hit by DNS attack that drained some users’ accounts

There is concern, tears and lost money in the world of crypto once again after MyEtherWallet (MEW), one of the most popular wallets on the internet, was hit by a DNS hack that saw some users lose their cryptocurrency.

MEW said in a statement that “a couple of Domain Name System registration servers were hijacked around 12PM UTC 24 April to redirect users to a phishing site.” Not all visitors to the site during the hijack were impacted, but MEW said that “a majority” of those who were had been using Google’s DNS.

“We are currently in the process of verifying which servers were targeted to help resolve this issue as soon as possible,” the company added, confirming that it has since secured its website. The company recommends those who had used Google DNS to switch to Cloudflare’s.

Wikipedia, country-specific versions of Microsoft, Google and PayPal and even banks have been hit by similar attacks before.

An incident like this doesn’t compromise the site directly, but, in the case of MEW, it led some users of the service to insecure websites that aren’t MEW. From there, those who entered private key information without realizing they had been phished risked having their data snagged by the attackers on the other side. With that information, the attackers could gain access to their account and drain its contents. (Note: This is a very good reason why people are advised to never enter private keys manually, and why secure hardware is highly recommended.)

It’s hard to quantify the impact of an attack like this because MEW is such a well-used and trusted service, and the company said it is still gathering information on exactly what happened.

Coindesk reports that $150,000, or 216 Ether, was taken, but the figure is likely higher. One fraud tracker identified two wallets (here and here) used in the attack, and they lead to what looks like a holding wallet (here) that collected more than 520 Ether today. That would be around $365,000 at today’s price of $700 per ETH.
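As a quick sanity check, those dollar figures follow from the cited price of roughly $700 per ETH:

```python
ETH_USD = 700  # approximate price cited above

coindesk_figure = 216 * ETH_USD   # roughly the ~$150,000 Coindesk reported
holding_wallet = 520 * ETH_USD    # roughly the ~$365,000 in the holding wallet
print(coindesk_figure, holding_wallet)
```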

The actual amount taken could be higher still. The holding wallet leads to a larger wallet, which has a balance of more than $17 million in Ether and a constant stream of incoming transactions. That’s not to say that $17 million was stolen — that isn’t likely — but the attackers could be using other wallets which haven’t yet been tracked but eventually lead to this larger one.

Beyond using hardware like Trezor or Ledger, crypto wallet users — well, internet users in general — should check that a website’s SSL certificate (shown to the left of the domain name in the browser bar) is valid when they are dealing with private information.

That’s the message that MEW gave to its community.

“Users, PLEASE ENSURE there is a green bar SSL certificate that says “MyEtherWallet Inc” before making any transactions. We advise users to run a local (offline) copy of the MEW (MyEtherWallet). We urge users to use hardware wallets to store their cryptocurrencies,” it said in a Reddit statement.
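That certificate check can also be done programmatically. A minimal sketch, using the subject format Python’s `ssl.SSLSocket.getpeercert()` returns; the sample certificate dict below is illustrative only, not MEW’s actual certificate:

```python
def cert_organization(peercert):
    """Return the subject organizationName from a certificate dict in
    the format produced by ssl.SSLSocket.getpeercert(), or None."""
    for rdn in peercert.get("subject", ()):
        for key, value in rdn:
            if key == "organizationName":
                return value
    return None

# Illustrative subject only; an EV certificate carries the company name.
sample = {"subject": ((("countryName", "US"),),
                      (("organizationName", "MyEtherWallet Inc"),),
                      (("commonName", "www.myetherwallet.com"),))}
print(cert_organization(sample))
```

A phishing clone served over a hijacked DNS entry could present a valid certificate for some other entity, but not one naming “MyEtherWallet Inc”, which is why MEW points users at the green-bar organization name.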

Those looking for an alternative to MEW could turn to MyCrypto, which was started in February by a former MEW co-founder and offers a similar service. Neither site holds users’ crypto or information; instead, they allow the checking of accounts and enable transactions to be sent to the blockchain, after which they are ferried on to the intended recipient.

Disclosure: The author owns a small amount of cryptocurrency. Enough to gain an understanding, not enough to change a life.

Bose acquires Andrew Mason’s walking tour startup, Detour

Groupon founder Andrew Mason’s audio tour startup Detour has been sold to Bose. The acquisition, which involves only the software and tour content — not the team — was quietly announced on Detour’s blog a few days ago, followed by an email to customers. At first glance, Bose seems like an unlikely acquirer for an app designed to help people discover a city through narrated walking tours. But its interest in the product has to do with its upcoming AR platform, which centers on audio experiences delivered through a pair of sensor-laden glasses.

Bose is now “actively looking for a partner to host the Detour content,” and make it available to its customers, including those on Bose AR. The Detour app itself will soon shut down.

Mason says he may help Bose a bit in the process of finding that third party, but his focus is on his new company, Descript.

Detour launched a few years ago, entirely self-funded by Mason. Its goal was to offer tourists and locals alike a way to discover a city’s hidden gems — the off-the-beaten-track shops and alleys that other tours overlook. The service arrived publicly with tours in San Francisco in 2015, before later expanding to other markets, including international destinations, all available as in-app purchases.

The app, at the time of sale, had around 120 available tours.

A tour of the Marina’s sweets shops in Detour, narrated by a German philosopher

As part of the creation of its tours, Detour had developed some interesting technology — like a tool to transcribe audio that lets you edit the audio file by editing the written transcription, and a way to add music and sound to a narrative by adding it to the transcription.

This technology has now been spun off as a new startup, Descript. The Detour team, including Mason, have been working on Descript for around six months now. Descript, which aims to make editing sound files as easy as editing a Word document, launched in December with $5 million in funding from Andreessen Horowitz.

Given Mason’s current focus, it’s not surprising that Detour is shutting down. But it is a little surprising that it found an acquirer.

The app was never able to gain a sizable following on the scale of other travel guides. (It had been ranking in the 400s to 700s in the App Store’s “Travel” category as of late — meaning, practically invisible.) However, its tours were unique and interesting and had been designed with features others at the time lacked — like location awareness or the ability to sync with multiple people in a group, for example.

The Detour app will remain available until May 31, 2018, and all tours will be free through then. Afterwards, the app will be removed from the App Store.

“Thank you to the producers, engineers, designers, and storytellers that made Detour what it is over the last four years. I’m excited to see where Bose takes it,” wrote Mason, on Detour’s blog.

PitchBook claims Detour had raised funding, but Mason says that’s incorrect.

“Detour is self-funded (by me) and we never disclosed how much,” he says. But he did confirm that Mihir Shah, a friend, had invested “some token number of thousands of dollars in the very beginning,” which is why the investment is listed on Shah’s LinkedIn.

Deal terms were not available, but it was likely a small exit.

It’s unclear when Detour would arrive on Bose AR, as Bose is still in the process of finding a third party to continue with Detour, and hasn’t yet shipped test builds of its AR glasses to developers.

Instacart now suggests 5% tip default

Instacart has revamped its checkout process to nudge customers toward tipping: the app now suggests a 5 percent tip by default.

If someone wants to leave more, or less, there are still options to tip nothing at all, 10 percent, 15 percent, 20 percent and other amounts.

Instacart has had a rocky relationship over the years with its drivers and shoppers. In 2016, Instacart removed the option to tip in favor of guaranteeing higher delivery commissions. About a month later, following pressure from shoppers, the company reintroduced tipping.

“After announcing this change, we heard a lot of feedback from our shopper community,” the company said in a blog post at the time. “While our shoppers liked most of the changes, they did not like the fact that we were removing tips from our online platform. Taking that feedback into account, we have decided to continue to accept tips as part of this change.”

In addition to putting tips more front and center, Instacart also changed its service fee from a 10 percent waivable fee to a 5 percent fixed fee.

Just earlier this month, Instacart raised $150 million in funding, valuing the company at $4.35 billion.

Twilio adds support for LINE

The developer-centric communications platform Twilio today announced that it has added support for LINE to Twilio Channels. With this, Twilio developers now have the ability to reach users on this service, which has 168 million monthly active users, most of whom live in Japan, Thailand, Taiwan and Indonesia. LINE support in Twilio Channels is currently in beta but open to all developers who want to give it a try.

The addition means Twilio Channels, which allows for sending and receiving messages, now supports many of the most popular messaging platforms, ranging from Facebook Messenger and Slack to WeChat, Kik and the new RCS text messaging standard. Missing from the list are the likes of WhatsApp and Snapchat, though neither offers an API that Twilio could easily integrate.
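For context on what “reaching users” through a channel looks like: Twilio’s Messages REST endpoint takes form-encoded `To`/`From`/`Body` parameters, with non-SMS channels addressed via a scheme prefix (e.g. `messenger:` for Facebook Messenger). A hypothetical sketch — the `line:` prefix, the account SID and the user IDs here are illustrative assumptions, not taken from Twilio’s documentation:

```python
from urllib.parse import urlencode

# Hypothetical account SID; the Messages endpoint path is Twilio's real REST
# API shape, but every value below is a placeholder.
ACCOUNT_SID = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
MESSAGES_URL = f"https://api.twilio.com/2010-04-01/Accounts/{ACCOUNT_SID}/Messages.json"

# Channel users are addressed with a scheme prefix; "line:" is assumed here
# by analogy with the documented "messenger:" prefix.
payload = urlencode({
    "To": "line:U1234567890abcdef",   # placeholder LINE user ID
    "From": "line:YOUR_CHANNEL_ID",   # placeholder sender identity
    "Body": "Hello from Twilio Channels",
})

print(payload)  # form-encoded body an HTTP POST to MESSAGES_URL would carry
```

In practice developers would send this via Twilio’s helper libraries rather than building the request by hand.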

Unsurprisingly, the LINE support also extends to Twilio Studio, the company’s drag-and-drop app builder, and Flex, Twilio’s recently announced contact center solution.

“The most successful organizations realize that delivering a seamless, elegant experience for customers on their preferred channels is a way to differentiate,” said Patrick Malatack, vice president and general manager of Messaging at Twilio in today’s announcement. “When developers use Twilio to build these experiences – they trust that they will be able to use one API, now and in the future, to support the communication channels their customers want to use. We are thrilled to add support for LINE to the Twilio platform and can’t wait to see what our customers build.”