Hitlist’s new premium service puts a travel agent in your pocket

Hitlist, a several-years-old app for finding cheap flights, has begun rolling out a subscription tier that will turn it into something more akin to your own mobile travel agent. While the core app experience, which monitors airlines for flight deals, will continue to be free, the new premium upgrade will unlock a handful of other useful features, including advanced filtering, exclusive members-only fares and even custom travel advice from the Hitlist team.

The idea, says founder and CEO Gillian Morris, harks back to what inspired her to create Hitlist in the first place.

“Going back to the very beginning, Hitlist was essentially me giving travel advice to friends,” she says. “People had the time, inclination, and money to travel, but didn’t book because they got lost in the search process. When I sent custom advice, like ‘you said you wanted to go to Istanbul, there are $500 direct round trips in May available right now, that’s a good price and the weather will be good and the tulip festival, this unique cultural experience, will be happening’ — 4 out of 5 people would book,” Morris explains.

“I wouldn’t be able to scale that level of advice at the beginning, so we focused on just the flight deals. But now we have four years’ worth of data that we can learn from — browsing and searching within Hitlist — and we can start to build more sophisticated models that will inspire and enable people to travel at scale,” she says.

The new subscription feature will offer users the ability to filter airline deals by things like the carrier, number of stops and the time of day of both the departure and return.
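Filters like these amount to straightforward predicate logic over fare records. A minimal sketch of that kind of filtering; the `Fare` fields and filter parameters here are invented for illustration, not Hitlist's actual data model:

```python
from dataclasses import dataclass

@dataclass
class Fare:
    carrier: str
    stops: int
    depart_hour: int   # 0-23, local time of outbound departure
    return_hour: int   # 0-23, local time of return departure
    price: float

def matches(fare, carriers=None, max_stops=None, depart_window=None, return_window=None):
    """Return True if a fare passes every filter the user has set (None = no filter)."""
    if carriers is not None and fare.carrier not in carriers:
        return False
    if max_stops is not None and fare.stops > max_stops:
        return False
    if depart_window is not None and not (depart_window[0] <= fare.depart_hour <= depart_window[1]):
        return False
    if return_window is not None and not (return_window[0] <= fare.return_hour <= return_window[1]):
        return False
    return True

fares = [
    Fare("Delta", 0, 9, 18, 520.0),
    Fare("Spirit", 1, 6, 23, 310.0),
]
# Nonstop flights departing in the morning, any carrier
nonstop_morning = [f for f in fares if matches(f, max_stops=0, depart_window=(7, 12))]
```

Each user-set criterion just narrows the candidate list; leaving a parameter as `None` skips that filter entirely.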

It’s also working with airlines to market “closed group” fares that aren’t accessible through flight search engines, but are available to select travel agents and other resellers that market to a closed user group. These will be flagged in the app as “members-only” fares.

Hitlist says it’s currently working with one airline and, through a third party, with several more. But because this is still in a pilot phase and is only live with select users, it can’t say which.

Meanwhile, the app will continue to focus on helping users find low-cost fares — not only by tracking deals, but also by bundling low-cost carriers and traditional airlines. (Kayak calls these “hacker fares.”) However, it won’t promote dates that are likely to be cancelled by airlines, nor will it venture into legally gray areas like skipping legs of a flight (as Skiplagged does) to find cheaper fares. So it’s not a one-stop shop for the determined bargain hunter.

Beyond just finding cheap flights — which remains a competitive space — Hitlist aims to offer users a more personalized experience, more like what you would have gotten with a travel agent in the past.

For starters, it developed a proprietary machine learning algorithm that sorts through more than 50 million fares’ worth of data per day to find deals that appeal to each individual user. It also learns from how you use it — browsing flights, or how you react to alerts, for example.
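Hitlist hasn't published its model, but personalized deal ranking typically reduces to scoring each fare against weights learned from a user's behavior. A toy sketch of that idea; the features, weights and deal records are invented for illustration:

```python
# Toy preference scoring: each user has learned weights over simple deal
# features, and deals are ranked by a weighted sum. A real system would
# learn these weights from browsing history and alert responses.
def score(deal, weights):
    return sum(weights.get(feature, 0.0) * value
               for feature, value in deal["features"].items())

# Hypothetical learned profile: this user responds to big discounts,
# likes beach destinations, and is slightly averse to long-haul flights.
user_weights = {"discount_pct": 2.0, "beach": 1.5, "long_haul": -0.5}

deals = [
    {"city": "Cancun", "features": {"discount_pct": 0.4, "beach": 1.0}},
    {"city": "Tokyo",  "features": {"discount_pct": 0.6, "long_haul": 1.0}},
]
ranked = sorted(deals, key=lambda d: score(d, user_weights), reverse=True)
```

The "learning" the article describes then amounts to nudging the weights whenever the user books, taps or ignores an alert.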

“The app gets to know you better over time, just like a human travel agent would,” says Morris. “With the premium upgrade, we’re gaining more insight to the traveler’s preferences that helps us to develop even more sophisticated A.I. to provide advice and make sure you’re getting the best deal.”

Or, simply put, Hitlist over time will suggest things based on what it thinks you might like, just like any ol’ personalized service now does.

When you find a flight you like, Hitlist will direct you over to a partner’s site — like the airline or online travel agency such as CheapOair.

Where the app differs from others also trying to replace the travel agent — like Lola, Pana or Hyper — is that Hitlist doesn’t offer a chat interface. Morris feels that ultimately, travelers don’t want to talk to a chatbot — they just want to browse and discover, then have an experience that’s tailored for them as the app gets smarter about what they like.

But consumer sentiment around chatbots won’t necessarily be negative forever. While the original chatbots were arguably bad, advances in A.I. may see them improve over time. At some point, they may be nearly as useful as phoning a travel agent for help, and Hitlist’s decision to forgo a chat interface could be called into question.

Instead of chat, Hitlist offers editorially curated suggestions, which can be as broad as “escape to Mexico” or as weird and quirky as “best cities to find wild kittens.” (Yes really.)

Hitlist will also help travelers by offering a variety of travel advice to help them make a decision — similar to how Morris used to advise her friends. For example, it might suggest the best days to fly (similar to Google Flights or Hopper), or tell you about the baggage fees, or even what sort of events might be happening at a destination.

From my experience as a user, the app is straightforward and simple to use, and can easily serve as a place for travel inspiration and discovery. It’s also a fun utility for marking off where you’ve been and where you want to go, bucket-list style, and then keeping an eye on prices. But there are a ton of tools for cheap flight shopping, so you shouldn’t book through Hitlist without checking around to ensure it’s the best deal.

Since its launch, Hitlist has grown to more than a million mostly millennial travelers, who have collectively saved over $25 million on their flights by booking at the right time, the company claims.

The new subscription plan is live now on iOS as an in-app purchase for $4.99 per month, with better rates for quarterly and annual subscriptions ($4/mo and $3/mo, respectively). It will roll out on Android later in the year.

Navigating the risks of artificial intelligence and machine learning in low-income countries

On a recent work trip, I found myself in a swanky-but-still-hip office of a private tech firm. I was drinking a freshly frothed cappuccino, eyeing a mini-fridge stocked with local beer and standing amidst a group of hoodie-clad software developers typing away diligently at their laptops against a backdrop of Star Wars and xkcd comic wallpaper.

I wasn’t in Silicon Valley: I was in Johannesburg, South Africa, meeting with a firm that is designing machine learning (ML) tools for a local project backed by the U.S. Agency for International Development.

Around the world, tech startups are partnering with NGOs to bring machine learning and artificial intelligence to bear on problems that the international aid sector has wrestled with for decades. ML is uncovering new ways to increase crop yields for rural farmers. Computer vision lets us leverage aerial imagery to improve crisis relief efforts. Natural language processing helps us gauge community sentiment in poorly connected areas. I’m excited about what might come from all of this. I’m also worried.

AI and ML have huge promise, but they also have limitations. By nature, they learn from and mimic the status quo — whether or not that status quo is fair or just. We’ve seen AI or ML’s potential to hard-wire or amplify discrimination, exclude minorities or just be rolled out without appropriate safeguards — so we know we should approach these tools with caution. Otherwise, we risk these technologies harming local communities, instead of being engines of progress.

Seemingly benign technical design choices can have far-reaching consequences. In model development, trade-offs are everywhere. Some are obvious and easily quantifiable — like choosing to optimize a model for speed versus precision. Sometimes it’s less clear. How you segment data or choose an output variable, for example, may affect predictive fairness across different sub-populations. You could end up tuning a model to excel for the majority while failing for a minority group.


These issues matter whether you’re working in Silicon Valley or South Africa, but they’re exacerbated in low-income countries. There is often limited local AI expertise to tap into, and the tools’ more troubling aspects can be compounded by histories of ethnic conflict or systemic exclusion. Based on ongoing research and interviews with aid workers and technology firms, we’ve learned five basic things to keep in mind when applying AI and ML in low-income countries:

  1. Ask who’s not at the table. Often, the people who build the technology are culturally or geographically removed from their customers. This can lead to user-experience failures like Alexa misunderstanding a person’s accent. Or worse. Distant designers may be ill-equipped to spot problems with fairness or representation. A good rule of thumb: If everyone involved in your project has a lot in common with you, then you should probably work hard to bring in new, local voices.
  2. Let other people check your work. Not everyone defines fairness the same way, and even really smart people have blind spots. If you share your training data, design to enable external auditing or plan for online testing, you’ll help advance the field by providing an example of how to do things right. You’ll also share risk more broadly and better manage your own ignorance. In the end, you’ll probably end up building something that works better.
  3. Doubt your data. A lot of AI conversations assume that we’re swimming in data. In places like the U.S., this might be true. In other countries, it isn’t even close. As of 2017, less than a third of Africa’s 1.25 billion people were online. If you want to use online behavior to learn about Africans’ political views or tastes in cinema, your sample will be disproportionately urban, male and wealthy. Generalize from there and you’re likely to run into trouble.
  4. Respect context. A model developed for a particular application may fail catastrophically when taken out of its original context. So pay attention to how things change in different use cases or regions. That may just mean retraining a classifier to recognize new types of buildings, or it could mean challenging ingrained assumptions about human behavior.
  5. Automate with care. Keeping humans “in the loop” can slow things down, but their mental models are more nuanced and flexible than your algorithm. Especially when deploying in an unfamiliar environment, it’s safer to take baby steps and make sure things are working the way you thought they would. A poorly vetted tool can do real harm to real people.
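Points 2 through 4 above boil down to one habit: slice your evaluation metrics by sub-population instead of trusting a single aggregate number. A minimal sketch, with invented data, of how an aggregate score can hide a failing slice:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: (group, predicted, actual) triples -> per-group accuracy."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

# Invented evaluation set: mostly urban examples, a few rural ones,
# mimicking the skewed samples described above.
records = [
    ("urban", 1, 1), ("urban", 0, 0), ("urban", 1, 1), ("urban", 1, 0),
    ("rural", 0, 1), ("rural", 1, 0),
]
per_group = accuracy_by_group(records)
# Aggregate accuracy is 3/6 = 0.5, but the breakdown shows the model
# is at 0.75 on the urban slice and 0.0 on the rural slice.
```

The same per-group breakdown applies to any metric (precision, recall, calibration), and publishing it is one concrete way to "let other people check your work."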

AI and ML are still finding their footing in emerging markets. We have the chance to thoughtfully construct how we build these tools into our work so that fairness, transparency and a recognition of our own ignorance are part of our process from day one. Otherwise, we may ultimately alienate or harm people who are already at the margins.

The developers I met in South Africa have embraced these concepts. Their work with the nonprofit Harambee Youth Employment Accelerator has been structured to balance the perspectives of both the coders and those with deep local expertise in youth unemployment; the software developers are even foregoing time at their hip offices to code alongside Harambee’s team. They’ve prioritized inclusivity and context, and they’re approaching the tools with healthy, methodical skepticism. Harambee clearly recognizes the potential of machine learning to help address youth unemployment in South Africa — and they also recognize how critical it is to “get it right.” Here’s hoping that trend catches on with other global startups, too.

This family’s Echo sent a private conversation to a random contact

A Portland family tells KIRO news that their Echo recorded and then sent a private conversation to someone on its list of contacts without telling them. Amazon called it an “extremely rare occurrence.” (And provided a more detailed explanation, below.)

Portlander Danielle said that she got a call from one of her husband’s employees one day telling her to “unplug your Alexa devices right now,” and suggesting she’d been hacked. He said that he had received recordings of the couple talking about hardwood floors, which Danielle confirmed.

Amazon, when she eventually got hold of the company, had an engineer check the logs, and he apparently confirmed that their account was true. In a statement, Amazon said, “We investigated what happened and determined this was an extremely rare occurrence. We are taking steps to avoid this from happening in the future.”

What could have happened? It seems likely that the Echo’s voice recognition service misheard something, interpreting it as instructions to record the conversation like a note or message. And then it apparently also misheard them say to send the recording to this particular person. And it did all this without saying anything back.

The house reportedly had multiple Alexa devices, so it’s also possible that the system decided to ask for confirmation on the wrong device — saying “All right, I’ve sent that to Steve” on the living room Echo because the users’ voices carried from the kitchen. Or something.

Naturally no one expects to have their conversations sent out to an acquaintance, but it must also be admitted that the Echo is, fundamentally, a device that listens to every conversation you have and constantly sends that data to places on the internet. It also remembers more stuff now. If something does go wrong, “sending your conversation somewhere it isn’t supposed to go” seems a pretty reasonable way for it to happen.

Update: I asked Amazon for more details on what happened, and after this article was published it issued the following explanation, which more or less confirms how I suspected this went down:

Echo woke up due to a word in background conversation sounding like “Alexa.” Then, the subsequent conversation was heard as a “send message” request. At which point, Alexa said out loud “To whom?” At which point, the background conversation was interpreted as a name in the customer’s contact list. Alexa then asked out loud, “[contact name], right?” Alexa then interpreted background conversation as “right.” As unlikely as this string of events is, we are evaluating options to make this case even less likely.
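Amazon's explanation describes a confirmation dialog in which every prompt happened to be "answered" by coincidental background speech. Sketched as a simple step sequence; this is a hypothetical illustration, not Amazon's actual pipeline:

```python
# Each dialog step pairs a prompt with a check on what the device thinks
# it heard. If background chatter happens to satisfy every check in
# order, the message sends with no deliberate user input at all.
def run_dialog(heard):
    steps = [
        ("wake word?", lambda h: "alexa" in h),
        ("intent?",    lambda h: "send message" in h),
        ("to whom?",   lambda h: h.strip() != ""),   # any name-like sound
        ("confirm?",   lambda h: "right" in h or "yes" in h),
    ]
    for (prompt, accept), utterance in zip(steps, heard):
        if not accept(utterance):
            return "aborted at: " + prompt
    return "message sent"

# Background conversation that coincidentally matches every step
background = ["...alexa...", "...send message...", "steve", "...right..."]
outcome = run_dialog(background)
```

The structural weakness this illustrates is that each check gates only on the audio, with no signal that the user is actually addressing the device.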

Reddit adds a desktop night mode as it continues rolling out major redesign

For one of the most visited websites on the web, Reddit has rocked a notoriously basic design for much of its existence. The site is in the process of slowly rolling out a major desktop redesign, and today the company announced that part of this upgrade will be native support for night mode.

Night mode will likely be a popular feature for the desktop site that seems to have a core group of users that never sleep. Reddit’s mobile apps have notably had a native night mode for a while already.

While night mode itself isn’t likely to be controversial, some Redditors already seem resistant to the broader redesign. Nevertheless, I’ve found it to be a pretty friendly upgrade (classic view is still the best) that gels with the surprisingly great mobile apps the company has continued to update. Reddit’s recent heavy integration of native ads is only more apparent in the new design, which is understandably frustrating a lot of users, though it’s surprising the ad-lite good times lasted as long as they did.

You can access the night mode feature with a toggle in the username dropdown menu in the top-right corner of the site.

And the winner of Startup Battlefield Europe at VivaTech is… Wingly

At the very beginning, there were 15 startups. After a morning of incredibly fierce competition, we now have a winner.

Startups participating in the Startup Battlefield were all hand-picked for our highly competitive contest. They all presented in front of multiple groups of VCs and tech leaders serving as judges for a chance to win €25,000 and an all-expense-paid trip for two to San Francisco to participate in the Startup Battlefield at TechCrunch’s flagship event, Disrupt SF 2018.

After much deliberation, TechCrunch editors pored over the judges’ notes and narrowed the list down to five finalists: Glowee, IOV, Mapify, Wakeo and Wingly.

These startups made their way to the finale to demo in front of our final panel of judges, which included: Brent Hoberman (Founders Factory), Liron Azrielant (Meron Capital), Keld van Schreven (KR1), Roxanne Varza (Station F), Yann de Vries (Atomico) and Matthew Panzarino (TechCrunch).

And now, meet the Startup Battlefield Europe at VivaTech winner.

Winner: Wingly

Wingly is a flight-sharing platform that connects pilots and passengers. Private pilots can add flights they have planned, then potential passengers can book them.

Runner-Up: IOV

IOV is building a decentralized DNS for blockchains. By implementing the Blockchain Communication Protocol, the IOV Wallet will be the first wallet that can receive and exchange any kind of cryptocurrency from a single address of value.

Some low-cost Android phones shipped with malware built in

Avast has found that many low-cost, non-Google-certified Android phones shipped with a strain of malware built in that could push users to download apps they didn’t intend to install. The malware, called Cosiloon, overlays advertisements on the operating system in order to promote apps or even trick users into downloading them. Affected devices shipped from ZTE, Archos and myPhone.

The app consists of a dropper and a payload. “The dropper is a small application with no obfuscation, located on the /system partition of affected devices. The app is completely passive, only visible to the user in the list of system applications under ‘settings.’ We have seen the dropper with two different names, ‘CrashService’ and ‘ImeMess,’” wrote Avast. The dropper then connects with a website to grab the payloads that the hackers wish to install on the phone. “The XML manifest contains information about what to download, which services to start and contains a whitelist programmed to potentially exclude specific countries and devices from infection. However, we’ve never seen the country whitelist used, and just a few devices were whitelisted in early versions. Currently, no countries or devices are whitelisted. The entire Cosiloon URL is hardcoded in the APK.”
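Based on Avast's description, the dropper's logic is roughly: fetch an XML manifest over plain HTTP, consult the country/device whitelist, then install the listed payloads. A hypothetical reconstruction of the whitelist check; the tag names and structure here are invented, since Avast did not publish the manifest schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical manifest resembling what Avast describes: a list of
# payloads to install, plus a whitelist that can exclude specific
# countries (or devices) from infection.
MANIFEST = """
<manifest>
  <whitelist>
    <country>US</country>
  </whitelist>
  <payload url="http://example.invalid/adware.apk"/>
</manifest>
"""

def should_infect(manifest_xml, country):
    """Whitelisted countries are skipped; everyone else is targeted."""
    root = ET.fromstring(manifest_xml)
    excluded = {c.text for c in root.findall("./whitelist/country")}
    return country not in excluded

def payload_urls(manifest_xml):
    """Payload APKs the dropper would fetch and install."""
    root = ET.fromstring(manifest_xml)
    return [p.get("url") for p in root.findall("./payload")]
```

Note the inversion from normal allowlist semantics: here the "whitelist" marks who is spared, which Avast says was effectively empty in current versions, so everyone was targeted.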

The dropper is part of the system’s firmware and is not easily removed.

To summarize:

  - The dropper can install application packages defined by the manifest, downloaded via an unencrypted HTTP connection, without the user’s consent or knowledge.
  - The dropper is preinstalled somewhere in the supply chain, by the manufacturer, OEM or carrier.
  - The user cannot remove the dropper, because it is a system application, part of the device’s firmware.

Avast can detect and remove the payloads, and it recommends following these instructions to disable the dropper. If the dropper spots antivirus software on your phone, it will stop pushing notifications, but it will still recommend downloads as you browse in your default browser, a gateway to grabbing more (and worse) malware. Engadget notes that this vector is similar to Lenovo’s “Superfish” incident, in which thousands of computers shipped with adware built in.

Uber in fatal crash detected pedestrian but had emergency braking disabled

The initial report by the National Transportation Safety Board on the fatal self-driving Uber crash in March confirms that the car detected the pedestrian as early as 6 seconds before the crash, but did not slow or stop because its emergency braking systems were deliberately disabled.

Uber told the NTSB that “emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior,” in other words, to ensure a smooth ride. “The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.” It’s not clear why the emergency braking capability even exists if it is disabled while the car is in operation. The Volvo model’s built-in safety systems — collision avoidance and emergency braking, among other things — are also disabled while in autonomous mode.

It appears that in an emergency situation like this, the “self-driving car” is no better than, and possibly substantially worse than, many normal cars already on the road.

It’s hard to understand the logic of this decision. An emergency is exactly the situation when the self-driving car, and not the driver, should be taking action. Its long-range sensors can detect problems accurately from much farther away, while its 360-degree awareness and route planning allow it to make safe maneuvers that a human would not be able to do in time. Humans, even when their full attention is on the road, are not the best at catching these things; relying only on them in the most dire circumstances that require quick response times and precise maneuvering seems an incomprehensible and deeply irresponsible decision.

According to the NTSB report, the vehicle first registered Elaine Herzberg on lidar six seconds before the crash — at the speed it was traveling, that puts first contact at about 378 feet away. She was first identified as an unknown object, then a vehicle, then a bicycle, over the next few seconds (it isn’t stated when these classifications took place exactly).
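The 378-foot figure follows directly from the detection window and the vehicle's speed (roughly 43 mph, per the NTSB report), and the same arithmetic gives the distance at the 1.3-second mark discussed below:

```python
MPH_TO_FPS = 5280 / 3600           # 1 mph = ~1.467 ft/s

speed_fps = 43 * MPH_TO_FPS        # ~63 ft/s at roughly 43 mph
first_detection_ft = speed_fps * 6.0    # lidar first registered her 6 s out
braking_decision_ft = speed_fps * 1.3   # "emergency braking needed" 1.3 s out

# ~378 ft and ~82 ft respectively, matching the distances in the text.
```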

The car following the collision

During these six seconds, the driver could and should have been alerted of an anomalous object ahead on the left — whether it was a deer, a car or a bike, it was entering or could enter the road and should be attended to. But the system did not warn the driver and apparently had no way to.

Then, 1.3 seconds before impact, which is to say about 80 feet away, the Uber system decided that an emergency braking procedure would be necessary to avoid Herzberg. But it did not hit the brakes, as the emergency braking system had been disabled, nor did it warn the driver because, again, it couldn’t.

Only in the last second before impact did the driver happen to look up from whatever she was doing and see Herzberg, whom the car had known about in some way for five long seconds by then. The car struck and killed her.

It reflects extremely poorly on Uber that it disabled the car’s ability to respond in an emergency, even as the vehicle was authorized to travel at speed at night, and provided no method for the system to alert the driver should it detect something important. This isn’t just a safety oversight, like going on the road with a sub-par lidar system or without checking the headlights; it’s a failure of judgment by Uber, and one that cost a person’s life.

Arizona, where the crash took place, barred Uber from further autonomous testing, and Uber yesterday ended its program in the state.

Uber offered the following statement on the report:

Over the course of the last two months, we’ve worked closely with the NTSB. As their investigation continues, we’ve initiated our own safety review of our self-driving vehicles program. We’ve also brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture, and we look forward to sharing more on the changes we’ll make in the coming weeks.

Facebook and Instagram launch US political ad labeling and archive

Facebook today revealed that it’s chosen not to shut down all political ads, because doing so could unfairly favor incumbents over challengers who lack the resources to buy pricey TV ads. Instead, it’s now launching its previously announced “paid for by” labels on political and issue ads on Facebook and Instagram in the U.S., along with a publicly searchable archive of all these politics-related ads that run in the U.S. That includes ads run by news publishers or others that promote articles with political content.

The labeling won’t just apply to candidate and election ads, but those dealing with political issues such as “abortion, guns, immigration or foreign policy.” Clicking through the labels that appear at the top of these News Feed ads will lead to the archive, which isn’t backdated and will only include ads from early May 2018 and after. The archive will hold them for seven years so they can be searched by keyword or the Page that ran them. It also will display the ad’s budget, and the number of people who saw it, plus aggregated, anonymized data on their age, gender and location.

Any advertiser that wants to run political ads must now go through Facebook’s authorization process that requires them to reveal their identity and location, and advertisers will only have a week’s grace period starting today before those unauthorized will have their ads paused. Facebook plans to monitor political ads with a combination of artificial intelligence and 3,000 to 4,000 newly hired ad reviewers as part of its doubling of its security team from 10,000 to 20,000 this year.

The reviewers and AI will analyze these ads’ images, text and the outside websites to which they point to look for political content. They’ll seek to avoid bias in classification by following guidelines on what constitutes one of 20 political issues from the decades-running Comparative Agendas Project. Users also may report unlabeled ads, which will then be reviewed, paused and archived if they’re deemed political. Their buyer will then be required to go through the authorization process before they can buy more.

A look at ads run by Donald Trump’s official page inside Facebook’s new political ad archive

As part of its work with a new commission investigating social media’s impact on elections, Facebook plans to provide a database, available via a forthcoming API, that will let watchdog groups, academics and researchers review how ads are being used. These tools will open to other countries in the coming months, and Facebook plans to make all ads visible to everyone through a tool launching in June that’s now testing in Ireland and Canada.

Facebook’s chief product officer Chris Cox writes that “We hope that in aggregate these changes will be a big step to improve the quality of civic engagement in our products, and to keep the public discourse strong.”

Facebook held a conference call to discuss the launch with reporters this morning. Unfortunately, it was timed to end just 15 minutes before the news went out, limiting journalists’ ability to write timely, in-depth coverage.

Concerns with Facebook’s push for ad transparency

While the labels and archive are good steps toward transparency, there are still a number of problems with the program. Most specifically, the political action committees and organizations that often fund political ads can have confusing or misleading names that obscure their true purpose. Simply listing those organizations in the Paid For By labels or archive won’t necessarily give users a lot of information about who the people behind the money are unless they’re willing to go digging across the internet themselves.

An example of a “Paid for by” label on an Instagram ad

For example, the notorious conservative political donors the Koch brothers funnel cash through a PAC called Prosperity Action to fund Republican candidates like Paul Ryan. Seeing an ad was paid for by Prosperity Action wouldn’t immediately inform most Americans. On the other side, ads to displace Paul Ryan have been bought by a Page called Stand Up America, which many might not immediately know is an anti-Trump group. If Facebook wants to truly give citizens a better understanding of where these political ads come from, it needs to add more info about the donors and political leanings behind PACs and other big spenders.

[Update: After requesting clarification about exactly who and what should appear in the “paid for by” section of ads and the archive, a Facebook spokesperson told me that the Page admin who purchases the ad chooses who to disclose as having paid. Facebook requires that this disclosure info be complete and accurate, and that advertisers follow applicable laws. But that still seems to allow advertisers to cite some shell organization or donor group name that could obscure where the money really comes from.]

Another issue is who will have access to the archive API, since the Cambridge Analytica scandal all started with an academic researcher accessing Facebook data.

One interesting new learning from today is that news publishers’ articles that deal with political issues and are promoted in ads will need the disclosures too. “Any ad that has political content on Facebook going forward will require authorization, labeling, and archiving regardless of who’s running it,” said Facebook Director of Public Policy Steve Satterfield, who notes Facebook is in dialogue with different ad buyers “including news publishers.” That might seem like overkill if The Wall Street Journal promotes a story regarding one of President Trump’s policy changes in hopes of adding subscribers, not supporting him, but the rise of highly partisan, Facebook-native news sources necessitates that there be no loophole for avoiding labeling.

“We won’t always get it right. We know we’ll miss some ads and in other cases we’ll identify some we shouldn’t,” writes Satterfield and Facebook’s Global Politics and Government Outreach Director Katie Harbath. But Harbath described on the call how even though all the monitoring of political ads will cost more than the revenue the company earns from them, Facebook felt it necessary to “make sure people have a way to express themselves and engage in political discourse in a transparent way.”


Self-policing in this manner could reduce the urgency of calls to pass the Honest Ads Act, which was unveiled last year to bring online advertising disclosures in line with those for television, though Congress has yet to hold a hearing on it.

“These changes won’t fix everything, but they will make it a lot harder for anyone to do what the Russians did during the 2016 election and use fake accounts and pages to run ads,” CEO Mark Zuckerberg concluded. “I hope they’ll also raise the bar for all political advertising online.”

These are the exact kind of tools and labels Facebook should have offered as soon as it began touting its ability to influence elections with its ads more than a half decade ago. But with the mid-term elections approaching alongside races around the world, they’re better late than never.

Google opens its G Suite for Education to home-school co-ops

Google today announced it is changing the eligibility guidelines of its free G Suite for Education service to include home-school co-ops. Parents and teachers who run home-school co-ops will be able to sign up for it in the coming weeks.

G Suite for Education includes all of Google’s usual online productivity tools and then layers a number of education-specific services like Classroom on top of that. Google Classroom, it’s worth noting, was already available to any G Suite user, but to subscribe to G Suite for Education, you needed to be affiliated with a school or school district. Now, home-school co-ops will be able to verify their status and get access to G Suite for Education, too.

“Through technology, home-school co-op teachers can set and change assignments on the fly, students can work together even if geographically separated, and everyone has a common format for collaboration,” writes Darren Jones of the Home School Legal Defense Association, in today’s announcement. “It’s because of this potential that I’ve been working closely with Google this year to make sure that home-school co-ops have the same access as other schools to G Suite for Education.”

Google has piloted this program with a number of co-ops in recent months. Given that these groups function a bit like traditional schools, with some being more formal than others, I can see how access to a shared and integrated set of tools would be useful there.

Dog-sitting startup Rover just raised $155M

Rover, a dog-walking and dog-boarding service that merged with DogVacay around this time last year, has become the second such startup this year to raise a massive new round of funding, announcing $155 million in fresh financing.

While competitor Wag has become a juggernaut, there seems to be room for a second player, and perhaps even the potential to outmaneuver Wag despite its massive influx of capital. DogVacay and Rover had very similar models and eventually merged in an all-stock deal, creating a more substantial competitor for Wag. The new round consists of $125 million in equity financing led by funds and accounts advised by T. Rowe Price Associates, along with a $30 million credit facility from Silicon Valley Bank. The Wall Street Journal reports that the round values Rover at $970 million.

Wag earlier this year picked up $300 million in a massive funding round led by SoftBank, the firm that is pouring massive piles of capital into startups and pretty much altering the calculus of venture capital in the process. That deal also signaled a huge interest in dog-care services, apparently including Rover, as a potential business opportunity serving the millions of dog owners in the world. Walk anywhere in San Francisco and you’re bound to run into a very large number of very good dogs, so it makes sense that there should be an opportunity to capitalize on dog ownership as a whole.

Rover connects dog owners with sitters who will walk, board, or generally take care of their dogs, a critical service for anyone who might be traveling or who works in a non-dog-friendly office. Users book a dog walker or sitter through the app, which connects them with sitters in their area. It’s an area where Wag has faced a lot of criticism following a major Bloomberg report regarding poor service (and lost dogs). There are, of course, many challenges for any service that, in Uber-like fashion, offloads some kind of daily need to a third party.

Rover, interestingly, notes on its website that it “accepts less than 20% of potential sitters,” perhaps a dig at the criticism of Wag or the space in general, and an attempt to soothe concerns from potential users. Rover says it has more than 200,000 sitters throughout North America. The company previously raised $156 million, and previous investors include A-Grade Investments, Foundry Group, Madrona Venture Group, Menlo Ventures, OMERS Ventures, Petco, and StepStone Group.

Inside Facebook’s anti-sex trafficking hackathon

Tech giants put their rivalries aside for two days this week to code for a common cause: protecting children on the internet. Deep inside Facebook’s Menlo Park headquarters, teams drawn from Uber, Twitter, Google, Microsoft and Pinterest worked through the night to prototype new tools designed to help nonprofits in their fight against child sex trafficking.

Much of their work from Facebook’s third annual child safety hackathon is actually too sensitive to publish. To stay one step ahead of the criminals, the specifics of how these tools track traffickers and missing children across websites must be kept secret. But the resulting products, all donated to NGOs like Thorn and the Internet Watch Foundation, could help tech companies rally a united front against those who’d seek to hurt kids.

“The thing with work on safety and security and fighting abuse is it’s an area where the industry is collaborative,” says Guy Rosen, Facebook’s VP of product management and one of the event’s judges. “Hackathons are a great way to bring people together to actually bootstrap some of these ideas . . . ensuring that the engineers who have the smart ideas can actually understand the pain points and apply that thinking to these problems.”

The winner of 2016’s hackathon has grown into an invaluable resource for groups like the National Center for Missing and Exploited Children. The “child finder” tool matches online photos, like those on escort service listings, to NCMEC’s database of missing children. It helps reduce law enforcement’s response time so they can deploy officers in hopes of rescuing these kids.

Speaking in tech’s language of computer code, Facebook engineering manager Cristian Canton Ferrer described the tool saying, “People affected = 1; magnitude of change = enormous; lasting impact of the change = forever.”

While Facebook has recently been criticized for its dominance in social networking and approach to data privacy, its size affords it the resources to spearhead projects like this. And because it’s already accustomed to hacking on scaled tools, teaming up with NGOs and other web platforms can let the fruits of 10 years of labor around child safety be passed on to those who couldn’t build them themselves.

“It benefits no company if the general perception is that the internet is not a safe place,” says Facebook’s global head of safety Antigone Davis. “All of us have an individual interest as well as the industry’s interest in ensuring that not only people perceive it as a safe place but that it is a safe place.”

Amongst the projects at this year’s hackathon was a way to use machine vision to identify people and other distinguishing features in photos from sites known to be used for sex trafficking. Artificial intelligence can help take some of the burden off human investigators who can be emotionally taxed by constantly viewing images of the exploited.

The winning project, called “Spotting Trends,” uses clustering analysis to keep tabs on traffickers as they move around the internet. Referring to the recent termination of a popular online prostitution marketplace, Rosen told the hackathon attendees that “Backpage coming down is a big event, but the bad guys are still out there.”

The Spotting Trends team wasn’t awarded a giant novelty check or some golden trophy. Instead, they’ll get the opportunity to present their work at the big Dallas Crimes Against Children Conference, which last year drew more than 4,300 professionals from the safety industry.

“The kind of folks that come to this, they’re really motivated and really proud of the work because as internet companies we operate at the scale of hundreds of millions or billions of users. But when you do this work, you hear those individual stories,” Rosen explains. “Just knowing the things we work on have a real impact on real people is what keeps all these people coming every morning and driven to do really good work.”

Davis concludes, “I think theirs is the quiet behind-the-scenes work that doesn’t get championed nearly enough.”

Microsoft’s Twitch rival Mixer gets a revamp, including new developer tools for interactive gameplay

Microsoft is celebrating the one-year anniversary of its game streaming service and Twitch competitor, Mixer, with a host of new features, including a refresh of the user experience and the launch of an expanded developer toolkit called MixPlay. The new streamer tools will roll out along with the revamped version of Mixer.com across desktop and mobile web, and will initially be available to Mixer Pro subscribers.

The company claims the service saw more than 10 million monthly active users in December 2017 – a figure that, we should point out, may have been inflated by holiday sales and the accompanying bump in game downloads and playtime seen across platforms.

However, Microsoft also says that the Mixer viewing audience has grown more than fourfold since launch, and the number of streams watched has grown more than fivefold. These are still not hard numbers, and third-party reports have put Mixer well behind Twitch, which maintains a sizable and still-growing lead in terms of both concurrent streamers and viewers. (Those reports aren’t 100% accurate either, though, because they can’t track Xbox viewership.)

Microsoft says the updated Mixer.com rolls out beginning today, with a focus on making it easier for viewers to find the games and streamers they want to watch, as well as those broadcasting in creative communities.

While Pro subscribers will gain access first, they’ll have to opt in by visiting their Account Settings and turning the new look on manually. (To do so, select the “Site Version” dialog, then the “Feature/UI Refresh” option, Microsoft says.)

The full refresh will arrive to all Mixer users later this summer.

As part of the new experience, the company is also rolling out more tools for developers with the launch of MixPlay.

As Microsoft explains, instead of just adding buttons below a stream, MixPlay lets developers build experiences on top of streams, in panels on the sides of the video, as widgets around the video, or as free-floating overlays – all of which can be designed to mimic the look-and-feel of the streamed content. Basically, this means the entire window is now a canvas, not just a portion of the stream itself.

One example of what MixPlay can enable can be seen in April’s launch of Mixer’s “Share Controller” feature, which created a virtual Xbox controller that could be shared by anyone broadcasting from their Xbox One.

This allowed gamers and viewers to play along in real-time from the web.

In addition, MixPlay will enable other games that are playable only on Mixer, where controls blend into the stream – like Mini Golf, which launched this month and now has 300,000 views, or Truck Stars.

Three new MixPlay-enabled games are launching today, as well, including Earthfall, which lets viewers interact with streamers or even change the game; Next Up Hero, where viewers can either help a streamer by taking control or freeze the streamer at the worst possible moment, depending on their mood; and Late Shift, a choose-your-own-adventure crime thriller you control.

These sorts of MixPlay experiences shift Mixer from being just another game streaming service to one where viewers can actively participate by playing themselves, or at least guiding the action. That could also serve as a differentiator for Mixer as it tries to carve out a niche for itself in the battle with Twitch and YouTube Gaming.

But MixPlay isn’t just for interactive experiences, Microsoft notes. It can also help developers build experiences that simply enhance streams with additional content, like a stats dashboard.

Another update involves the Mixer Create app, which offers mobile support to streamers. Now, streamers can kick off a co-stream by clicking the co-stream button on their Mixer Create profile, then send out invites, among other things.

This is live on Android in beta today, and will launch soon on iOS beta, with a full rollout in early June.

In terms of perks, Microsoft is running an “anniversary” promotion offering $5 of Microsoft Store credit along with any Direct Purchase of $9.99 or more. A second promotion is giving away a free, 1-month channel subscription and up to 90 days of Mixer Pro to anyone who reaches Level 10 on their account between May 24th, 2018 at 12:00AM UST and May 28th, 2018 at 11:59PM PDT.

The company additionally announced a new partnership with ESL on esports, which will bring over 15,000 hours of programming from top competitive games to Mixer, including Counter-Strike: Global Offensive, League of Legends, and Dota 2. These tournaments will take advantage of Mixer’s FTL technology for “sub-second latency,” the company says.

Other announcements around games and esports are mentioned in the Mixer blog post, too.

Amazon to launch a new app store with tools for its two million sellers

Amazon is launching a new app store with tools created specifically to help its sellers manage their inventory and orders. Called the Marketplace Appstore, it will feature apps built on Amazon Marketplace Web Service (Amazon MWS) by Amazon and by third-party developers screened by the company. According to a report by CNET, the Marketplace Appstore launches to sellers today.

There are now about two million sellers on Amazon, including more than a million small to medium-sized businesses in the United States. Amazon MWS is an integrated web service API that allows sellers to share data about their inventory, orders and logistics with Amazon in order to automate more tasks. It also enables sellers to build apps for their own accounts and other sellers.

The company told CNET that “many developers have innovated and created applications that complement our tools and integrate with our services. We created the Marketplace Appstore to help businesses more easily discover these applications, streamline their business operations and ultimately create a better experience for our customers.”

The Marketplace Appstore is free for developers to join and use, but they are currently required to submit an application to Amazon and undergo a business and practices review.

Comcast’s mesh Wi-Fi system, xFi Pods, launches nationwide

Comcast today is officially launching its own Wi-Fi extender devices, called xFi Pods, which help address weak Wi-Fi signals in parts of a customer’s home caused by things like signal-disrupting building materials or even just the home’s design. The launch follows Comcast’s announcement last year that it was investing in the mesh router maker Plume, which offers plug-in “pods” that help extend Wi-Fi signals.

The company said at the time that, as a result of that deal, it would offer its own customers xFi Pods that pair with Comcast’s gateways.

Those pods were initially available in select markets, including Boston, Chicago and Denver, ahead of today’s nationwide launch.

The pods themselves are hexagon-shaped devices that plug into any electrical outlet in the home, then pair with Comcast’s xFi Wireless Gateway or the xFi Advanced Gateway to help extend Wi-Fi signals to the hard-to-reach areas of the home.

The pods work with the Comcast Gateways to continuously monitor and optimize the Wi-Fi connections, Comcast explains, while its cloud-based management service evaluates the home’s Wi-Fi environment to make sure all the connected devices are using the best signal bands and Wi-Fi channels. Plus, the devices are smart enough to monitor their own performance, diagnose issues and “heal” themselves as needed, says Comcast.

However, early reviews of Plume’s pods were mixed. CNET said the system was slow and the pods were too expensive, for example. But Engadget found the system had the lowest latency, compared with competitors, and helped devices roam more easily and accurately.

Comcast has addressed some of the earlier complaints. The pods are now much more affordable, for starters. While they’ve been selling on the Plume website for $329 for a six-pack, Comcast’s six-pack is $199. A three-pack is also available for $119, instead of the $179 when bought directly from Plume.

More important, perhaps, is that Comcast’s system differs from the pods featured in those earlier reviews.

While Plume technology is a component of the new pods, they are not Plume devices, Comcast tells TechCrunch. Instead, Comcast licensed the Plume technology, then reconfigured some aspects of it to integrate with xFi. It also designed its own pods in-house.

In addition, Comcast’s engineers developed new firmware and new software in-house to make it easy to pair the pods with a Comcast Gateway.

The Comcast xFi Pods can be bought through Comcast’s website, via the xFi app and in some Xfinity retail stores.

The xFi app (for iOS and Android) is also how customers can manage and view the connection status of the pods.

Comcast says it will make buying pods even easier later this year by offering a monthly payment plan.

The company has been upgrading its Wi-Fi offering in recent months as a means of staying competitive. Last year it launched the Xfinity xFi platform to help customers better manage their home Wi-Fi network with features like device monitoring, troubleshooting, “bedtime” schedules for families, internet pause and other parental controls.

Comcast declined to say how many pods were sold in its first trial markets, saying only that the response so far has been positive and has boosted the company’s Net Promoter Score.
