TaskRabbit kicks off Canadian expansion

TaskRabbit officially launched in Canada today.

The on-demand network that connects people with “taskers,” or others willing to do their household chores or errands for a fee, is kicking off its Canadian expansion in the Greater Toronto Area before rolling out in Vancouver in October and Montreal sometime in 2019.

This is the first major move abroad for the company in some time, as well as its first move under IKEA’s ownership. TaskRabbit first expanded beyond the U.S. in 2014, when it launched its app in the UK.

Aside from the UK, the service is only available in North America.

IKEA bought TaskRabbit one year ago as part of a deal that has allowed the company to operate independently from the Swedish furniture retailer under CEO Stacy Brown-Philpot. TaskRabbit, before its exit, had raised $38 million from investors including Founders Fund, First Round Capital and Floodgate.

An Intel drone fell on my head during a light show

It didn’t hurt. I thought someone dropped a small cardboard box on my head. It felt sharp and light. I was sitting on the floor, along the back of the crowd, and then an Intel Shooting Star Mini drone dropped on my head.

Audi put on a massive show to reveal its first EV, the e-tron. The automaker went all out, putting journalists, executives and car dealers on a three-story paddle boat for a two-hour journey across the San Francisco Bay. I had a beer and two dumplings. We were headed to a long-vacated Ford manufacturing plant in Richmond, Calif.

By the time we reached our destination, the sun had set and Audi was ready to begin. Suddenly, in front of the boat, Intel’s Shooting Star drones put on a show that ended with Audi’s trademark four ring logo. The show continued as music pounded inside the warehouse, and just before the reveal of the e-tron, Intel’s Shooting Star Minis celebrated the occasion with a light show a couple of feet above attendees’ heads.

That’s when one hit me.

Natalie Cheung, GM of Intel Drone Light Shows, told me they knew one drone had gone rogue when it failed to land in its designated zone. According to Cheung, the Shooting Star Mini drones were designed with safety in mind.

“The drone frame is made of flexible plastics, has prop guards, and is very small,” she said. “The drone itself can fit in the palm of your hand. In addition to safety being built into the drone, we have systems and procedures in place to promote safety. For example, we have visual observers around the space watching the drones in flight and communicating with the pilot in real-time. We have built-in software to regulate the flight paths of the drones.”

After the crash, I assumed someone from Audi or Intel would be around to collect the lost drone, but no one did, and at the end of the show, I was unable to find someone who knew where I could find the Intel staff. I notified my Intel contacts first thing the following morning and provided a local address where they could get the drone. As of publication, the drone is still on my desk.

I have covered Intel’s Shooting Star program since its first public show at Disney World in 2016. It’s a fascinating program and one of the most impressive uses of drones I’ve seen. The outdoor shows, which have been used at the Super Bowl and the Olympics, are breathtaking. Hundreds of drones take to the sky, perform a seemingly impossible dance and then return home. A sophisticated program designates the route of each drone, GPS ensures each is where it’s supposed to be, and the entire fleet is controlled by just one person.
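The choreography described above, in which each drone is assigned a route, GPS confirms its position, and a single pilot oversees the fleet, can be sketched in miniature. Everything below (names, coordinates, the tolerance value) is invented for illustration and is not Intel's actual system:

```python
# Hypothetical sketch of a centralized drone light show controller:
# one program holds a choreographed path per drone and checks each
# drone's reported GPS fix against where it is supposed to be.
import math

# Choreography: drone id -> list of (x, y, z) waypoints in meters.
CHOREOGRAPHY = {
    "drone-001": [(0, 0, 30), (5, 5, 40), (0, 10, 30)],
    "drone-002": [(2, 0, 30), (7, 5, 40), (2, 10, 30)],
}

TOLERANCE_M = 1.5  # max allowed drift from the planned position

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def check_positions(step, reported):
    """Return the ids of drones that have drifted off their planned path."""
    off_course = []
    for drone_id, path in CHOREOGRAPHY.items():
        planned = path[step]
        actual = reported.get(drone_id)
        if actual is None or distance(planned, actual) > TOLERANCE_M:
            off_course.append(drone_id)
    return off_course

# At step 1, drone-002 reports a position far from its waypoint.
reported = {"drone-001": (5.1, 5.0, 39.8), "drone-002": (20, 5, 10)}
print(check_positions(1, reported))  # ['drone-002']
```

In a real show the "rogue" drone from the story is exactly the case this check is meant to catch; the open question the article raises is what happens after detection when the failed unit is already over the audience.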

Intel launched an indoor version of the Shooting Star program at CES in 2018. The concept is the same, but these drones do not use GPS to determine their location. The result is something even more magical than the outside version because with the Shooting Star Minis, the drones are often directly above the viewers. It’s an incredible experience to watch drones dance several feet overhead. It feels slightly dangerous. That’s the draw.

And that poses a safety concern.

The drone that hit me is light and mostly plastic. It weighs very little and is about 6 inches by 4 inches. A cage surrounds the bottom of the rotors, though not the top. If there’s a power button, I can’t find it. The full-size drones are made out of plastic and Styrofoam.

Safety has always been baked into the Shooting Star programs, but I’m not sure the current protocols are enough.

I was seated on the floor along the back of the venue. Most of the attendees were standing, taking selfies with the performing drones. It was a lovely show.

When the drone came down on my head, it tumbled onto the floor and the rotors continued to spin. A member of the catering staff was walking behind the barrier I was sitting against, reached out and touched the spinning rotors. I’m sure she’s fine, but when her finger touched the spinning rotor, she jumped in surprise. At this point, seconds after it crashed, the drone was upside down, and like an upturned beetle, continued to operate for a few seconds until the rotors shut off.

To be clear, I was not hurt. And that’s not the point. Drone swarm technology is fascinating and could lead to incredible use cases. Swarms of drones could quickly and efficiently inspect industrial equipment and survey crops. And they make for great shows in outside venues. But are they ready to be used inside, above people’s heads? I’m already going bald. I don’t need help.

Committed to privacy, Snips founder wants to take on Alexa and Google, with blockchain

Earlier this year, headlines showed how users of popular voice assistants like Alexa and Siri continue to face issues when their private data is compromised, or even sent to random people. In May it was reported that Amazon’s Alexa recorded a private conversation and sent it to a random contact. Amazon insists its Echo devices aren’t always recording, but it did confirm the audio was sent.

The story could be a harbinger of things to come when voice becomes more and more ubiquitous. After all, Amazon announced the launch of Alexa for Hospitality, its Alexa system for hotels, in June. News stories like this simply reinforce the idea that voice control is seeping into our daily lives.

The French startup Snips thinks it might have an answer to the issue of security and data privacy. It has built its software to run 100% on-device, independently of the cloud. As a result, user data is processed on the device itself, acting as a potentially stronger guarantor of privacy. Unlike centralized assistants such as Alexa and Google Assistant, Snips knows nothing about its users.

Its approach is convincing investors. To date, Snips has raised €22 million in funding from investors like Korelya Capital, MAIF Avenir, BPI France and Eniac Ventures. Created in 2013 by three PhDs, and now employing more than 60 people in Paris and New York, Snips offers its voice assistant technology as a white-labelled solution for enterprise device manufacturers.

It has tested its theories about voice by releasing the results of a consumer poll. The survey of 410 people found that 66% of respondents would be apprehensive about using a voice assistant in a hotel room because of concerns over privacy, and 90% said they would like to control the ways corporations use their data, even if it meant sacrificing convenience.

“Consumers are increasingly aware of the privacy concerns with voice assistants that rely on cloud storage — and that these concerns will actually impact their usage,” says Dr Rand Hindi, co-founder and CEO at Snips. “However, emerging technologies like blockchain are helping us to create safer and fairer alternatives for voice assistants.”

Indeed, blockchain is very much part of Snips’ future. As Hindi told TechCrunch in May, the company will release a new set of consumer devices independent of its enterprise business. The idea is to create a consumer business that will prompt further enterprise development. At the same time, they will issue a cryptographic token via an ICO to incentivize developers to improve the Snips platform, as an alternative to using data from consumers. The theory goes that this will put it at odds with the approach used by Google and Amazon, who are constantly criticised for invading our private lives merely to improve their platforms.

As a result, Hindi believes that as voice-controlled devices become an increasingly common sight in public spaces, there could be a significant shift in public opinion about how privacy is being protected.

In an interview conducted last month with TechCrunch, Hindi told me the company’s plans for its new consumer product are well advanced, and will be designed from the beginning to be improved over time using a combination of decentralized machine learning and cryptography.

By using blockchain technology to share data, they will be able to train the network “without ever anybody sending unencrypted data anywhere,” he told me.
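Hindi's description maps onto what is usually called federated learning: each device computes a model update on its own data, and only the update, never the raw audio or text, leaves the device. A toy sketch under that assumption follows; a real system would additionally encrypt or securely aggregate the updates, and every name and number here is illustrative, not Snips' actual design:

```python
# Toy sketch of decentralized ("federated") learning: devices fit a
# shared 1-D linear model y = w*x on their own private data and share
# only the updated weight, which a coordinator averages.

def local_update(weights, samples, lr=0.1):
    """One gradient-descent step computed on-device from private samples."""
    grad = sum(2 * (weights * x - y) * x for x, y in samples) / len(samples)
    return weights - lr * grad

def federated_round(global_w, devices):
    """Each device trains locally; only the updated weights are averaged."""
    updates = [local_update(global_w, samples) for samples in devices]
    return sum(updates) / len(updates)

# Two devices, each holding private (x, y) pairs drawn from y = 2x.
devices = [[(1, 2), (2, 4)], [(3, 6), (4, 8)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, devices)
print(round(w, 2))  # converges toward 2.0 without any raw data leaving a device
```

The coordinator here ends up with a useful model while never seeing the underlying `(x, y)` pairs, which is the property Hindi is claiming for voice data.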

And “training the network” is where it gets interesting. By issuing a cryptographic token for developers to use, Hindi says they will incentivize devs to work on their platform and process data in a decentralized fashion. They are starting from a good place. He claims they already have 14,000 developers on the platform who will be further incentivized by a token economy.

“Otherwise people have no incentive to process that data in a decentralized fashion, right?” he says.

“We got into blockchain because we’re trying to find a way to get people to participate in decentralized machine learning. We’ve been wanting to get into consumer [devices] for a couple of years but didn’t really figure out the end goal because we had always had this missing element which was: how do you keep making it better over time.”

“This is the main argument for Google and Amazon to pretend that you need to send your data to them, to make the service better. If we can fix this [by using blockchain] then we can offer a real alternative to Alexa that guarantees Privacy by Design,” he says.

“We now have over 14,000 developers building for us and that’s really completely organic growth, zero marketing, purely word of mouth, which is really nice because it shows that there’s a very big demand for decentralized voice assistance, effectively.”

It could be a high-risk strategy. Launching a voice-controlled device is one thing. Layering it with applications produced by developers supposedly incentivized by tokens, especially when crypto prices have crashed, is quite another.

It definitely feels like a moonshot idea, though, and we’ll really only know whether Snips can live up to such lofty ideals after the launch.

Call for smart home devices to bake in privacy safeguards for kids

A new research report has raised concerns about how in-home smart devices such as AI virtual voice assistants, smart appliances, and security and monitoring technologies could be gathering and sharing children’s data.

It calls for new privacy measures to safeguard kids and make sure age appropriate design code is included with home automation technologies.

The report, entitled Home Life Data and Children’s Privacy, is the work of Dr Veronica Barassi of Goldsmiths, University of London, who leads a research project at the university investigating the impact of big data and AI on family life.

Barassi wants the UK’s data protection agency to launch a review of what she terms “home life data” — meaning the information harvested by smart in-home devices that can end up messily mixing adult data with kids’ information — to consider its impact on children’s privacy, and “put this concept at the heart of future debates about children’s data protection”.

“Debates about the privacy implications of AI home assistants and Internet of Things focus a lot on the collection and use of personal data. Yet these debates lack a nuanced understanding of the different data flows that emerge from everyday digital practices and interactions in the home and that include the data of children,” she writes in the report.

“When we think about home automation therefore, we need to recognise that much of the data that is being collected by home automation technologies is not only personal (individual) data but home life data… and we need to critically consider the multiple ways in which children’s data traces become intertwined with adult profiles.”

The report gives examples of multi-user functions and aggregated profiles (such as Amazon’s Household Profiles feature) as constituting a potential privacy risk for children’s privacy.

Another example cited is biometric data, a type of information frequently gathered by in-home ‘smart’ technologies (such as via voice or facial recognition tech). Yet the report asserts that generic privacy policies often do not differentiate between adults’ and children’s biometric data. So that’s another grey area being critically flagged by Barassi.

She’s submitted the report to the ICO in response to its call for evidence and views on an Age Appropriate Design Code it will be drafting. This code is a component of the UK’s new data protection legislation intended to support and supplement rules on the handling of children’s data contained within pan-EU privacy regulation — by providing additional guidance on design standards for online information services that process personal data and are “likely to be accessed by children”.

And it’s very clear that devices like smart speakers intended to be installed in homes where families live are very likely to be accessed by children.

The report concludes:

There is no acknowledgement so far of the complexity of home life data, and much of the privacy debates seem to be evolving around personal (individual) data. It seems that companies are not recognizing the privacy implications involved in children’s daily interactions with home automation technologies that are not designed for or targeted at them. Yet they make sure to include children in the advertising of their home technologies. Much of the responsibility of protecting children is in the hands of parents, who struggle to navigate Terms and Conditions even after changes such as GDPR [the European Union’s new privacy framework]. It is for this reason that we need to find new measures and solutions to safeguard children and to make sure that age appropriate design code is included within home automation technologies.

“We’ve seen privacy concerns raised about smart toys and AI virtual assistants aimed at children, but so far there has been very little debate about home hubs and smart technologies aimed at adults that children encounter and that collect their personal data,” adds Barassi commenting in a statement.

“The very newness of the home automation environment means we do not know what algorithms are doing with this ‘messy’ data that includes children’s data. Firms currently fail to recognise the privacy implications of children’s daily interactions with home automation technologies that are not designed or targeted at them.

“Despite GDPR, it’s left up to parents to protect their children’s privacy and navigate a confusing array of terms and conditions.”

The report also includes a critical case study of Amazon’s Household Profiles — a feature that allows Amazon services to be shared by members of a family — with Barassi saying she was unable to locate any information on Amazon’s US or UK privacy policies on how the company uses children’s “home life data” (e.g. information that might have been passively recorded about kids via products such as Amazon’s Alexa AI virtual assistant).

“It is clear that the company recognizes that children interact with the virtual assistants or can create their own profiles connected to the adults. Yet I can’t find an exhaustive description or explanation of the ways in which their data is used,” she writes in the report. “I can’t tell at all how this company archives and sells my home life data, and the data of my children.”

Amazon does make this disclosure on children’s privacy — though it does not specifically state what it does in instances where children’s data might have been passively recorded (i.e. as a result of one of its smart devices operating inside a family home.)

Barassi also points out there’s no link to its children’s data privacy policy on the ‘Create your Amazon Household Profile’ page — where the company informs users they can add up to four children to a profile, noting there is only a tiny generic link to its privacy policy at the very bottom of the page.

We asked Amazon to clarify its handling of children’s data but at the time of writing the company had not responded to multiple requests for comment.

The EU’s new GDPR framework does require data processors to take special care in handling children’s data.

In its guidance on this aspect of the regulation the ICO writes: “You should write clear privacy notices for children so that they are able to understand what will happen to their personal data, and what rights they have.”

The ICO also warns: “The GDPR also states explicitly that specific protection is required where children’s personal data is used for marketing purposes or creating personality or user profiles. So you need to take particular care in these circumstances.”

The Punkt MP02 inches closer to what a minimalist phone ought to be

There’s an empty space in my heart for a minimalist phone with only the most basic functions. Bad for my heart, but good for a handful of companies putting out devices aiming to fill it. Punkt’s latest, the MP02, goes a little ways to making the device I desire, but it isn’t quite there yet.

Punkt’s first device included just texting and calling, which would likely have worked as intended if not for the inconvenient choice to have it connect only to 2G networks. These networks are being shut down and replaced all over the world, so you would have ended up with a phone that was even more limited than you expected.

The MP02 is the sequel, and it adds a couple of useful features. It runs on 4G LTE networks, which should keep it connected for years to come, and it has gained both threaded texting (rather than a single inbox and outbox — remember those?) and BlackBerry encryption for those sensitive communications.

It has nice physical buttons you can press multiple times to select a letter in ye olde T9 fashion, and also lets you take notes, consult a calendar, and calculate things. The battery has 12 days of standby, and with its tiny monochrome display and limited data options, it’ll probably stay alive for nearly that even with regular use.

Its most immediate competition is probably the Light Phone, which also has a second iteration underway that, if I’m honest, looks considerably more practical.

Now, I like the MP02. I like its chunky design (though it is perhaps a mite too thick), I like its round buttons and layout, I like its deliberate limitations. But it and other would-be minimal phones, in my opinion, are too slavish in their imitations of devices from years past. What we want is minimalism, not (just) nostalgia. We want the most basic useful features of a phone without all the junk that comes with them.

The Light Phone 2 and its nice e-ink screen.

For me, that means including a couple things that these devices tend to eschew.

One is modern messaging. SMS is bad for a lot of reasons. Why not include a thin client to pass text to a messaging service like WhatsApp or Messenger? Of course iMessage is off limits — thanks, Apple — but we could at least get a couple of the cross-platform apps on board. It doesn’t hurt the minimalist nature of the phone, in my opinion, if it connects to a modern messaging infrastructure. No need for images or gifs or anything — just text is fine.

Two is maps. We sure as hell didn’t have maps on our featurephones back in the day, but you better believe we wanted them. Basic mapping is one of the things we rely on our phones for every day. Whatever’s on this minimal phone doesn’t have to be a full-stack affair with recommendations, live traffic, and so on — just location and streets, and maybe an address or lat/long lookup, like you’d see on an old monochrome GPS unit. I don’t need my phone to tell me where to eat — just keep me from getting lost.

Three, and this is just me, I’d like some kind of synchronizing note app or the ability to put articles from Pocket or whatever on there. The e-ink screen on the Light Phone is a great opportunity for this very specific type of consumption. Neither of the companies here seems likely to add this feature, but that doesn’t change the fact that it’s one of the few things I regularly use my phone for.

Light Phone 2 is possibly getting music, weather, and voice commands, none of which really screams “minimal” to me, nor do they seem trivial to add. Ride-share stuff is a maybe, but it’d probably be a pain.

I have no problem with my phone doing just what a pocketable device needs to do and leaving the more sophisticated stuff to another device. But that pocketable device can’t be that dumb. Fortunately I do believe we’re moving closer to days when there will be meaningfully different choices available to weird people like myself. We’re not there yet, but I can wait.

Sen. Harris tells federal agencies to get serious about facial recognition risks

Facial recognition technology presents myriad opportunities as well as risks, but it seems like the government tends to only consider the former when deploying it for law enforcement and clerical purposes. Senator Kamala Harris (D-CA) has written the Federal Bureau of Investigation, Federal Trade Commission, and Equal Employment Opportunity Commission telling them they need to get with the program and face up to the very real biases and risks attending the controversial tech.

In three letters provided to TechCrunch (and embedded at the bottom of this post), Sen. Harris, along with several other notable legislators, pointed out recent research showing how facial recognition can produce or reinforce bias, or otherwise misfire. This must be considered and accommodated in the rules, guidance, and applications of federal agencies.

Other lawmakers and authorities have sent letters to various companies and CEOs or held hearings, but representatives for Sen. Harris explained that there is also a need to advance the issue within the government itself.

Sen. Harris at a recent hearing.

Attention paid to agencies like the FTC and EEOC that are “responsible for enforcing fairness” is “a signal to companies that the cop on the beat is paying attention, and an indirect signal that they need to be paying attention too. What we’re interested in is the fairness outcome rather than one particular company’s practices.”

If this research and the possibility of poorly controlled AI systems aren’t considered in the creation of rules and laws, or in the applications and deployments of the technology, serious harm could ensue. Not just positive harm, such as the misidentification of a suspect in a crime, but negative harm, such as calcifying biases in data and business practices in algorithmic form and depriving those affected by the biases of employment or services.

“While some have expressed hope that facial analysis can help reduce human biases, a growing body of evidence indicates that it may actually amplify those biases,” the letter to the EEOC reads.

Here Sen. Harris, joined by Senators Patty Murray (D-WA) and Elizabeth Warren (D-MA), expresses concern over the growing automation of the employment process. Recruitment is a complex process and AI-based tools are being brought in at every stage, so this is not a theoretical problem. As the letter reads:

Suppose, for example, that an African American woman seeks a job at a company that uses facial analysis to assess how well a candidate’s mannerisms are similar to those of its top managers.

First, the technology may interpret her mannerisms less accurately than a white male candidate.

Second, if the company’s top managers are homogeneous, e.g., white and male, the very characteristics being sought may have nothing to do with job performance but are instead artifacts of belonging to this group. She may be as qualified for the job as a white male candidate, but facial analysis may not rate her as highly because her cues naturally differ.

Third, if a particular history of biased promotions led to homogeneity in top managers, then the facial recognition analysis technology could encode and then hide this bias behind a scientific veneer of objectivity.

If that sounds like a fantasy use of facial recognition, you probably haven’t been paying close enough attention. Besides, even if it’s still rare, it makes sense to consider these things before they become widespread problems, right? The idea is to identify issues inherent to the technology.

“We request that the EEOC develop guidelines for employers on the fair use of facial analysis technologies and how this technology may violate anti-discrimination law,” the Senators ask.

A set of questions also follows (as it does in each of the letters): have there been any complaints along these lines, or are there any obvious problems with the tech under current laws? If facial technology were to become mainstream, how should it be tested, and how would the EEOC validate that testing? Sen. Harris and the others request a timeline of how the Commission plans to look into this by September 28.

Next on the list is the FTC. This agency is tasked with identifying and punishing unfair and deceptive practices in commerce and advertising; Sen. Harris asserts that the purveyors of facial recognition technology may be considered in violation of FTC rules if they fail to test or account for serious biases in their systems.

“Developers rarely if ever test and then disclose biases in their technology,” the letter reads. “Without information about the biases in a technology or the legal and ethical risks attendant to using it, good faith users may be unintentionally and unfairly engaging in discrimination. Moreover, failure to disclose these biases to purchasers may be deceptive under the FTC Act.”

Another example is offered:

Consider, for example, a situation in which an African American female in a retail store is misidentified as a shoplifter by a biased facial recognition technology and is falsely arrested based on this information. Such a false arrest can cause trauma and substantially injure her future housing, employment, credit, and other opportunities.

Or, consider a scenario in which a young man with a dark complexion is unable to withdraw money from his own bank account because his bank’s ATM uses facial recognition technology that does not identify him as their customer.

Again, this is very far from fantasy. On stage at Disrupt just a couple weeks ago Chris Atageka of UCOT and Timnit Gebru from Microsoft Research discussed several very real problems faced by people of color interacting with AI-powered devices and processes.

The FTC actually had a workshop on the topic back in 2012. But, amazing as it sounds, this workshop did not consider the potential biases on the basis of race, gender, age, or other metrics. The agency certainly deserves credit for addressing the issue early, but clearly the industry and topic have advanced and it is in the interest of the agency and the people it serves to catch up.

The letter ends with questions and a deadline rather like those for the EEOC: have there been any complaints? How will they assess and address potential biases? Will they issue “a set of best practices on the lawful, fair, and transparent use of facial analysis?” The letter is cosigned by Senators Richard Blumenthal (D-CT), Cory Booker (D-NJ), and Ron Wyden (D-OR).

Last is the FBI, over which Sen. Harris has something of an advantage: the Government Accountability Office issued a report on the very topic of facial recognition tech that had concrete recommendations for the Bureau to implement. What Harris wants to know is, what have they done about these, if anything?

“Although the GAO made its recommendations to the FBI over two years ago, there is no evidence that the agency has acted on those recommendations,” the letter reads.

The GAO had three major recommendations. Briefly summarized: do some serious testing of the Next Generation Identification-Interstate Photo System (NGI-IPS) to make sure it does what they think it does, follow that with annual testing to make sure it’s meeting needs and operating as intended, and audit external facial recognition programs for accuracy as well.
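The testing the GAO recommends, and the senators' later question about performance across races, skin tones, genders, and ages, amounts to reporting accuracy per subgroup rather than a single headline figure. A minimal sketch with invented data (the labels and numbers below are purely illustrative) shows why the distinction matters:

```python
# Hypothetical sketch of a disaggregated accuracy audit: compute match
# accuracy per demographic subgroup instead of one overall figure.
from collections import defaultdict

# Each record: (subgroup label, whether the system matched correctly).
results = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("lighter-skinned male", True), ("lighter-skinned male", False),
    ("darker-skinned female", True), ("darker-skinned female", False),
    ("darker-skinned female", False), ("darker-skinned female", False),
]

def accuracy_by_group(records):
    """Return {subgroup: fraction of correct matches}."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        hits[group] += int(correct)
    return {g: hits[g] / totals[g] for g in totals}

per_group = accuracy_by_group(results)
overall = sum(c for _, c in results) / len(results)
print(overall)    # 0.5: the single overall number hides the gap...
print(per_group)  # ...between 0.75 and 0.25 for the two groups
```

A system evaluated only on the overall figure could pass an accuracy bar while failing badly for one subgroup, which is exactly the failure mode the letters describe.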

“We are also eager to ensure that the FBI responds to the latest research, particularly research that confirms that face recognition technology underperforms when analyzing the faces of women and African Americans,” the letter continues.

The list of questions here is largely in line with the GAO’s recommendations, merely asking the FBI to indicate whether and how it has complied with them. Has it tested NGI-IPS for accuracy in realistic conditions? Has it tested for performance across races, skin tones, genders, and ages? If not, why not, and when will it? And in the meantime, how can it justify usage of a system that hasn’t been adequately tested, and in fact performs poorest on the targets it is most frequently loosed upon?

The FBI letter, which has a deadline for response of October 1, is cosigned by Sen. Booker and Cedric Richmond, Chair of the Congressional Black Caucus.

These letters are just a part of what certainly ought to be a government-wide plan to inspect and understand new technology and how it is being integrated with existing systems and agencies. The federal government moves slowly, even at its best, and if it is to avoid or help mitigate real harm resulting from technologies that would otherwise go unregulated it must start early and update often.


You can find the letters in full below.

EEOC:

SenHarris – EEOC Facial Rec… on Scribd

FTC:

SenHarris – FTC Facial Reco… on Scribd

FBI:

SenHarris – FBI Facial Reco… on Scribd

Evernote just slashed 54 jobs, or 15 percent of its workforce

It’s no secret that Evernote, the productivity app that lets people take notes and organize other files from their working and non-work life, has been trying to regain its former footing as one of the most popular apps in the U.S., and that doing so has been an ongoing struggle.

Just two weeks ago, we reported that Evernote had lost several of its most senior executives, including its CTO Anirban Kundu, CFO Vincent Toolan, CPO Erik Wrobel and head of HR Michelle Wagner.

Now, Chris O’Neill — who took over as CEO of Evernote in 2015 after running the business operations at the Google X research unit — is sharing more demoralizing news with employees. To wit, he’s firing dozens of them. At an all-hands meeting earlier today, he told gathered staffers that Evernote has no choice but to lay off 54 people — roughly 15 percent of the company’s workforce — and to focus its efforts instead around specific functions, including product development and engineering.

We’ve reached out to the company for more information about what the move means for Evernote. [Update below.]

In the meantime, this newest development certainly doesn’t look encouraging. In fact, a person who tipped TechCrunch off to the executive departures two weeks ago characterized Evernote as “in a death spiral,” saying that user growth and active users have been flat for the last six years and that the company’s enterprise product offering hasn’t caught on.

It’s worth noting that in addition to thinning its ranks, Evernote may soon be facing a funding shortage, if these layoffs weren’t prompted by one. The company has raised nearly $300 million over the years, including from Sequoia Capital, New Enterprise Associates, and T. Rowe Price, but the last round it raised, according to Pitchbook, was a $6 million mezzanine round that closed in 2013.

You can learn more about what happened today via a note that O’Neill just sent to staffers.

For those of you who missed our All Hands today, I have some difficult news to share.

As part of an ongoing evaluation of our business, we’ve decided to make a tough, but necessary decision to set Evernote up for future success. We’ll be saying goodbye to 54 talented and dedicated people, each of whom has contributed to Evernote’s mission. This was an extremely difficult decision and one that we did not take lightly.

As you’ve heard me say during the past few months, I set incredibly aggressive goals for the year. We’ve grown significantly this year, but at the same time we invested too far ahead of that growth.

We must adjust quickly when part of our strategy is not meeting our expectations. Going forward, we’re streamlining certain functions and will continue to make investments to speed up and scale others, like product development and engineering.

I understand that today’s news may cause concern. We need to remember our amazing community of people who rely on our products and believe in our mission. Together, we have built a product that serves over 225 million people around the world who trust us with more than 9 billion notes containing their most important thoughts, ideas, and inspirations.

As I discussed in All-Hands, Evernote grew over 20% in the first half this year and we are in a stable financial position. Our Q3 revenue numbers remain strong and we expect to end the quarter north of $27 million. We have over $30 million in cash on our balance sheet and will exit 2018 generating more cash than we spend.

Though today is hard, this is the right decision for the business and the best way for us to invest in our future. For those friends and colleagues impacted today, we’ll be providing severance and other benefits to support them in their transition. We’ll have a series of AMAs to answer your questions that were not addressed today. As always, feel free to contact me with your questions. Tomorrow, I will publicly address our customers, partners and community on our blog.

Chris

Update: We’ve just been in touch with Evernote. It pointed us to a newly posted piece by O’Neill in which he outlines the company’s strategy going forward, which includes plans to “operate with a more focused leadership team,” to “operate more efficiently,” and to “double down on product development — both quality and velocity.”

As for its funding situation, an Evernote representative insists that things are far from dire. The company is not fundraising, says this person; further, we’re told Evernote has $30 million on its balance sheet and will exit the year without burning cash.

Here’s what Google’s $149 Home Hub smart display will reportedly look like

Google is reportedly getting ready to launch some new hardware at its October 9 hardware event and we just learned a lot more about a new product that might be launching.

It was rumored that Google was working on its own Smart Display; now we’ve got images of the Google Home Hub and details about its price tag via a report from AndroidAuthority.


The device certainly looks like a Google Home product with all the fabric anyone could ask for and then far, far more on top of it.

It’s rocking a 7-inch screen and will cost just $149, which is quite a bit cheaper than the 8-inch Lenovo Smart Display, currently the cheapest option at $199. Lenovo’s 10-inch model ships for $249, as does the stereo-speakered JBL Link View.

Having played around with Lenovo’s product, I can say Google has some very pretty software for its Smart Displays, but there are some strange quirks given that the screen is basically superfluous by design: the software can never assume the user can see the screen when an answer is being given. Google has its work cut out for it, but it might be in its best interest to introduce some light touch interactions that let you perform more actions without speaking at all; otherwise the screen is always going to feel a bit misplaced, aside from pulling up a YouTube video or watching a slideshow.

What will be interesting to see is what exclusive software wizardry the device has, if any. The report details that the device will not have a camera like other Smart Displays do, which is a bit odd given that a major selling point of those devices was bolstering Google’s Duo video call service; Google seems to have decided the camera either isn’t worth the inexpensive components or the potential privacy overhead.

If the rumored price of $149 proves accurate and Google opts for most of the internals that the partner Smart Displays have, this will be a very cool device at a great deal that will not get used very often. It is wildly unclear what the point is of this product vertical, and without breaking it free of its software prison Google seems to be missing a big opportunity that could be fulfilled by whatever the big G’s competitors eventually release.

This report seems pretty solid, but we only have to wait a couple more weeks to see what Google has in store; TechCrunch will be covering the details at the company’s Pixel 3 hardware event on October 9.

Amplify Partners locks in $200 million to transform technical founders into people who can actually lead a startup

Sunil Dhaliwal has had a solid run in his 20 years so far as a VC. Just two years out of Georgetown, Dhaliwal landed at Battery Ventures, a highly regarded venture firm. Fifteen years later, in 2012, he struck out on his own, creating Amplify Partners. It wasn’t so easy at first. His first fund required 18 months of on-again, off-again fundraising before closing with $49.1 million in capital commitments. But things have picked up substantially since. In fact, today, Amplify, once a micro fund, is taking the wraps off a third fund that it just closed with $200 million.

Some early bets made this newest fund much easier to raise than even its second fund, which closed with $125 million in 2015.

In addition to Dhaliwal’s personal track record, which includes leading deals at Battery like Netezza, acquired by IBM, and CipherTrust, acquired by Secure Computing, Amplify has already seen four of its portfolio companies get acquired: the breach-detection software company LightCyber, which sold last year to Palo Alto Networks for $105 million; Conjur, which made DevOps security software and sold to publicly traded CyberArk Software last year for $42 million in cash; the app development service Buddybuild, sold to Apple (for undisclosed terms); and AppNeta, an end-user experience performance monitoring startup, sold to the private equity firm Rubicon Technology Partners.

Two other portfolio companies, which represent the firm’s biggest bets, look like they could eventually translate into even bigger outcomes for the firm: Fastly, which operates a content delivery network to speed up web requests, is already talking about going public after raising $220 million from investors over the last few years. Meanwhile, DataDog, which offers monitoring and analytics for cloud-based workflows, said five months ago that it had already surpassed $100 million in recurring revenue and that it has been doubling that amount every year so far.

A growing team has also helped. In addition to David Beyer, a cofounder of Chartio who joined as a principal early on and is today a partner with Amplify, the firm features general partner Mike Dauber, who, like Dhaliwal, previously worked at Battery; partner Lenny Pruss, who was previously a principal with Redpoint Ventures; and principals Lisha Li and Sarah Catanzano. Li has a PhD from UC Berkeley and worked previously as a data scientist at both Pinterest and Stitch Fix; Catanzano was previously head of data at Mattermark and, before that, a data partner at the venture firm Canvas Ventures.

Yet perhaps most helpful, Amplify might argue, is the opportunity it is chasing, which is, broadly, distributed computing, developer-centric tools and data analytics companies: such companies are increasingly cheap to launch, and they get their products into the hands of technical buyers faster than ever.

In fact, roughly 80 percent of the teams with which Amplify is working are led by first-time founders, and 90 percent of these are “hyper-technical domain experts” who Amplify aims to help evolve from “technical founders to just founders and CEOs who know how to build out an organization,” says Dhaliwal. Indeed, he says, staking out Amplify’s territory from the get-go has made a big difference in getting the firm connected with the talent it wants to know.

“We work with technical founders on novel applications of computer science at the seed and Series A stages. When you draw a box around that, a lot of people will gladly identify out. Some will say, ‘You really aren’t me.’ But for others who do self-identify, it’s clearly a fit on both sides. We tend to have a deep and powerful connection early on.”

Amplify, which writes checks ranging from $500,000 to upwards of $10 million, has backed roughly 50 companies to date. You can check out its portfolio here.

Kayak’s new AR feature will tell you if your carry-on bag fits the overhead bin

Popular travel app Kayak has put augmented reality to clever use with a new feature that lets you measure the size of your carry-on bag using just your smartphone. Its updated iOS app now takes advantage of Apple’s ARKit technology to introduce a new Bag Measurement tool that will help you calculate your bag’s size so you can find out if it fits in the overhead bin – you know, before your trip.

The tool is handy because the dimensions of permitted carry-on luggage can differ from airline to airline, Kayak explains, so it’s not always simple to figure out whether your bag will fit.

In the new Kayak iOS app, you can access the measurement tool through the Flight Search feature.

The app will first prompt you to scan the floor in order to calibrate the measurements. You then move your phone around the bag to capture its size. Kayak’s app will do the math and return the bag’s size, in terms of length, width, and height.

And it will tell you if the bag “looks good” or not to meet the carry-on size requirements.

Plus, the company says it compares all the airlines’ baggage size requirements in one place, so you’ll know for sure if it will be allowed by the airline you’re flying.
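The final fit check described above is simple geometry once the measurements are in hand. Kayak hasn’t published its logic, so this is just a hypothetical sketch of that last step: compare the measured bag dimensions against an airline’s carry-on limit, allowing the bag to be rotated into any orientation. (The function name and the example dimensions are illustrative, not Kayak’s.)

```python
def fits_carry_on(bag_dims_cm, limit_dims_cm):
    """Return True if the bag fits within the limit in some orientation.

    Sorting both triples matches the bag's longest side against the
    limit's longest side, second-longest against second-longest, and so
    on, which covers every axis-aligned rotation of the bag.
    """
    bag = sorted(bag_dims_cm)
    limit = sorted(limit_dims_cm)
    return all(b <= l for b, l in zip(bag, limit))

# Example: a 55 x 40 x 23 cm bag against a 56 x 45 x 25 cm limit
print(fits_carry_on((23, 55, 40), (56, 45, 25)))  # True
```

In practice the app would run this check once per airline on the itinerary, which is presumably how Kayak can compare “all the airlines’ baggage size requirements in one place.”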

Augmented reality applications, so far, have been a mixed bag. (Sorry).

Some applications can be fairly useful – like visualizing furniture placed in a room or trying on new makeup colors. (Yes, really. I’m serious). But others are more questionable – like some AR gaming apps, perhaps. (For example, how long would you play that AR slingshot game?)

But one area where AR has held up better is in helping you measure stuff with your phone – so much so that even Apple threw in its own AR measuring tape with iOS 12.

Kayak’s tool, also timed with the release of iOS 12, is among those more practical applications.

The company says the AR feature is currently only live on updated iOS devices.