Zizoo, a booking.com for boats, sails for new markets with $7.4M on board

Berlin-based Zizoo — a startup which self describes as booking.com for boats — has nabbed a €6.5 million (~$7.4M) Series A to help more millennials find holiday yachts to mess about taking selfies in.

Zizoo says its Series A — which was led by Revo Capital, with participation from new investors including Coparion, Check24 Ventures and PUSH Ventures — was “significantly oversubscribed”.

Existing investors including MairDumont Ventures, aws Founders Fund, Axel Springer Digital Ventures and Russmedia International also participated in the round.

We first came across Zizoo some three years ago when they won our pitching competition in Budapest.

We’re happy to say they’ve come a long way since, with a team that’s now 60-people strong, and business relationships with ~1,500 charter companies — serving up more than 21,000 boats for rent, across 30 countries, via a search and book platform that caters to a full range of “sailing experiences”, from experienced sailor to novice and, on the pricing front, luxury to budget.

Registered users passed the 100,000 mark this year, according to founder and CEO Anna Banicevic. She also tells us that revenue growth has been 2.5x year-on-year for the past three years.

Commenting on the Series A in a statement, Revo Capital’s managing director Cenk Bayrakdar said: “The yacht charter market is one of the most underserved verticals in the travel industry despite its huge potential. We believe in Zizoo’s successful future as a leading SaaS-enabled marketplace.”

The new funds will be put towards growing the business — including by expanding into new markets; plus product development and recruitment across the board.

Zizoo founder and CEO Anna Banicevic at its Berlin offices

“We’re looking to strengthen our presence in the US, where we’ve seen the biggest YoY growth while also expand our inventory in hot locations such as Greece, Spain and the Caribbean,” says Banicevic on market expansion. “We will also be aggressively pushing markets such as France and Spain where consumers show a growing interest in boat holidays.”

Zizoo is intending to hire 40 more employees over the course of the next year — to meet what it dubs “the booming demand for sailing experiences, especially among millennials”.

So why do millennials love boating holidays so much? Zizoo says the 20-40 age range makes up the “majority” of its customers.

Banicevic reckons the answer is they’re after a slice of ‘affordable luxury’.

“After the recent boom of the cruising industry, millennials are well familiar with the concept of holidays at sea. However, sailing holidays (yachting) are much more fitting to the millennial’s strive for independence, adventure and experiences off the beaten path,” she suggests.

“Yachting is a growing trend no longer reserved for the rich and famous — and millennials want a piece of that. On our platform, users can book a boat holiday for as low as £25 per person per night (this is an example of a sailboat in Croatia).”

On the competition front, she says the main competition is the offline sphere (“where 90% of business is conducted by a few large and many small travel agents”).

But a few rival platforms have emerged “in the last few years” — and here she reckons Zizoo has managed to outgrow the startup competition “thanks to our unique vertically integrated business model, offering suppliers a booking management system and making it easy for the user to book a boat holiday”.

MQT builds classy Swiss watches for the truly debonair

Ah, wonderful to see you again, sir. The usual? Kool-Aid Grain Alcohol Martini with a twisty straw. Of course. And I see you’re wearing a new watch. The MQT Essential Mirror. Quite striking.

I see the watch has a quartz ETA movement – an acceptable movement by any standard – and a very elegant face and hands combination. What’s that? It has a quickset date? Of course, no watch over $200 would skimp on that simple complication. $251 you say? On a silver mesh band, also known as a Milanese? A relative bargain, given its pedigree.

Of course, sir. I’ve spoken with the chef and she’s preparing your Ritz crackers with Easy Cheese as we speak. Do tell me more about this watch. It seems to be one of your only redeeming features.

What was that? No, I said nothing under my breath. Do go on.

Made in Berne, Switzerland, you say, by a pair of watchmakers, Hanna and Tom Heer, who left their high-paying jobs to make watches? And their goal is to create a beautiful quartz piece that is eminently wearable yet quite delicate? Laudable, sir, laudable. I especially like the thin 41mm case. It’s so light and airy! Not unlike your Supreme baseball cap.

No, of course sir, we still give away all the mints you can eat after the meal. If you’d like I can tie that lobster bib around your neck. There we are. Nice and snug.

And they make a marble version? Wonderful! That hearkens back to the Tissot Rock Watches of yore. A delight, truly.

You’ve got a bit of cheese in your beard. Let me get… oh. I’m sorry to say that my hand got in the way of your pendulous tongue. I’m very sorry, sir.

Well, it’s been wonderful chatting with you. I’ll leave you to your Rick and Morty comics. What’s that? Caviar in an ice cream cone? With sprinkles? Of course. I’ll see what I can do. I do commend you, sir, all things being equal, on your taste in watches.

Cross-border fintech startup Instarem raises $20M for global expansion

Instarem, a Singapore-based startup that helps banks transfer money overseas cheaply, has raised a Series C round of over $20 million for global expansion.

The round is led by MDI Ventures (the VC arm of Indonesian telecom operator Telkom) and Beacon (the fund belonging to Thai bank Kasikorn), with participation from existing investors Vertex Ventures, GSR Ventures, Rocket Internet and the SBI-FMO Fund.

The money takes four-year-old Instarem to nearly $40 million raised to date, although Instarem co-founder and CEO Prajit Nanu told TechCrunch that the startup plans to expand the Series C to $45 million. The extra capital is expected to be closed by January, with Nanu particularly keen to bring on strategic investors that can help the business grow in new emerging markets in Latin America as well as Europe.

“We are at the stage where the color of the money is very important,” he said in an interview. “It is very key to us that we bring people into the round who can add value to our business.”

Nanu added that the company is speaking to large U.S. funds among other potential investors.

Instarem works with banks to reduce their overseas transfer costs, offering a kind of ‘Transferwise for enterprise’ service. Although, unlike Transferwise which uses a global network of banks to send money across the world, Instarem uses mid-size banks that already trade in overseas currencies. As I previously explained, the process is the financial equivalent of putting a few boxes on a UPS freighter that’s about to head out, thus paying just a sliver of the costs you’d incur if you had to find a boat and ship it yourself.
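The cost logic behind that batching analogy can be sketched with toy numbers (the figures below are invented for illustration, not Instarem’s actual pricing): a largely fixed settlement cost gets amortized across every transfer that shares the same corridor.

```python
def per_transfer_cost(fixed_settlement_cost: float, num_transfers: int) -> float:
    """Share a largely fixed cross-border settlement cost across all
    transfers batched into the same corridor, like boxes on one freighter."""
    return fixed_settlement_cost / num_transfers

# A standalone transfer bears the full cost; a batched one pays a sliver.
print(per_transfer_cost(50.0, 1))    # 50.0
print(per_transfer_cost(50.0, 500))  # 0.1
```

The more transfers a bank already routes through a corridor, the closer the per-transfer cost falls toward zero, which is the whole pitch to mid-size banks that already trade in the relevant currencies.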

Focused primarily on Southeast Asia, Instarem supports transfers to more than 50 markets. The company does offer a service for consumers, but financial institutions — which have ongoing demand and higher average spend — are its primary target.

Prajit Nanu founded Instarem in 2014 alongside Michael Bermingham

The company has offices in Singapore, Mumbai and Lithuania, and it is opening a presence in Seattle as it begins to broaden its business, which already counts three of Southeast Asia’s top ten banks as customers. Nanu said the company will work with banks and financial services companies on cross-border services targeting users with links to Latin America, starting with Mexico. In Asia, it is awaiting payment licenses in Japan and Indonesia, which will allow it to offer more services in both countries.

TechCrunch understands that the company is on the cusp of a deal with Visa that will allow its customers to roll out branded prepaid cards, adding another financial service to its offerings. Nanu declined to comment when we asked about a deal with Visa.

TechCrunch has also come to learn that Instarem was subject to an acquisition approach earlier this year from one of Southeast Asia’s unicorns. Nanu declined to name the bidder, but he did tell TechCrunch that the offer “wasn’t the right timing for us.” He is, however, giving increased thought to an exit via IPO.

Last year, when Instarem raised its $13 million Series B, he suggested that it could go public by 2020. Now that target date has shifted back to 2021, with the Instarem CEO telling TechCrunch that the U.S. remained the preferred option for a public listing when the time is right.

Tencent e-wallet is following Alibaba to Hong Kong subways

China’s payments giants have taken their battle to Hong Kong. Less than a week after Ant Financial announced it was bringing QR code payments to the city’s MTR rail network, Tencent’s WeChat Pay unveiled a similar scheme on Wednesday.

Starting in mid-2021, commuters in Hong Kong will be able to scan a QR code through WeChat Pay, the digital wallet linked to Tencent’s popular messaging app, to pass through subway turnstiles. That’s a year behind Alibaba’s payments affiliate Alipay, which says its QR codes will work on the MTR from mid-2020.

Both Alipay and WeChat Pay are making this scan-to-ride option available to visitors from the Chinese mainland and Hong Kong residents.

Hong Kong has become a testing ground for the Chinese e-wallet titans going global, thanks to the city’s geographic adjacency and cosmopolitan population. Its market of 7.4 million people also offers growth potential, as mobile payments adoption is still nascent: in a survey conducted by the Hong Kong Productivity Council, only 30 percent of respondents said they had paid with mobile devices, while most locals remain accustomed to credit cards and cash.

By contrast, 92 percent of China’s 970 million mobile users have paid on smartphones, according to a July report from consulting firm Ipsos.

Cracking the Hong Kong market isn’t easy. For years, locals have used the stored-value Octopus card to pay for everything from MTR rides to convenience store purchases. The card system, which is 57.4 percent owned by MTR, claims to cover 99 percent of the city’s population.

Time will tell whether the payments newcomers can replicate their mainland success in the neighboring city. On the mainland side, WeChat Pay took off after a series of marketing campaigns that involved users fighting for cash-filled digital packets on WeChat. Alipay, on the other hand, traced much of its success to its ties with Alibaba’s ecommerce platforms, which don’t accept WeChat Pay.

In Hong Kong, the rivals have introduced redemption programs and shelled out generous subsidies to vie for shoppers. AlipayHK said in June that it had crossed 1.5 million users, up from one million in March. WeChat Pay Hong Kong is keeping mum about its user base, but a company executive said in November that transactions through the wallet grew more than tenfold over the past year.

Teaching STEM through the wonders of larva harvesting

There’s hardly enough room to turn around in Livin Farms’ office. Pretty standard, really, in Central, Hong Kong, where space is at a perpetual premium. It’s a small operation for the HAX-backed startup — there’s space for a few desks and not much more. The startup’s last product, the Hive, stands next to the door. It’s a series of innocuous trays stacked atop one another.

But it’s the Hive Explorer I’m here to see. The small tray sits in the middle of the room. Its top is open, the brightly colored bits of plastic drawing the eye from the moment you step through the door. Its contents pulsate with strange, random rhythms. Upon closer inspection, the browns, whites and blacks are alive: a small bed of mealworms wriggles atop one another, chowing down on the remnants of oats left behind by the team.

Above them, a neon yellow tray houses a trio of fully grown beetles and a couple dozen pupae. The former are constantly on the move, butting up against one another and sometimes doing more, with aims of continuing the life cycle. The pupae lie around, seemingly lifeless, occasionally twitching out a reminder that there’s still life inside.

The Explorer finds Livin Farms broadening its horizons into the world of STEM education. Where past products were focused on scalable sustainability, the new Kickstarter project is firmly targeted at youngsters. And there’s a fair amount to be learned in the bucket full of beetles. Mortality, for one. Founder Katharina Unger grabs a nearby jar and twists off the cap.

It’s filled to the top with dried mealworms. She pulls one out and pops it in her mouth, then hands the jar to me, hopefully. I follow suit. It’s crispy. Not flavorless, exactly, but not particularly distinct. Maybe a bit salty. Mostly it just feels overwhelmingly morbid, chowing down on a little larva as its brothers continue to feast a few inches away.

Protein source of the future, now, to quote The Mountain Goats. Livin Farms also produces an unflavored larva-based powder and a surprisingly tasty granola as a kind of proof of concept for its sustainable high-protein foodstuffs. The mission hits home here in one of the world’s most densely packed places.

[She gave me some to take home, if anyone’s hungry.]

The Explorer also offers youngsters a peek at what many consider the future of sustainable farming — assuming food manufacturers are ever able to break through the stigma of eating insects. Kids are encouraged to harvest the larvae, with a bit of dry roasting, to avoid overpopulation. The box serves as a relatively odor-free form of composting. Feeding the bugs simply entails tossing excess foodstuffs into the bin. The little buggers will tear through it, leaving a thin powder of waste in a tray below.

The setup also features a heat plate to keep the worms warm and a fan to regulate humidity, ensuring conditions are ideal for the beetles to do their thing. Livin Farms is also opening up the system’s controls via Swift, in an attempt to bring a coding component to the product.

The Explorer went live on Kickstarter this week. Early bird pledges can pick up the box of worms for ~$113.

With no moving parts, this plane flies on the ionic wind

Since planes were invented, they’ve flown using moving parts to push air around. Sure, there are gliders and dirigibles, which float more than fly, but powered flight is all about propellers (that’s why they call them that). Today that changes, with the first-ever “solid state” aircraft, flying with no moving parts at all by generating “ionic wind.”

If it sounds like science fiction… well, that’s about right. MIT’s Stephen Barrett explains that he took his inspiration directly from Star Trek.

“In the long-term future, planes shouldn’t have propellers and turbines,” Barrett said in an MIT news release. “They should be more like the shuttles in ‘Star Trek,’ that have just a blue glow and silently glide.”

“When I got an appointment at university,” he explained, “I thought, well, now I’ve got the opportunity to explore this, and started looking for physics that enabled that to happen.”

He didn’t discover the principle that ended up making his team’s craft fly — it has been known for nearly a century, but had never been successfully applied to flight.

The basic idea is simply that when you have a powerful source of negatively charged electrons, they pass that charge on to the air around them, “ionizing” it, at which point it flows away from that source and toward — if you set it up right — a “collector” surface nearby. (Nature has a much more detailed explanation. The team’s paper was published in the journal today.)

Essentially what you’re doing is making negatively charged air flow in a direction you choose. This phenomenon was recognized in the ’20s, and in the ’60s researchers even attempted some thrust tests using it. But they were only able to convert about 1 percent of the input electricity into thrust. That’s inefficient, to say the least.

To tell the truth, Barrett et al.’s system doesn’t do a lot better, only getting 2.6 percent of the input energy back as thrust, but they have the benefit of computer-aided design and super-lightweight materials. The team determined that at a certain weight and wingspan, and with the thrust that can be generated at that scale, flight should theoretically be possible. They’ve spent years pursuing it.
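To get a feel for what 2.6 percent efficiency implies, here’s a back-of-the-envelope sketch; the only number taken from the article is the efficiency itself, and the thrust-power inputs are hypothetical.

```python
def electrical_input_w(thrust_power_w: float, efficiency: float = 0.026) -> float:
    """Electrical input power needed to deliver a given thrust power,
    where `efficiency` is the fraction of input energy recovered as thrust."""
    return thrust_power_w / efficiency

# At 2.6% efficiency, each watt of thrust power costs roughly 38.5 W of input.
print(round(electrical_input_w(1.0), 1))   # 38.5
print(round(electrical_input_w(10.0), 1))  # 384.6
```

That nearly 40-to-1 ratio is why the craft only works at all with super-lightweight materials and a carefully optimized weight-to-wingspan design.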

After many, many revisions (and as many crashes) they arrived at this 5-meter-wide, 2.5-kilogram, multi-decker craft, and after a few false starts it flew… for about 10 seconds. They were limited by the length of the room they tested in, and figure it could go farther, but the very fact that it was able to sustain flight significantly beyond the bounds of gliding is proof enough of the concept.

“This is the first-ever sustained flight of a plane with no moving parts in the propulsion system,” Barrett said. “This has potentially opened new and unexplored possibilities for aircraft which are quieter, mechanically simpler, and do not emit combustion emissions.”

No one, least of all the crew, thinks this is going to replace propellers or jet engines any time soon. But there are lots of applications for a silent and mechanically simple form of propulsion — drones, for instance, could use it for small adjustments or to create soft landings.

There’s lots of work to do. But the goal was to invent a solid-state flying machine, and that’s what they did. The rest is just engineering.

They’re making a real HAL 9000, and it’s called CASE

Don’t panic! Life imitates art, to be sure, but hopefully the researchers in charge of the Cognitive Architecture for Space Exploration, or CASE, have taken the right lessons from 2001: A Space Odyssey, and their AI won’t kill us all and/or expose us to alien artifacts so we enter a state of cosmic nirvana. (I think that’s what happened.)

CASE is primarily the work of Pete Bonasso, who has been working in AI and robotics for decades — since well before the current vogue of virtual assistants and natural language processing. It’s easy to forget these days that research in this area goes back to the middle of the last century, with a boom in the ’80s and ’90s as computing and robotics began to proliferate.

The question is how to intelligently monitor and administrate a complicated environment like that of a space station, crewed spaceship or a colony on the surface of the Moon or Mars. A simple question with an answer that has been evolving for decades; the International Space Station (which just turned 20) has complex systems governing it and has grown more complex over time — but it’s far from the HAL 9000 that we all think of, and which inspired Bonasso to begin with.

“When people ask me what I am working on, the easiest thing to say is, ‘I am building HAL 9000,’ ” he wrote in a piece published today in the journal Science Robotics. Currently that work is being done under the auspices of TRACLabs, a research outfit in Houston.

One of the many challenges of this project is marrying the various layers of awareness and activity together. It may be, for example, that a robot arm needs to move something on the outside of the habitat. Meanwhile someone may also want to initiate a video call with another part of the colony. There’s no reason for one single system to encompass command and control methods for robotics and a VOIP stack — yet at some point these responsibilities should be known and understood by some overarching agent.

CASE, therefore, isn’t some kind of mega-intelligent know-it-all AI, but an architecture for organizing systems and agents that is itself an intelligent agent. As Bonasso describes in his piece, and as is documented more thoroughly elsewhere, CASE is composed of several “layers” that govern control, routine activities and planning. A voice interaction system translates human-language queries or commands into tasks those layers can carry out. But it’s the “ontology” system that’s the most important.

Any AI expected to manage a spaceship or colony has to have an intuitive understanding of the people, objects and processes that make it up. At a basic level, for instance, that might mean knowing that if there’s no one in a room, the lights can turn off to save power but it can’t be depressurized. Or if someone moves a rover from its bay to park it by a solar panel, the AI has to understand that it’s gone, how to describe where it is and how to plan around its absence.
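A toy version of that lights-off-but-never-depressurize rule (entirely hypothetical; not CASE’s actual code or API) might look like:

```python
def safe_power_actions(room: dict) -> list:
    """Return power-saving actions that common sense permits for a room."""
    actions = []
    if not room.get("occupied", True):
        actions.append("lights_off")  # an empty room can go dark
    # Depressurizing is never a routine power-saving move, even when the
    # room is empty: crew may need to walk back in at any moment.
    return actions

print(safe_power_actions({"occupied": False}))  # ['lights_off']
print(safe_power_actions({"occupied": True}))   # []
```

The hard part, of course, is that a real system can’t rely on hand-written rules for every case; it needs the ontology to supply the background knowledge that makes rules like this one obvious.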

This type of common sense logic is deceptively difficult and is one of the major problems being tackled in AI today. We have years to learn cause and effect, to gather and put together visual clues to create a map of the world and so on — for robots and AI, it has to be created from scratch (and they’re not good at improvising). But CASE is working on fitting the pieces together.

Screen showing another ontology system from TRACLabs, PRONTOE.

“For example,” Bonasso writes, “the user could say, ‘Send the rover to the vehicle bay,’ and CASE would respond, ‘There are two rovers. Rover1 is charging a battery. Shall I send Rover2?’ Alas, if you say, ‘Open the pod bay doors, CASE’ (assuming there are pod bay doors in the habitat), unlike HAL, it will respond, ‘Certainly, Dave,’ because we have no plans to program paranoia into the system.”

I’m not sure why he had to write “alas” — our love of cinema is exceeded by our will to live, surely.

That won’t be a problem for some time to come, of course — CASE is still very much a work in progress.

“We have demonstrated it to manage a simulated base for about 4 hours, but much needs to be done for it to run an actual base,” Bonasso writes. “We are working with what NASA calls analogs, places where humans get together and pretend they are living on a distant planet or the moon. We hope to slowly, piece by piece, work CASE into one or more analogs to determine its value for future space expeditions.”

I’ve asked Bonasso for some more details and will update this post if I hear back.

Whether a CASE- or HAL-like AI will ever be in charge of a base is almost not a question any more — in a way it’s the only reasonable way to manage what will certainly be an immensely complex system of systems. But for obvious reasons it needs to be developed from scratch with an emphasis on safety, reliability… and sanity.

LinkedIn cuts off email address exports with new privacy setting

A win for privacy on LinkedIn could be a big loss for businesses, recruiters and anyone else expecting to be able to export the email addresses of their connections. LinkedIn just quietly introduced a new privacy setting that defaults to blocking other users from exporting your email address. That could prevent some spam, and protect users who didn’t realize anyone who they’re connected to could download their email address into a giant spreadsheet. But the launch of this new setting without warning or even a formal announcement could piss off users who’d invested tons of time into the professional networking site in hopes of contacting their connections outside of it.

TechCrunch was tipped off by a reader that emails were no longer coming through as part of LinkedIn’s Archive tool for exporting your data. Now LinkedIn confirms to TechCrunch that “This is a new setting that gives our members even more control of their email address on LinkedIn. If you take a look at the setting titled ‘Who can download your email’, you’ll see we’ve added a more detailed setting that defaults to the strongest privacy option. Members can choose to change that setting based on their preference. This gives our members control over who can download their email address via a data export.”

That new option can be found under Settings & Privacy -> Privacy -> Who Can See My Email Address? This “Allow your connections to download your email [address of user] in their data export?” toggle defaults to “No.” Most users don’t know it exists because LinkedIn didn’t announce it; there’s merely been a folded-up section added to the Help Center on email visibility, and few might voluntarily change it to “Yes” as there’s no explanation of why you’d want to. That means nearly no one’s email address will appear in LinkedIn Archive exports any more. Your connections will still be able to see your email address if they navigate to your profile, but they can no longer grab those addresses in bulk from their whole connection graph.

Facebook came to the same conclusion about restricting email exports back when it was in a data portability fight with Google in 2010. Facebook had been encouraging users to import their Gmail contacts, but refused to let users export their Friends’ email addresses. It argued that users own their own email addresses, but not those of their Friends, so they couldn’t be downloaded — though that stance conveniently prevented any other app from bootstrapping a competing social graph by importing your Facebook friend list in any usable way. I’ve argued that Facebook needs to make friend lists interoperable to give users choice about what apps they use, both because it’s the right thing to do but also because it could deter regulation.

On a social network like Facebook, barring email exports makes more sense. But on LinkedIn’s professional network, where people are purposefully connecting with those they don’t know, and where exporting has always been allowed, making the change silently seems surreptitious. Perhaps LinkedIn didn’t want to bring attention to the fact it was allowing your email address to be slurped up by anyone you’re connected with, given the current media climate of intense scrutiny regarding privacy in social tech. But trying to hide a change that’s massively impactful to businesses that rely on LinkedIn could erode the trust of its core users.

Facebook is still facing ‘intermittent’ outages for advertisers ahead of Black Friday and Cyber Monday

One day after experiencing a massive outage across its ad network, Facebook, one of the most important online advertising platforms, is still seeing “intermittent” issues for its ad products at one of the most critical times of the year for advertisers.

According to a spokesperson for the company, while most systems are restored there are still intermittent issues that could affect advertisers.

For most of the day yesterday, advertisers were unable to create and edit campaigns through Ads Manager or the Ads API tools.

The company said that existing ads were delivered, but advertisers could not set up new campaigns or make any changes to existing ones, according to several users of the network.

Reporting has been restored for all interfaces, according to the company, but conversion data may be delayed throughout the day for the Americas and in the evening for other regions.

The company declined to comment on how many campaigns were affected by the outage, or on whether it intends to compensate advertisers or otherwise make up for the downtime.

Some advertisers are still experiencing outages and are not happy about it.

I'm on 2 hours of sleep. I have so much more I'm looking at an all nighter tonight. And FB has the audacity to send me a message that says, "Happy Thanksgiving, our offices will be closed for Thanksgiving!"

— David Herrmann (@herrmanndigital) November 21, 2018

It’s easy to make fun of us and laugh at us media buyers expenses bloggers. However, I’m not. Many small biz and, well, my livelihood are dependent on this working. Fbook needs to be help accountable. 28 hours now of a broken ads manager has impacted one biz we work with already.

— David Herrmann (@herrmanndigital) November 21, 2018

This is a bad look for a company that is already fighting fires on any number of other fronts. But unlike the problems with bullying, hate speech, and disinformation that don’t impact the ways Facebook makes money, selling ads is actually how Facebook makes money.

In the busiest shopping season of the year (and therefore one of the busiest advertising seasons of the year) for Facebook to have no response and for some developers to still be facing intermittent outages on the platform is a bad sign.

Apple puts its next generation of AI into sharper focus as it picks up Silk Labs

Apple’s HomePod is a distant third behind Amazon and Google when it comes to market share for smart speakers that double up as home hubs, with less than 5 percent share of the market for these devices in the U.S., according to one recent survey. And its flagship personal assistant, Siri, has also been determined to lag behind Google when it comes to comprehension and precision. But there are signs that the company is intent on doubling down on AI, putting it at the center of its next generation of products, and it’s using acquisitions to help it do so.

The Information reports that Apple has quietly acquired Silk Labs, a startup based out of San Francisco that had worked on AI-based personal assistant technology both for home hubs and mobile devices.

There are two notable things about Silk’s platform that set it apart from that of other assistants: it was able to modify its behavior as it learned more about its users over time (both using sound and vision), and it was designed to work on-device — a nod to privacy and concerns about “always on” speakers listening to you, improved processing on devices and the constraints of the cloud and networking technology.

Apple has not returned requests for comment, but we’ve found that at least some of Silk Labs’ employees appear already to be working for Apple (LinkedIn lists nine employees for Silk Labs, all with engineering backgrounds).

That means it’s not clear if this is a full acquisition or an acqui-hire — as we learn more we will update this post — but bringing on the team (and potentially the technology) speaks to Apple’s need and interest in doubling down to build products that are not mere repeats of what we already have on the market.

Silk Labs first emerged in February 2016, the brainchild of Andreas Gal, the former CTO of Mozilla, who had also created the company’s ill-fated mobile platform, Firefox OS; and Michael Vines, who came from Qualcomm. (Vines, incidentally, moved on in June 2018 to become the principal engineer for a blockchain startup, Solana.)

Its first product was originally conceived as integrated software and hardware: the company raised just under $165,000 in a Kickstarter to build and ship Sense, a smart speaker that would provide a way to control connected home devices and answer questions, and — with a camera integrated into the device — be able to monitor rooms and learn to recognize people and their actions.

Just four months later, Silk Labs announced that it would shelve the Sense hardware to focus specifically on the software, called Silk, after it said it started to receive inquiries from OEMs interested in getting a version of the platform to run on their own devices (it also raised money outside of Kickstarter, around $4 million).

Potentially, Silk could give those OEMs a way of differentiating from the plethora of devices that are already on the market. In addition to products from the likes of Google and Amazon, there are also a number of speakers powered by those assistants, along with devices using Cortana from Microsoft.

When Silk Labs announced that it was halting hardware development, it noted that it was in talks for some commercial partnerships (while at the same time open sourcing a basic version of the Silk platform for creating communications with IoT devices).

Silk Labs never disclosed the names of those partners, but buying and shutting down the company would be one way of making sure that the technology stays with just one company.

It’s tempting to match up what Silk Labs has built up to now with Apple’s efforts specifically in its own smart speaker, the HomePod.

In particular, Silk’s technology could give the HomePod a smarter engine that learns about its users, keeps working when the internet connection is down and protects user privacy. Crucially, it could also become a linchpin for how you might operate everything else in your connected life.

That would make for a mix of features that would clearly separate it from the market leader of the moment, and play into aspects — specifically privacy — that people are increasingly starting to value more.

But if you consider the spectrum of hardware and services that Apple is now involved in, you can see that the Silk team, and potentially its IP, may also end up having a wider impact.

Apple has had a mixed run when it comes to AI. The company was an early mover, putting its Siri voice assistant into the iPhone 4S back in 2011, and for a long time it was mentioned alongside Amazon and Google (less so Microsoft) whenever people lamented how a select few technology companies were snapping up all the AI talent, leaving little room for other companies to build products or have a stake in how the technology was developed at a larger scale.

More recently, though, it appears that the likes of Amazon — with its Alexa-powered portfolio of devices — and Google have stolen a march when it comes to consumer products built with AI technologies at their core, and as their primary interface with their users. (Siri, if anything, sometimes feels like a nuisance when I accidentally call it into action by pressing the Touch Bar or the home button on my older model iPhone.)

But it’s almost certainly wrong to assume that Apple — one of the world’s biggest companies, known for playing its hand close to its chest — has lost its way in this area.

There have been a few indications, though, that it’s getting serious and rethinking how it is doing things.

A few months ago, it reorganized its AI teams under ex-Googler John Giannandrea, losing some talent in the process but more significantly setting the pace for how its Siri and Core ML teams would work together and across different projects at the company, from developer tools to mapping and more. 

Apple has also made dozens of smaller and bigger acquisitions in the last several years that speak to it picking up more talent and IP in the quest to build out its AI muscle across different areas, from augmented reality and computer vision through to big data processing at the back end. It’s even acquired other startups, such as VocalIQ in England, that focus on voice interfaces and “learn” from interactions.

To be sure, the company has started to see a deceleration of iPhone unit sales (if not revenues: prices are higher than ever), and that will mean a focus on newer devices, and ever more weight put on the services that run on these devices. Services can be augmented and expanded, and they represent recurring income — two big reasons why Apple will shift to putting more investment into them.

Expect to see that AI net covering not just the iPhone, but computers, Apple’s smart watch, its own smart speaker, the HomePod, Apple Music, Health and your whole digital life.

Driven to safety — it’s time to pool our data

Kevin Guo
Contributor

Kevin Guo is the CEO and co-founder of Hive.

For most Americans, the thought of cars autonomously navigating our streets still feels like a science fiction story. Despite the billions of dollars invested into the industry in recent years, no self-driving car company has proven that its technology is capable of producing mass-market autonomous vehicles in even the near-distant future.

In fact, a recent IIHS investigation identified significant flaws in assisted driving technology and concluded that in all likelihood “autonomous vehicle[s] that can go anywhere, anytime” will not be market-ready for “quite some time.” The complexity of the problem has even led Uber to potentially spin off its autonomous car unit as a means of soliciting minority investments — in short, the cost of solving this problem is time and billions (if not trillions) of dollars.

Current shortcomings aside, there is a legitimate need for self-driving technology: every year, nearly 1.3 million people die and 2 million people are injured in car crashes. In the U.S. alone, 40,000 people died last year due to car accidents, putting car accident-based deaths in the top 15 leading causes of death in America. GM has determined that the major cause for 94 percent of those car crashes is human error. Independent studies have verified that technological advances such as ridesharing have reduced automotive accidents by removing from our streets drivers who should not be operating vehicles.

We should have every reason to believe that autonomous driving systems — determinant and finely tuned computers always operating at peak performance — will all but eliminate on-road fatalities. The challenge of developing self-driving technology is rooted in replicating the incredibly nuanced cognitive decisions we make every time we get behind the wheel.

Anyone with experience in the artificial intelligence space will tell you that quality and quantity of training data is one of the most important inputs in building real-world-functional AI. This is why today’s large technology companies continue to collect and keep detailed consumer data, despite recent public backlash. From search engines, to social media, to self driving cars, data — in some cases even more than the underlying technology itself — is what drives value in today’s technology companies.

It should be no surprise then that autonomous vehicle companies do not publicly share data, even in instances of deadly crashes. When it comes to autonomous vehicles, the public interest (making safe self-driving cars available as soon as possible) is clearly at odds with corporate interests (making as much money as possible on the technology).

We need to create industry and regulatory environments in which autonomous vehicle companies compete based upon the quality of their technology — not just upon their ability to spend hundreds of millions of dollars to collect and silo as much data as possible (yes, this is how much gathering this data costs). In today’s environment the inverse is true: autonomous car manufacturers are focused on gathering as many miles of data as possible, with the intention of feeding more information into their models than their competitors, all the while avoiding working together.

The siloed petabytes (and soon exabytes) of road data that these companies hoard should be, without giving away trade secrets or information about their models, pooled into a nonprofit consortium, perhaps even a government entity, where every mile driven is shared and audited for quality. By all means, take this data to your private company and consume it, make your models smarter and then provide more road data to the pool to make everyone smarter — and more importantly, increase the pace at which we have truly autonomous vehicles on the road, and their safety once they’re there.

This data is diverse and complex, yet public in nature — I am not suggesting that people hand over private, privileged data, but that companies actively pool and combine what the cars are seeing. There’s a reason that many of the autonomous car companies are driving millions of virtual miles — they’re attempting to get as much active driving data as they can. Beyond the fact that they drove those miles, what truly makes that data something they have to hoard? By sharing these miles, and by seeing as much of the world in as much detail as possible, these companies can focus on making smarter, better autonomous vehicles and bringing them to market faster.

If you’re reading this and thinking it’s deeply unfair, I encourage you to once again consider that 40,000 people are dying preventable deaths every year in America alone. If you are not compelled by the massive life-saving potential of the technology, consider that publicly licensable self-driving data sets would accelerate innovation by removing a substantial portion of the capital barrier to entry in the space and increasing competition.

Though big technology and automotive companies may scoff at the idea of sharing their data, the competition generated from a level data playing field could create tens of thousands of new high-tech jobs. Any government dollar spent on aggregating road data would be considered capitalized as opposed to lost — public data sets can be reused by researchers for AI and cross-disciplinary projects for many years to come.

The most ethical (and most economically sensible) choice is that all data generated by autonomous vehicle companies should be part of a contiguous system built to make for a smarter, safer humanity. We can’t afford to wait any longer.

Thanksgiving travel nightmare projected to hit these US cities the worst

The latest data from Inrix paints a dismal picture for folks traveling Wednesday (that’s today!) ahead of the Thanksgiving holiday.

Drivers in Boston, New York City and San Francisco will see the largest delays with drive times nearly quadruple the norm, according to AAA and Inrix, which aggregates and analyzes traffic data collected from vehicles and highway infrastructure.

AAA is projecting 54.3 million Americans will travel 50 miles or more away from home this Thanksgiving, a 4.8 percent increase over last year. It’s a record breaker of a year for travel. This weekend will see the highest Thanksgiving travel volume in more than a dozen years (since 2005), with 2.5 million more people taking to the nation’s roads, skies, rails and waterways compared with last year, according to AAA.

The roads will be particularly packed, according to Inrix. Some 48.5 million people — 5 percent more than last year — will travel on roads this Thanksgiving holiday, a period defined as Wednesday, November 21 to Sunday, November 25.

The worst travel times? They’re already here in some places. San Francisco, Chicago and Los Angeles will be particularly dicey Wednesday, with travel times two to four times longer than usual. Other cities projected to have the worst travel times include Detroit along U.S. Highway 23 north, Houston on the north and southbound Interstate 45 and Los Angeles, particularly northbound on Interstate 5.

Here are a few of the lowlights happening right now. These projections are based on Inrix’s data; delay percentages show how much longer travel times will be than the norm.

In San Francisco on Wednesday:

  • CA 37 westbound will be 54% delayed at 1 p.m. PT
  • I-680 northbound will be 311% delayed at 1:30 p.m. PT
  • US 101 northbound will be 188% delayed at 2:15 p.m. PT

In San Francisco on Thursday:

  • US 101 northbound will be 20% delayed at 3 p.m. PT

In San Francisco on Friday:

  • I-80 southbound will be 128% delayed at 11:30 a.m. PT

Other travel time projections on Wednesday:

  • Washington, D.C.: 87% delayed on U.S. 50 eastbound at 4:15 p.m. ET
  • Los Angeles: 82% delayed on Interstate 5 south at 3:15 p.m. PT
  • Detroit: 91% delayed on Interstate 75 south at 7:00 p.m. ET
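
The delay percentages above translate directly into trip times: a “311% delayed” route takes the normal duration plus 311 percent of it, i.e. just over four times as long. A minimal sketch of that arithmetic (the function name and example durations are illustrative, not from Inrix):

```python
def delayed_minutes(normal_minutes: float, delay_pct: float) -> float:
    """Trip time once a percentage delay is applied.

    A delay of 311% means the trip takes its normal duration
    plus 311% of that duration, i.e. 4.11x the usual time.
    """
    return normal_minutes * (1 + delay_pct / 100)

# A commute that normally takes 30 minutes, under the 311% I-680 delay:
print(round(delayed_minutes(30, 311), 1))  # 123.3 minutes
```

So even a modest half-hour drive stretches past two hours at the projected peak.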

Even travel times to airports have increased Wednesday. Travel times from downtown Seattle to the airport via Interstate 5 south and from Chicago to O’Hare Airport via the Kennedy Expressway will be particularly long. The Chicago route, for instance, is projected to take 1 hour and 27 minutes at the peak time between 1:30 p.m. and 3:30 p.m. CT.

There are alternatives, of course. In most cases, the best days to travel will be on Thanksgiving Day, Friday or Saturday, according to Inrix and AAA.