Lame LPs, founder referenceability and the future of VC signaling

I’m still going through some of the comments I received on last week’s articles about the heightened competition among VCs for the best (typically SaaS) venture deals. Some more notes on whether large funds investing in small rounds causes VC signaling risk in a moment, but first, a fun anecdote about how lame LPs (still) are.

Have thoughts about venture? Send them to me: [email protected]

I was catching up with an ambitious founder of a VC firm this weekend, and we were talking about fundraising for VC firms, particularly the process of connecting with limited partners. Like startup founders, investment firms typically submit fund proposal decks and data rooms to potential LPs, who are then supposed to evaluate said material and either move toward an investment, ask for more information or run the hell away.

Unlike VCs, however, LPs have apparently not caught on to the fact that access to this information is much more trackable than it was in the past. VCs now realize that their perusal of a deck on DocSend is being monitored by founders, and I have heard from more than one VC over the years that they have their executive assistants click through a deck in a deliberately slow fashion to make it look like they are putting more thought and attention into reading a founder’s fundraise deck than they really are.

LPs, though, have no such inkling that this is going on, apparently. From the VC firm founder this weekend (paraphrasing), “What’s amazing is that I get asked for my data room, and then the potential LP will set up a time two weeks in the future to meet again. Fifteen minutes before our meeting, I get an email notification that they finally opened up the data room and started accessing its files.”

The best part is where the potential LP then waxes on about how much thought they put into their feedback to the VC.

Founder referenceability

As I explained last week, the paradox of big VC funds today is that they are actually doing more of the smaller startup fundraises as a way to secure access to later-stage deals.

So for many deals today, those later-stage cap tables are essentially locking out new investors, because there is already so much capital sitting around the cap table just salivating to double down.

That gets us straight to the paradox. In order to have access to later-stage rounds, you have to already be on the cap table, which means that you have to do the smaller, earlier-stage rounds. Suddenly, growth investors are coming back to early-stage rounds (including seed) just to have optionality on access to these startups and their fundraises.

One response I heard from a seed VC is that they focus on “founder referenceability.” What they mean by that is they use their existing portfolio founders as the key persuasion tool to convince new founders to take their term sheet over other (larger) competitors.

This particular seed investor argued (whether true or not) that they spend copious amounts of time in a concentrated manner with their portfolio companies, helping them with recruiting, business strategy and customer development. That’s compared to larger firms, which have dozens (perhaps even hundreds) of seed investments and where founders can easily feel abandoned and without any support. “We win every time when founders talk to our portfolio companies,” was the general sentiment.

And yet. For founders living and dying by the ambiguity of their market, their product, their talent and their future, that imprimatur of a big brand-name VC firm — even with paltry founder recommendations — is extremely hard to turn down. As a founder, do you want the VC who is going to work his or her ass off to help build your company, or the VC whose selection of your startup gives you (and likely your employees and your customers) the peace of mind that things are going really, really well?

The sense I get is that the viewpoint is shifting to the former from the latter, but the reality is that most founders can’t turn down the allure of the big-name fund, even if they get an abundant set of glowing references about a lesser-known firm. Ultimately, that hard-working VC can help you with key hires and customers, but the reputation of a big firm will grease the wheels of every decision that gets made about your startup.

VC signaling

The other line of responses I got — including an extensive missive from a partner at a top-20 firm — is that VC signaling still limits the ability of many of the largest funds to invest earlier. Founders realize, the thinking goes, that taking money from a fund that can lead the next three rounds is bad, since if their investor doesn’t lead those rounds, it signals to other VCs that something is wrong with the company.

I increasingly feel that VC signaling is an entirely phantom pattern these days (disagree? Tell me your story: [email protected]). Not only do I think VCs increasingly ignore these sorts of signals, I think the VCs who hustle most aggressively specifically target the early seed checks of other funds and intercept their best deals.

Why does this work? For one, large firms haven’t really figured out how to manage the information flows from hundreds of portfolio companies simultaneously, so they consistently miss the inflection points of their own startups — points that smart VCs with good noses for opportunity identify faster.

Second, there is indeed something to referenceability and founder abandonment — a number of founders have told me that they send out multiple tweaked investor updates that include more or less information based on the relationship they have with each investor. Often their lead investor is getting the least information — and doesn’t even realize it. It’s a subtle hack for handling what could otherwise be an awkward situation. It also helps create FOMO around a round, a dynamic that startup angels eager to find the biggest early winners in their portfolios are particularly keen to exploit.

Third, and finally, as all good investors know, seeing a deal with a fresh pair of eyes rather than through the jaded lens of experience can lead to radically different investment decisions. An incumbent investor may have heard a founder’s data and promises for one, two or three years and fail to see the subtle changes happening right now, while a new investor without that baggage can make a decision based on the best evidence in front of them today.

The lesson to me isn’t that investors suddenly decided to ignore signals. It’s that with so much competition for startup cap tables, having the right numbers and a great product narrative will overcome any other VC signal, positive or negative. And for the VCs themselves, there’s nothing quite like snatching the golden egg from a competitor’s nest while they’re out flying around searching for the next great deal, which, had they looked a little closer, was sitting right in front of them.

Tim Cook-backed Nebia releases a much cheaper third-gen shower head

Last year, Tim Cook-backed shower startup Nebia announced it had raised a Series A led by the faucet-maker Moen. This year, we’re seeing the fruits of the exclusive partnership — the startup’s third-generation shower head. The product, called Nebia by Moen, is launching on Kickstarter for $199 and will retail for $269 for the shower head and wand.

The startup’s latest product is by far its least-expensive offering yet, and after a side-by-side shower test conducted by yours truly, I can say there isn’t a major difference between the Nebia by Moen and the company’s Nebia Spa Shower 2.0, something that may make continued sales of the last-gen shower, which retails for $499, a bit of a challenge.

The new hardware earns its lower price point largely through a simplified design and Moen’s supply chain. The water droplets are bigger, there are fewer nozzles and, all in all, it feels a bit more like a traditional shower than previous efforts. The aesthetics are more mass-market, but the design still feels distinctly similar to the last two generations. The product comes in three colors, and users can buy the shower head on its own for $160 during the Kickstarter campaign.

Former CEO Philip Winter has stepped down into the role of CMO and president, while fellow co-founder Gabe Parisi-Amon has taken over the reins as CEO. I chatted with Winter at length about the broader hardware market and whether consumers are willing to fork over money for a premium shower experience. Check out the interview linked below (Extra Crunch membership required).

 

Apple Card users can now download monthly transactions in a spreadsheet

One of the big questions I got around the time the Apple Card launched was whether you’d be able to download a file of your transactions to either work with manually or import into expense-management software. The answer, at the time, was no.

Now Apple is announcing that Apple Card users will be able to export monthly transactions to a downloadable spreadsheet that they can use with their personal budgeting apps or sheets.

When I shot out a request for recommendations for a Mint replacement for my finances and budgeting, a lot of the responses showed just how spreadsheet-oriented many of the tools on the market are. Mint accepts imports, as do others like Clarity Money, YNAB and Lunch Money. As do, of course, personal setups hand-rolled in Google Sheets or other spreadsheet programs.

The rec I got most often, and the one I’m trying out right now, Copilot, does not currently support importing spreadsheets, but founder Andres Ugarte told me it’s on the list to add. He said the team is glad to see the export feature arrive because it lets users monitor their finances on their own terms: “Apple Card support has been a top request from our users, so we are very excited to provide a way for them to import their data into Copilot.”

Here’s how to export a spreadsheet of your monthly transactions:

  • Open Wallet
  • Tap “Apple Card”
  • Tap “Card Balance”
  • Tap on one of the monthly statements
  • Tap on “Export Transactions”

If you don’t yet have a monthly statement, you won’t see this feature until you do. The last step brings up a standard share sheet letting you email or send the file however you normally would. The current format is CSV, but in the near future you’ll get an OFX option as well.
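If you want to do something with the export beyond handing it to a budgeting app, a few lines of scripting go a long way. Here’s a minimal sketch that totals spending by category from the downloaded CSV; it assumes the export has “Category” and “Amount” columns, so adjust the field names to whatever headers your file actually contains.

```python
import csv
from collections import defaultdict

def spend_by_category(path):
    """Sum charges per category from an exported Apple Card CSV.

    Assumes columns named "Category" and "Amount"; rename these to
    match the actual headers in your export.
    """
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["Category"]] += float(row["Amount"])
    return dict(totals)

if __name__ == "__main__":
    for category, total in sorted(spend_by_category("apple_card_march.csv").items()):
        print(f"{category}: ${total:,.2f}")
```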

So if you’re using one of the tools (or spreadsheet setups) that would benefit from being able to download a monthly statement of your Apple Card transactions, you’re getting your wish from the Apple Card team today. If you use a tool that requires something more like API-level access, such as one built on Plaid or another account-linking service, you’re going to have to wait longer.

No info from Apple on when that will arrive, if at all, but I know that the team is continuing to launch new features, so my guess is that this is coming at some point.

Facebook speeds up AI training by culling the weak

Training an artificial intelligence agent to do something like navigate a complex 3D world is computationally expensive and time-consuming. In order to better create these potentially useful systems, Facebook engineers derived huge efficiency benefits from, essentially, leaving the slowest of the pack behind.

It’s part of the company’s new focus on “embodied AI,” meaning machine learning systems that interact intelligently with their surroundings. That could mean lots of things — responding to a voice command using conversational context, for instance, but also more subtle things like a robot knowing it has entered the wrong room of a house. Exactly why Facebook is so interested in that I’ll leave to your own speculation, but the fact is they’ve recruited and funded serious researchers to look into this and related domains of AI work.

To create such “embodied” systems, you need to train them using a reasonable facsimile of the real world. One can’t expect an AI that’s never seen an actual hallway to know what walls and doors are. And given how slowly real robots actually move in real life, you can’t expect them to learn their lessons there. That’s what led Facebook to create Habitat, a set of simulated real-world environments meant to be photorealistic enough that what an AI learns by navigating them can also be applied to the real world.

Such simulators, which are common in robotics and AI training, are also useful because, being simulators, you can run many instances of them at the same time — for simple ones, thousands simultaneously, each one with an agent in it attempting to solve a problem and eventually reporting back its findings to the central system that dispatched it.

Unfortunately, photorealistic 3D environments use a lot of computation compared to simpler virtual ones, meaning that researchers are limited to a handful of simultaneous instances, slowing learning to a comparative crawl.

The Facebook researchers, led by Dhruv Batra and Erik Wijmans (the former a professor and the latter a PhD student at Georgia Tech), found a way to speed up this process by an order of magnitude or more. The result is an AI system that can navigate a 3D environment from a starting point to a goal with a 99.9% success rate and few mistakes.

Simple navigation is foundational to a working “embodied AI” or robot, which is why the team chose to pursue it without adding any extra difficulties.

“It’s the first task. Forget the question answering, forget the context — can you just get from point A to point B? When the agent has a map this is easy, but with no map it’s an open problem,” said Batra. “Failing at navigation means whatever stack is built on top of it is going to come tumbling down.”

The problem, they found, was that the training systems were spending too much time waiting on slowpokes. Perhaps it’s unfair to call them that — these are AI agents that for whatever reason are simply unable to complete their task quickly.

“It’s not necessarily that they’re learning slowly,” explained Wijmans. “But if you’re simulating navigating a one-bedroom apartment, it’s much easier to do that than navigate a 10-bedroom mansion.”

The central system is designed to wait for all its dispatched agents to complete their virtual tasks and report back. If a single agent takes 10 times longer than the rest, that means there’s a huge amount of wasted time while the system sits around waiting so it can update its information and send out a new batch.

This little explanatory gif shows how, when one agent gets stuck, it delays others learning from its experience.

The innovation of the Facebook team is to intelligently cut off these unfortunate laggards before they finish. After a certain amount of time in simulation, they’re done, and whatever data they’ve collected gets added to the hoard.

“You have all these workers running, and they’re all doing their thing, and they all talk to each other,” said Wijmans. “One will tell the others, ‘okay, I’m almost done,’ and they’ll all report in on their progress. Any ones that see they’re lagging behind the rest will reduce the amount of work that they do before the big synchronization that happens.”

In this case you can see that each worker stops at the same time and shares simultaneously.

If a machine learning agent could feel bad, I’m sure it would at this point, and indeed that agent does get “punished” by the system, in that it doesn’t get as much virtual “reinforcement” as the others. The anthropomorphic terms make this out to be more human than it is — essentially inefficient algorithms or ones placed in difficult circumstances get downgraded in importance. But their contributions are still valuable.

“We leverage all the experience that the workers accumulate, no matter how much, whether it’s a success or failure — we still learn from it,” Wijmans explained.

What this means is that there are no wasted cycles where some workers are waiting for others to finish. Bringing that experience in on time means the next batch of slightly better workers goes out that much earlier, a self-reinforcing cycle that produces serious gains.
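For the curious, here’s a toy sketch of that straggler-cutoff idea using plain Python threads. It is not Facebook’s DD-PPO implementation; the worker count, rollout length and cutoff threshold are invented for illustration. Each worker collects experience step by step, and once most of its peers have finished, any worker still running stops early and contributes whatever partial rollout it has gathered.

```python
import random
import threading
import time

NUM_WORKERS = 8          # illustrative; real runs use many GPUs' worth of workers
STEPS_PER_ROLLOUT = 100  # how much experience each worker tries to collect
CUTOFF_FRACTION = 0.6    # once this share of workers is done, stragglers stop early

finished = 0
lock = threading.Lock()
experience = []          # (worker_id, steps_collected) pairs

def worker(worker_id: int, step_time: float) -> None:
    """Collect a rollout, but stop early if most peers have already finished."""
    global finished
    steps = 0
    for _ in range(STEPS_PER_ROLLOUT):
        time.sleep(step_time)  # stand-in for one (possibly slow) simulator step
        steps += 1
        with lock:
            if finished >= CUTOFF_FRACTION * NUM_WORKERS:
                break          # preempt: don't hold up the synchronization
    with lock:
        finished += 1
        experience.append((worker_id, steps))  # partial rollouts still count

threads = [
    threading.Thread(target=worker, args=(i, random.uniform(0.001, 0.01)))
    for i in range(NUM_WORKERS)
]
for t in threads:
    t.start()
for t in threads:
    t.join()  # the synchronization point

print(sorted(experience))  # slower workers report fewer steps, but still report
```

The join at the end stands in for the synchronization step; because stragglers bail out once most of their peers are done, it never waits long on a single slow environment.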

In the experiments they ran, the researchers found that the system, catchily named Decentralized Distributed Proximal Policy Optimization, or DD-PPO, appeared to scale almost ideally, with performance increasing nearly linearly with the computing power dedicated to the task. That is to say, increasing the computing power 10x resulted in nearly 10x the results. Standard algorithms, on the other hand, scaled very poorly: 10x or 100x the computing power produced only a small boost in results because of how these sophisticated simulators hamstring themselves.

These efficient methods let the Facebook researchers produce agents that could solve a point-to-point navigation task in a virtual environment within their allotted time with 99.9% reliability. The agents even demonstrated robustness to mistakes, quickly recognizing when they’d taken a wrong turn and backtracking.

The researchers speculated that the agents had learned to “exploit the structural regularities,” a phrase that in some circumstances means the AI figured out how to cheat. But Wijmans clarified that it’s more likely that the environments they used have some real-world layout rules.

“These are real houses that we digitized, so they’re learning things about how western-style houses tend to be laid out,” he said. Just as you wouldn’t expect the kitchen to open directly into a bedroom, the AI has learned to recognize other patterns and make other “assumptions.”

The next goal is to find a way to let these agents accomplish their task with fewer resources. Each agent navigated with a virtual camera that provided it ordinary and depth imagery, but also with an infallible coordinate system tracking where it had traveled and a compass that always pointed toward the goal. If only it were always so easy! Before this work, even with those resources and far more training time, success rates were considerably lower.

Habitat itself is also getting a fresh coat of paint with some interactivity and customizability.

Habitat as seen through a variety of virtualized vision systems.

“Before these improvements, Habitat was a static universe,” explained Wijmans. “The agent can move and bump into walls, but it can’t open a drawer or knock over a table. We built it this way because we wanted fast, large-scale simulation — but if you want to solve tasks like ‘go pick up my laptop from my desk,’ you’d better be able to actually pick up that laptop.”

Therefore, now Habitat lets users add objects to rooms, apply forces to those objects, check for collisions and so on. After all, there’s more to real life than disembodied gliding around a frictionless 3D construct.
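To make that concrete, here’s a deliberately hypothetical sketch of what an interactive scene interface along those lines could look like. The Scene and Object3D names, the toy physics and the collision check are all invented for illustration and are not the actual Habitat API.

```python
from dataclasses import dataclass, field

@dataclass
class Object3D:
    name: str
    position: tuple                    # (x, y, z) in meters
    velocity: tuple = (0.0, 0.0, 0.0)

@dataclass
class Scene:
    objects: list = field(default_factory=list)

    def add_object(self, obj: Object3D) -> None:
        self.objects.append(obj)

    def apply_force(self, name: str, force: tuple, dt: float = 0.1) -> None:
        # Toy physics: a force nudges the named object's velocity, then position.
        for obj in self.objects:
            if obj.name == name:
                obj.velocity = tuple(v + f * dt for v, f in zip(obj.velocity, force))
                obj.position = tuple(p + v * dt for p, v in zip(obj.position, obj.velocity))

    def check_collisions(self, radius: float = 0.5) -> list:
        # Naive pairwise distance check between every pair of objects.
        hits = []
        for i, a in enumerate(self.objects):
            for b in self.objects[i + 1:]:
                dist = sum((pa - pb) ** 2 for pa, pb in zip(a.position, b.position)) ** 0.5
                if dist < radius:
                    hits.append((a.name, b.name))
        return hits

scene = Scene()
scene.add_object(Object3D("laptop", (1.0, 0.75, 0.0)))
scene.add_object(Object3D("desk", (1.0, 0.70, 0.0)))
scene.apply_force("laptop", (0.0, 2.0, 0.0))   # give the laptop a shove upward
print(scene.check_collisions())                 # the two objects still overlap
```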

The improvements should make Habitat a more robust platform for experimentation, and will also make it possible for agents trained in it to directly transfer their learning to the real world — something the team has already begun work on and will publish a paper on soon.

Nebia’s co-founder talks about finding product/market fit

Finding the right product/market fit is challenging for any company, but it’s just a little harder for hardware startups.

I recently visited the San Francisco offices of Nebia to chat with co-founder and CEO Philip Winter, whose eco-friendly hardware startup has received funding from Apple CEO Tim Cook, former Google CEO Eric Schmidt and Fitbit CEO James Park. After checking out the company’s latest shower head, we eased into a discussion about the opportunities and challenges facing hardware startups in Silicon Valley today.

TechCrunch: What’s so hard about hardware in 2020?

Philip Winter: The hardware landscape was, at one point, super-hot, at least in Silicon Valley. I would say like three or four years ago. A lot of companies came out with breakout products and a lot of them disappeared over the years since then. A lot of them are our peers — it’s a fairly small community.