More Americans Are On Facebook Than Have A Passport

To celebrate the fact that my vacation during the last two weeks of August has been officially confirmed (!), I am posting the most massive infographic I have ever seen: “The Social Travel Revolution” brought to you by the folks at still-in-beta travel startup Tripl.

Most shocking statistic: 50% of all Americans are on Facebook (155 million) while only 37% of Americans have a passport (115 million). To its credit, the Facebook onboarding process is a lot more streamlined.


VideoInbox, Another Google/Slide Production, Brings Viral Videos To Your Inbox

We’ve come across the latest in Slide’s series of projects developed within Google, VideoInbox – a combination daily newsletter/Facebook app that basically centers around the viewing, sharing and cataloguing of viral videos (proof that it’s from Slide here). Sign up for VideoInbox with Facebook Connect and you’ll get a daily email with “hand selected” viral YouTube videos like “Slow Loris With a Tiny Umbrella,” “Rubik’s Cube Robot Is Smarter Than You” or “Bollywood Pizza Hut.”

Again exhibiting the autonomy we’ve now come to expect from the Google-owned Slide, the app uses, amazingly enough, the Facebook API to let you share videos with individual friends on Facebook or post them to your Facebook Wall. A Twitter button is also present, but its OAuth integration doesn’t appear to be implemented yet. The app also lets you watch the top 5 viral videos from yesterday, as well as “Favorite” videos for watching later.

While VideoInbox is still very much a work in progress, it’s kind of delightful despite its rough design. I mean, I am so lucky to have had the experience of “Accidental Convertible” added to my life, and yes, I just shared it with a Facebook friend I thought might like it.

Slide has been super productive since Google acquired it for $182 million back in August, coming out with a series of iOS apps in recent months, including Photovine, Pool Party and the group messaging app Disco. Prizes.org, a Slide-backed platform that lets you create contests for money, also leans heavily on Facebook Connect, just like VideoInbox.

However, it’s still unclear how Slide’s churn of products is contributing to Google’s overall ambitions and strategy. Also: Why aren’t they formally pitching the tech press with this stuff? Honestly, some of it is actually pretty cool. And it’s getting to the point where it’s hard to keep track of them all.


Obvious Already Ramping Up With Two New Founding Team Hires

Back in January of 2009, we noted that a “superstar team” was about to launch in the MMO space, with a startup called Ohai. A few weeks ago, Ohai was sold, as VentureBeat’s Dean Takahashi first reported. And at least two of those rockstars have now moved on. Susan Wu and Don Neufeld are the newest members of The Obvious Corporation, the idea incubator that was just re-started by the former Twitter guys, Evan Williams, Biz Stone, and Jason Goldman.

Stone makes the announcement in a post today on the Obvious blog. “The most important part of creating this work culture and building these meaningful products is people — but not just any people. People that are often smarter than us, different from us, passionate like us, and dedicated to the idea that the whole is greater than the sum of its parts,” he writes, stating that Wu and Neufeld, employees number four and five at Obvious, are those kinds of people.

Like everything else with the re-launch of Obvious, this move also extends from the past. Stone writes:

Many years ago, when Ev and I were working on Odeo, we met Susan as part of Charles River Ventures, and we knew then that we wanted to work with her. We know Susan to be incredibly smart, talented, thoughtful, and driven to make a lasting, positive impact on the world. Through Susan, we met Don and quickly realized he was a rare sort of affable technical genius—an obvious fit!

They sure love those obvious plays on words.

Stone goes on to note that while both most recently worked in the gaming space (with Ohai), Wu and Neufeld bring a range of knowledge. This seems to imply that whatever Obvious is building right now, it won’t be in the gaming space.

The situation surrounding the Ohai exit is still a bit odd. While the company has been sold, at first the buyer was unknown. Then, in a separate story, Takahashi reported that the buyer was EA. Then Ohai denied this. Then they said they were “in the process of completing a transaction”. Then Takahashi heard that EA had interviewed Ohai employees and did not make a purchase at that time.

Okay, that was 11 days ago, and now most (if not all) of the founding team is gone. Something clearly happened. Regardless, Wu and Neufeld are now with Obvious.

Meanwhile, while not much is known about what Obvious will actually work on, we hear they already have a first product in mind and have started on it. More to come, I’m sure.


Doubts About Lytro’s “Focus Later” Camera

I’ve been meaning to address this Lytro thing since it hit a few weeks ago. I wrote about omnifocus cameras as far back as 2008, and more recently in 2010. At the time I was more interested in the science behind the systems, and it appears that Lytro uses a different method than either of those.

Lytro has been close-lipped about their camera, to say the least, though that’s understandable when your entire business revolves around proprietary hardware and processes. Some of it can be derived from Lytro founder Ren Ng’s dissertation (which is both interesting and readable), but in the meantime it remains to be seen whether these “living pictures” are truly compelling or something that will be forgotten instantly by consumers. A recent fashion shoot with model Coco Rocha, the first in-vivo demonstration of the device, is dubious evidence at best.

A prototype camera was loaned for an afternoon to photographer Eric Chen, and while the hardware itself has been carefully edited or blurred out of the making-of video, it’s clear that the device is no larger than a regular point-and-shoot, and it seems to function more or less normally, with an LCD of some sort on the back, and the usual framing techniques. No tripod required, etc. It’s worth noting that they did this in broad daylight with a gold reflector for lighting, so low light capability isn’t really addressed — but I’m getting ahead of myself.

Speaking from the perspective of a tech writer and someone interested in cameras, optics, and this sort of thing in general, I have to say the technology is absolutely amazing. But from the perspective of a photographer, I’m troubled. To start with, a large portion of the photography process has been removed — and not simply a technical part, but a creative part. There’s a reason focus is called focus and not something like “optical optimum” or “sharpness.” Focus is about making a decision as a photographer about what you’re taking a picture of. It’s clear that Ng is not of the same opinion: he describes focusing as “a chore,” and believes removing it simplifies the process. In a way, it does — the way hot dogs simplify meat. Without focus, it’s just the record of a bunch of photons. And saying it’s a revolution in photography is like saying dioramas are a revolution in sculpture.

I’m also concerned about image quality. The camera seems to be fundamentally limited to a low resolution — and by resolution I mean true definition, not just pixel count. I say fundamentally because of the way the device works. Let me get technical here for a second, though there’s a good chance I’m wrong in the particulars.

The way the device works is more or less the way I imagined it did before I read Ng’s dissertation. To be brief, the image from the main lens is broken up by a microlens array over the image sensor, and by analyzing (a complex and elegant process) how the light enters various pixel wells underneath the many microlenses (which each see a slightly different picture due to their different placements), a depth map is created along with the color and luminance maps that make up traditional digital images. Afterwards, an image can be rendered with only the objects at a selected depth level rendered in maximum clarity. The rest is shown with increasing blur, probably according to some standard curve governing depth of field falloff.
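To make that refocusing step concrete, here is a toy shift-and-add sketch in Python. This is my own simplification, not Lytro’s actual pipeline: the `refocus` function, the 4D array layout, and the integer pixel shifts are all illustrative assumptions. The idea is that each sub-aperture view is translated in proportion to its offset from the aperture center and the results are averaged, so objects at the matching depth line up sharply while everything else smears into blur.

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add refocusing over sub-aperture views.

    light_field: array of shape (U, V, H, W), one small image per
    sub-aperture position (u, v). alpha picks the depth of the
    synthetic focal plane; alpha = 0 leaves the original focus.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Translate each view in proportion to its offset from
            # the aperture center, then accumulate it into the sum.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

In the real system the views come from the microlens array and the shifts are sub-pixel, but the principle is the same: refocusing after the fact is just a re-summation of the rays that were already captured.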

It should be immediately apparent that an enormous amount of detail is lost, not just because you are interposing an extra optical element between the light and the sensor (one which must simultaneously be extremely low in faults and yet is very difficult to make so), but also because the system fundamentally relies on creating semi-redundant data to be compared against one another, meaning pixels yield less data for a final image than they would in a traditional system. They are of course providing information of a different kind, but as far as producing a sharp, accurate image goes, they are doing less. Ng acknowledges this in his paper, and the reduction of a 16-megapixel sensor to a 296×296 image (a reduction of over 99% of the pixel count) in the prototype is testament to this.
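For scale, the arithmetic on those prototype numbers (the 16-megapixel sensor and 296×296 output come from Ng’s dissertation; the rest is plain division) is easy to check:

```python
# Back-of-the-envelope check on the prototype's resolution loss.
sensor_pixels = 16_000_000         # 16-megapixel sensor
output_pixels = 296 * 296          # final rendered image: 87,616 pixels
fraction_kept = output_pixels / sensor_pixels
print(f"{fraction_kept:.2%} of the pixel count survives")  # 0.55%
```

Barely half a percent of the captured pixels survive into the rendered image; the rest is spent buying the depth information.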

The process has no doubt been improved along the lines he suggests are possible: square pixels have likely been replaced with hexagonal, the lenses and pixel widths made complementary, and so on. But the limitation still means trouble, especially on the microscopic sensors being deployed in camera phones and compact point-and-shoots. I’ve complained before that these micro-cameras already have terrible image quality, smearing, noise, limited exposure options, and so on. The Lytro approach solves some of these problems and exacerbates others. On the whole downsampling might be an improvement, now that I think of it (the resolutions of cheap cameras exceed their resolving power immensely), but I’m worried that the cheap lenses and small size will limit Lytro’s ability to make that image as versatile as their samples — at least, for a decent price. There’s a whole chapter in Ng’s paper about correcting for micro-optical aberrations, though, so it’s not like they’re unaware of this issue. I’m also worried about the quality of the blur or bokeh, but that’s an artistic scruple unlikely to be shared by casual shooters.

The limitation of the aperture to a single opening simplifies the mechanics but also leaves control of the image to ISO and exposure length. These are both especially limited in smaller sensors, since the tiny, densely-packed photosensors can’t be relied on for high ISOs, and consequently the exposure times tend to be longer than is practical for handheld shots. Can the Lytro camera possibly gain back in post-processing what it loses in initial definition?

Lastly, and this is more of a question, I’m wondering whether these images can be made to be all the way in focus, the way a narrow aperture would show it. My guess is no; there’s a section in the paper on extending the depth of field, but I’m not sure the effect will stand scrutiny in normal-sized images. It seems to me (though I may be mistaken) that the optical inconsistencies (which, to be fair, generate parallax data and enable the 3D effect) between the different “exposures” mean that only slices can be shown at a time, or at the very least there are limitations to which slices can be selected. The fixed aperture may also put a floor on how narrow your depth of field can be. Could the effect achieved in this picture be replicated, for instance? Or would I have been unable to isolate just that quarter-inch slice of the world?

All right, I’m done being technical. My simplified objections are two in number: first, is it really possible to reliably make decent photos with this kind of camera, as it’s intended to be implemented (i.e. as an affordable compact camera)? And second, is it really adding something that people will find worthwhile?

As to the first: designing and launching a device is no joke, and I wonder whether Ng, coming from an academic background, is prepared for the harsh realities of product development. Will the team be able to make the compromises necessary to bring it to shelves, and will those compromises harm the device? They’re a smart, driven group so I don’t want to underestimate them, but what they’re attempting really is a technical feat. Distribution and presentation of these photos will have to be streamlined as well. When you think about it, a ton of the “living photo” is junk data, with the “wrong” focus or none at all. Storage space isn’t so much a problem these days, but it’s still something that needs to be looked at.

The second gives me more pause. As a photographer I’m strangely unexcited by the ostensibly revolutionary ability to change the focus. The fashion shoot, a professional production, leaves me cold. The “living photos” seem lifeless to me because they lack artistic direction. I’m afraid that people will find that most photos they want to take are in fact of the traditional type, because the opportunities presented by multiple focus points are simply few and far between. Ng thinks it simplifies the picture-taking process, but it really doesn’t. It removes the need to focus, but the problem is that we, as human beings, focus. Usually on either one thing or the whole scene. Lytro photos don’t seem to capture either of those things. They present the information from a visual experience in a way that is unfamiliar and unnatural except in very specific circumstances. A “focused” Lytro photo will never be as good as its equivalent from a traditional camera, and a “whole scene” view presents no more than you would see if the camera was stopped down. Like the compound insect eye it partially mimics, it’s amazing that it works in the first place, and its foreignness by its nature makes it intriguing, but I wouldn’t call it a step up.

“Gimmick” is far too harsh a word to use on a truly innovative and exciting technology such as Lytro’s. But I fear that will be the perception when the tools they’ve created are finally put to use. It’s new, and it’s powerful, yes, but is it something people will actually want to use? I think that, like so many high-tech toys these days, it’s more fun in theory than it is in practice.

That’s just my opinion, though. Whether I’m right or wrong will of course be determined later this year, when Lytro’s device is actually delivered, assuming they ship on time. We’ll be sure to update then (if not before; I have a feeling Ng may want to respond to this article) and get our own hands-on impressions of this interesting device.


How MySpace Tom May Have Inadvertently Triggered The Google/Facebook War

Gotta love Tom Anderson. Newly reinvigorated by the launch of Google+, “MySpace Tom” has become a social power user (and regular TechCrunch contributor!). As a man at the forefront of the early days of the social wars, he’s obviously full of information. And today he decided to share a bit more. This time, it’s a fascinating story about the time Microsoft, not Google, was about to land the MySpace ad deal.

In a comment on (where else) Google+, Anderson tells the story in response to my most recent post about the Google/Facebook war before Google+. Based on a Quora thread, I noted that the 2006 search/ad deal Google signed with MySpace (Fox Interactive Media) may have been the true kick-off of hostilities between Google and Facebook. As a result, Microsoft signed Facebook — which later led to the famous investment.

But as Anderson tells it, it almost didn’t happen that way. In fact, it was Microsoft that was just about to sign the MySpace search/ad deal. “The reason we ended up going with Google search is because I ran into John Doerr and told him we were about to close with Microsoft. Within an hour, Google brass helicoptered out to a News Corp. shindig at Pebble Beach,” Anderson says, noting that he wasn’t allowed in the closed-door meeting where negotiations took place. This resulted in the billion-dollar deal.

“The terms were so screwed up, that it had a big impact (a negative one) on MySpace’s future,” Anderson writes. “Things would have been quite different if that deal hadn’t happened,” he goes on to say.

A few more awesome things about this info:

1) Again, Anderson is leaving this comment on Google+ — the new service by the company whose ad deal way back when helped seal the fate of his company.

2) Anderson says this was actually the first and only time he had ever met Doerr.

3) Vic Gundotra, now the man in charge of the Google+ project, was on the other side at the time, trying to get the ad deal done for Microsoft (Gundotra left Microsoft for Google shortly before the MySpace deal was finalized). This is how Anderson met Gundotra, in fact.

4) Anderson says he had forgotten all of this info until my post.

Indulge me here for a second.

Just think about what would have happened had Anderson not run into Doerr. Microsoft would likely have closed the MySpace deal, perhaps with better terms for MySpace. Google, presumably, would then have gone after a similar deal with Facebook. This perhaps would have given them a leg up a year later to make a Facebook investment, instead of Microsoft.

If my wild speculation holds, the Internet would have been a very different place right now. It may have been a place for Google and Facebook to be friends. In a relationship, even.


Google Acquires Facial Recognition Software Company PittPatt

Google has just acquired facial recognition software company PittPatt (Pittsburgh Pattern Recognition), according to an announcement on the startup’s site.

PittPatt, a project spawned from Carnegie Mellon University, develops a facial recognition technology that can match people across photos, videos, and more. The company has created a number of algorithms in face detection, face tracking and face recognition. PittPatt’s face detection and tracking SDK locates human faces in photographs and tracks the motion of human faces in video.

Here’s the notice PittPatt has up on its site:

“Joining Google is the next thrilling step in a journey that began with research at Carnegie Mellon University’s Robotics Institute in the 1990s and continued with the launching of Pittsburgh Pattern Recognition (PittPatt) in 2004. We’ve worked hard to advance the research and technology in many important ways and have seen our technology come to life in some very interesting products. At Google, computer vision technology is already at the core of many existing products (such as Image Search, YouTube, Picasa, and Goggles), so it’s a natural fit to join Google and bring the benefits of our research and technology to a wider audience. We will continue to tap the potential of computer vision in applications that range from simple photo organization to complex video and mobile applications.”

Google has reportedly been exploring adding facial recognition to its products (i.e. Google Goggles) more seriously but has held back because of privacy concerns. As the company told Search Engine Land in March, Google wouldn’t put out facial recognition in a mobile app unless there were very strict privacy controls in place.

But in May, Google Chairman Eric Schmidt said the company is “unlikely to employ facial recognition programs.”

Google issued this statement confirming the acquisition:

“The Pittsburgh Pattern Recognition team has developed innovative technology in the area of pattern recognition and computer vision. We think their research and technology can benefit our users in many ways, and we look forward to working with them.”


Long Before Google+, Google Declared War On Facebook With OpenSocial

Google and Facebook are at war. We’ve known this for a while. Of course, neither side will admit to it, but they are. Winner takes the Internet.

After months of Facebook owning Google in just about every way imaginable (well, except search, of course — but the rise of social is slowly making search less important), Google has finally been able to strike back with Google+. And now a full-on social sharing race is getting underway. It may not be a winner-take-all race, but it will eventually be winner-take-most. We simply can’t share everything across 5 or even 3 networks. Google is fighting an uphill battle in this regard, but at least they finally have a weapon.

But how did we get to this point where the two biggest names on the Internet are involved in a full-scale war? It all goes back to 2007, and perhaps even 2006.

This question was recently posed on Quora: What specific actions led to the massive rift between Facebook and Google? No less than Adam D’Angelo, the co-founder of Quora and very early Facebook employee, chimed in.

“To me, the biggest increase in tension was Google’s launch of OpenSocial in 2007. After seeing the success of Facebook Platform, Google went and got all the other social networks committed to OpenSocial under NDA without telling Facebook, then broke the news to Facebook and tried to force them to participate,” D’Angelo writes, pointing to this TechCrunch post from the time.

Facebook, as you might expect, did not take kindly to that action. “This was particularly offensive to Facebook because Google had no direct interest in social networking at the time and Facebook Platform had no direct impact on Google’s search or ads businesses. They didn’t care about Orkut and they didn’t build any applications,” D’Angelo notes.

A few months later, Facebook banned Google Friend Connect (a part of OpenSocial), further escalating matters. Facebook then went on to dominate social (remember, MySpace was still technically the leader at that time). On top of Platform, we got Connect, Open Graph, the Like button, etc. Facebook seized control, and we began to enter the Age of Facebook.

We’ll see if Google+ can stop that. Certainly, no one talks about OpenSocial or Friend Connect any more.

D’Angelo says that he can’t remember “any adversarial actions of that magnitude” before the OpenSocial announcement. And he says that before that, there was just the regular competition over engineering hires (which continues today). But there may have been something right before OpenSocial that triggered it.

As another Facebook employee (though not at the time), Jinghao Yan, remembers, the Microsoft investment in Facebook may have also contributed heavily to the increase in tensions. While talks had been going on for weeks, if not months, on October 24, 2007 — just a week before the OpenSocial announcement — Facebook formally accepted a $240 million investment from Microsoft for less than 2 percent of the social network.

Humorously, at the time, people were all up-in-arms over the $15 billion valuation this gave Facebook. Now it looks like one of the smarter investments Microsoft has made in recent years — though it was clearly always more about the strategic positioning. And that’s the key. Microsoft outbid Google for the right to secure this investment (and thus, strategic partnership) in the rising social network.

“I feel that this event is what made Google so antagonistic against Facebook–because it actively rejected Google’s embrace for Microsoft’s purse. As a result, it labeled Facebook more as a threat to its online dominance than as a potential partner,” Yan writes.

Beyond that, another Facebooker, Yishan Wong, points out that the 2006 advertising deal Facebook signed with Microsoft instead of Google may have kicked all of this off. And why did Microsoft go so hard after Facebook for this deal? Because earlier that same month, Google signed a similar $1 billion deal with Fox Interactive Media to run the ads on MySpace.

In other words, Google made a bet — a good one at the time, but one that was potentially very costly long-term.

And now the two sides are giants. At war.

More: How MySpace Tom May Have Inadvertently Triggered The Google/Facebook War


Festo’s SmartBird Robot Flies Through The Air At TED

You may recall the SmartBird, a robot we saw back in March that mimics the flight of birds, flapping its wings like the real thing. The video we saw then was a bit too edited to get a feel for the bot, but luckily one of the inventors was invited to do a TED talk, and of course they had to set the thing free in the auditorium.

Check out the video:

Markus Fischer, the speaker, describes a few finer points and demonstrates the simplicity of their motor and wing system on a skeletal model. It’s really very cool. Unfortunately they are likely limited by the capacity of the batteries they can take on board, which, being heavy, increase the power required to stay aloft, which means more battery capacity is needed… and so on. The bird flies for around 50 seconds in the demonstration, but much longer in these other videos (outside, with curious real birds).

I’m curious as to whether they’ve considered alternative energy sources; they seem to be well provided with space inside the bird chassis, and a strong but lightweight coil or spring might provide a better energy-to-weight ratio. Batteries are optimized for volume, not weight, so if there’s room to expand, they can take a hit on joules per cm³ and shave a few grams off the total.

[via Reddit]


Founder Office Hours With Chris Dixon And Josh Kopelman: Profitably

Today, we are trying a special edition of Founder Stories that we are calling Founder Office Hours. Inspired by Paul Graham’s Office Hours onstage at our last TechCrunch Disrupt, we brought together a group of startup founders in our NYC studio to get feedback and advice. Joining regular host Chris Dixon is Josh Kopelman, managing partner of First Round Capital.

In this first video above, Adam Neary, founder of Profitably, asks whether he should charge for a new product or go freemium. Profitably is a business dashboard for small businesses that pulls accounting data from QuickBooks and helps visualize it. The company is developing a new product around business planning and modeling that traditionally is only available to larger corporations. Should he charge a monthly fee for the new product, or go freemium—give it away for free and upsell to premium features?

It depends on what his immediate goals are: getting big or getting profitable. “Customer acquisition for small- to mid-sized businesses is the hardest thing,” notes Kopelman. “You have to market to them as consumers.” If the product has broad appeal, you can consider giving it away for free as a way to subsidize the cost of acquiring new customers. But you need to have something to upsell. “You don’t want to have too much free and not enough -emium,” he says.

What about building a white-label version for a large customer as a way to hit quarterly targets? Both Dixon and Kopelman agree that if Neary wants to raise more money down the line, investors are more likely to put a higher value on the business if it has a direct relationship with the end customer.

Watch previous Founder Stories here.


Enhanced eBooks: Valuable Sales Tool or Just a Gimmick? (TCTV)

New technologies usually allow for more. In the move from print media to the Web the “more” was comments, slideshows and of course rapid-fire content. In the move from VHS to DVDs the “more” was all sorts of behind the scenes footage and director commentaries. In the move from Blackberries to iPhones, the “more” was a wonderland of new apps and a browser experience that didn’t make your eyes bleed.

In a world of eBook readers, more is starting to creep in, but it’s unclear whether this is a more that will actually sell books, or a more that only a handful of superfans care about. A lot of people still attach a high-art aesthetic to books and decry anything that makes their content more accessible to readers. Case in point: a gorgeous version of Alice in Wonderland came out on the iPad, and some parents were furious that the animated images took away from kids having to imagine, say, Alice growing and shrinking on their own.

Novelist Kitty Pilgrim is betting that more is more with her new book The Explorer’s Code. A longtime broadcast journalist, she’s included several highly produced videos to show the real places that inspired her fictional thriller. But does that take something away from the magic of fiction? We caught up with Pilgrim over Skype to discuss.


Leaked LG Roadmap Points To Five Android Smartphones And One Mango Fantasy

The only thing better than a leak is six leaks, which is exactly what we have for you today. Bundled nicely in the form of a 2011 LG Roadmap (discovered by PocketNow), five Android smartphones and one Mango-powered handset have found their way to the web.

Along with the recently announced Optimus Pro and Optimus Net, LG has quite a bit more in store for the rest of the year. However, we don’t expect that this is the entirety of LG’s 2011 smartphone lineup, so if you can’t find something you like here, fret not, more are sure to follow.

The second-half flagship has been dubbed the LG Prada K2. We’re not sure what “Prada-inspired texture on the casing” means, but other specs on this fashion-forward phone are pretty impressive: Android 2.3 Gingerbread, dual-core processing, 4.3-inch Nova LCD display (the power-saving extra-bright screen seen on the Optimus Black), 8-megapixel rear shooter, 1.3-megapixel front-facing camera, and 16GB of internal storage all wrapped up in an 8.8mm thin handset.

Other roadmap highlights include the LG Univa, successor to the Optimus One, and a mysterious Windows Phone 7 handset called the LG Fantasy. Little is known about either of these handsets, although it is expected that the Univa will launch alongside the Optimus Net. Despite the popularity of the Optimus One, I have a sneaking suspicion that the upgrade to an 800MHz processor, 3.5-inch HVGA display, and five-megapixel camera may put this phone ahead of big brother in initial sales.

The Fantasy, on the other hand, should hit shelves in Q4, claims PocketNow, with Windows Phone 7.5 Mango in tow. The leaked roadmap also points to another upper-midrange smartphone called the Victor and a low-end Android handset called the LG E2, which you can check out in PocketNow’s coverage.

[via Unwired View]


Porsche’s Sport And Rennsport Bikes, For The Car-Loving Cyclist

We’ve already seen bikes from both Audi and McLaren in the last year, so I suppose it’s no surprise to see competition from Porsche. The German sports car giant has actually had a bike for quite a while now, but I believe the new Sport and Rennsport are their first attempts at road-going bikes rather than the mountain variety.

These “Driver’s Selection” bikes are of the refined and sexy type, taking more after Audi’s wood-framed models than McLaren’s highly-tuned racing bikes. The aluminum Sport or S has an 11-gear belt drive and weighs 12kg (~26 lbs), which is light but… not that light. The Rennsport (RS) is much lighter at 9kg, due no doubt to its carbon frame and forks. It’s got a 20-gear Shimano derailleur with a traditional chain, and comes with clip-in pedals. Both have Magura ceramic disc brakes.

Nice bikes to be sure, but let’s talk turkey. What’s the damage on these things? The Sport costs a massive €3300 (~$4750) and will be available in September. The Rennsport… well. Got a spare €5900? That’s $8500 of your puny American dollars. What, you thought Porsche was going downmarket?

I’ll tell you, though, if someone put ten grand in my pocket and a gun to my head and told me to buy one of these luxury bikes, I’d probably go with that McLaren. I’d be too afraid to ride it in the city, but I think I’d prefer it over these status symbols, though I have no doubt they’d be nice rides as well.

[via Born Rich]


Apple’s iOS 5 Beta 4 Update Now Available, First To Be Released Over-The-Air

It’s been just 11 days since Apple released Beta 3 of iOS 5 to developers, but a new Beta is already up in the air — literally. iOS 5 Beta 4 has just gone live, and it appears to be the first update to support installation via iOS 5’s new over-the-air update system.

We can’t actually get the update to work over the air right now, but the patch notes specifically mention it as an option. To quote:

“If you are doing a OTA software update from beta 3 to beta 4, you will need to re-sync your photos with iTunes.”

If you’re not already on the iOS 5 Developer Beta to give it a shot yourself, you’re not missing out on much.

Fortunately, as shown in the image below, the update can still be downloaded manually and installed through iTunes. It’s not entirely clear whether Apple plans to flip the switch on the OTA update today (just a day after Lion, which is distributed exclusively through the Mac App Store — way to stress test those new cloud servers, Apple!), but it certainly looks like it shouldn’t be long.

Update: Readers in comments and folks on Twitter are reporting that they got the OTA update to work. Here’s what it looks like when it actually, you know, works (Thanks @FungBlog)!

So What’s New?

As any late-stage Beta should be, it’s mostly bug fixes and little tweaks — but here are some of the bigger changes we’re hearing about:

  • The aforementioned OTA installation support
  • Video content in all applications and websites should now be AirPlay-enabled by default
  • Wireless syncing now works with Windows

This list will be updated as new reports come in.


For The Geek Who Has Everything: A Gold-Plated Atari 2600

One thing most 30-something people in tech have in common is video gaming nostalgia. Generation X (and Generation i) can go on for hours discussing the merits of our favorite Nintendo games, our programming experience in school, and of course our beloved Ataris. Sure, there were C64s and Amigas and such, but Atari’s 2600 and its successors were truly groundbreaking in the gaming world.

You can still find a few here and there, working even, but to be honest the machine is a little more humble-looking than my memory has it. But Urchin Associates had the brilliant idea to preserve this piece of computing history forever… in 24-karat gold.

Look at it. Is it not beautiful? Now, whether it works or not, I’m not prepared to say. That gold-plated cartridge (I wonder what game it is?) looks removable, and I doubt they plated over the I/O ports, so unless the system they used was bricked to begin with, it probably works just fine. The controllers, however, may have lost a little functionality in the gilding process.

The whereabouts of this art project are unknown, and no, I don’t think you can buy one. But it’s nice to know that it’s out there somewhere — like El Dorado, or Bigfoot.

[via Technabob]


Keen On: Why Google Is Now A Social Company (TCTV)

It was a first. Yesterday, we were fortunate to welcome the two principal architects of Google+, Vic Gundotra (VP Social) and Bradley Horowitz (VP Product), to the TechCrunch TV studio in San Francisco for an extended interview about what they call their “project”.

So what is Google+? As Gundotra told me yesterday, it is an attempt to “understand people” and to make human relationships the heart of the Google experience. Both Horowitz and Gundotra acknowledge that this is a major project, something that may, in the future, redefine the company. This unGoogle-like goal to, as Horowitz said, put “people first” may well, in the long run, transform Google from an algorithmic company to a social one.

Gundotra and Horowitz believe that today’s social web has only scratched the surface of how to make the Internet into a truly human experience. Google+ is their attempt to transform Google into the leading player of the social age. It’s a massively important project, one that will define the company’s significance in the Web 3.0 age.

Thanks to our readers for sending in so many questions. Many of them asked when Google+ was going to add a certain feature, but to each of these the oracular Horowitz and Gundotra would only say “in the future.” That question-and-non-answer routine was going to get old pretty quickly, so I mostly avoided that kind of (non)conversation. Many comments were also very specific questions about functionality that weren’t really appropriate for this kind of broad interview. That said, the Google team was happy to hear all the comments and is reviewing the feedback we generated.