Village Voice Media Execs Acquire The Company’s Famed Alt Weeklies, Form New Holding Company


A group of long-running alternative weekly newspapers is changing hands. Village Voice Media Holdings — whose titles include the LA Weekly, Westword, and, yes, the Village Voice — is selling its publications (and their associated web properties) to a new holding company, the similarly named Voice Media Group.

The financial terms are not being disclosed. The deal includes all 13 of VVMH’s alt weeklies, but not the online classifieds site Backpage.com, which will operate as its own company.

When discussing the acquisition, Voice Media Group CEO Scott Tobias emphasizes continuity. Basically, the company’s management is buying the organization from the current owners, and there are no plans for a big shake-up. Tobias was previously the COO at VVMH, the new company’s executive editor Christine Brennan was formerly the executive managing editor, and the new chief financial officer Jeff Mars served as vice president of financial operations. And even though the company’s headquarters are moving from Phoenix to Denver, Tobias says the team has been running day-to-day operations in Denver for some time.

Founded in 1955 by Dan Wolf, Ed Fancher, and Norman Mailer, The Village Voice claims to be the first and largest alternative weekly newspaper. In the 50-plus years since its founding, the paper has become famous for its investigative journalism, as well as its criticism and reviews. (For a good stretch of the 1990s and 2000s, I read both The Voice and the LA Weekly religiously — here’s a letter to the editor to prove it — and the LA Weekly is where my boss Alexia Tsotsis got her start in tech journalism.)

The Voice’s parent company was acquired by New Times Media in 2005, at which point New Times changed its name to Village Voice Media. The years since then haven’t been easy for newspaper companies, and VVMH has had its bumps. In fact, the Voice’s editor Tony Ortega and music editor Maura Johnston both left earlier this month, prompting a former reporter to describe the paper as one that has been “winding down for a while,” and to criticize VVMH for its “generic” approach to coverage.

Asked why he would want to get into the news business, particularly the print news business, Tobias notes, “We’re not a daily newspaper.” He says Voice Media Group’s strength, from an advertising perspective, is the fact that it combines “a national footprint with hyperlocal reach.” The company also plans to grow aggressively on the digital side, with several major launches coming later this year.

The new company’s publications supposedly reach 7 million monthly print readers and 16 million unique desktop visitors on the Web, as well as 1.2 million email subscribers. Its mobile sites see 5.7 million monthly visits. And it also operates a division called VMG National, which sells advertising for 56 partner sites and publications, and holds more than 40 food, music, and arts events per year.


Robotics Revolution: The Robots Are Just Getting Warmed Up


Editor’s note: Saad Khan is a seeker of bad-assness. He found it at Evolution Robotics (not to mention Blekko, Zaarly, Jobvite, Luminate, and LendingClub). He’s a Partner at CMEA Capital. And he’s doing the robot today. Follow him on Twitter and at SaadWired.com.

Last week, iRobot bought Evolution Robotics, maker of the Mint floor cleaner. The implications of the hot-off-the-presses announcement are still forming, but one thing is clear: The robot era has begun, and no one is safe — not even from their short arms and incessant nagging.

As someone who’s had a front seat on the journey of Evolution Robotics, I can say it’s a proud day, and one we’ll always remember. Like being a parent the day SkyNet first went to school (August 29th, 1997, I believe).

I can’t say that it’s been a strictly teleo-reactive ride (*cough* Nils Nilsson). When I first met Evolution Robotics back in 2008, it was on a tour hosted by Bill Gross at the Idealab offices. It was eye-opening – autonomous RC cars, phone cameras that could recognize anything, cash readers for the blind. A cornucopia of geekiness. While we were sleeping, robotics had quietly crossed over from heady science fiction to mass-market consumer value.

The inflection had happened silently and the results came in a form that didn’t look like C-3PO or the Sony Aibo. But it was fundamentally robotics: AI, computer vision, navigation, sensors, the edge where digital goes analogue — as my former Stanford AI profs lectured, it was robotics through and through.

In Paolo Pirjanian and the Evolution Robotics crew we had a bad-ass team of Armenian cum Italian scientists who had immigrated to the land of Caltech and the Jet Propulsion Laboratory (JPL). Some of them worked on the Mars Rover, and had research interests that included helping the blind see through their ears. Cerebral superheroes, they could do anything. So we invested. And then built sick floor cleaners (more on that story here).

While that was Evolution Robotics’ path, the world of robotics is just getting warmed up. Robots are already helping us find the merchandise we buy from Amazon, thanks to Kiva Systems; helping us with agricultural labor, thanks to Harvest Automation; and augmenting medical procedures, thanks to Intuitive Surgical. The guys at Y Combinator are getting in the game. My boy Rob Nail, who is president at Singularity U, even wants robotic faculty.

Perhaps closer to home, each one of us is carrying around the guts of an increasingly sophisticated robot in our pockets right now. Take the new iPhone 5 – accelerometers, GPS, and light and proximity sensors. It’s a robot we can talk to (“Hello Siri”) who recognizes our face, and knows where we are (GPS is just the beginning). For many of us, it’s what woke us up this morning. Mine sings me the Wham! anthem “Wake Me Up Before You Go-Go…”

The truth is, robots are already among us. They made it into our living rooms years ago. My Xbox Kinect recognizes my face and pairs it with my player profile. I turn on my Mint and wonder what it’s thinking every time it cleans my floor. I’ve even seen a strange car that practically drives itself down the 101.

Their makers tell us these robots were created to serve us. And based on the rising tide, they certainly seem right. But Isaac Asimov and a brooding Will Smith tell us otherwise. We know how this story ends.

Don’t believe me? Here they are, celebrating their victory right now.

[Images: Spectrum, AI Evolution, Transcend]


Biz Stone and Ev Williams On Why Founders Should Err On The Side Of Saying Too Much


Evan Williams and Biz Stone are probably still best known for their roles in co-founding Twitter, but what many people don’t realize is that Twitter was actually born as just a small project within a technology startup called The Obvious Corporation.

In recent months, Williams and Stone (along with Jason Goldman) have shifted a good deal of their focus back to Obvious, making it into a unique kind of company — part incubator, part investment vehicle, part idea lab — with the simple goal of “creating products that matter.” Branch, Medium, and Lift are just a few of the big-idea projects that have emerged from Obvious so far.

Williams and Stone swung by the Disrupt SF conference earlier this month, and it was great to sit down with them for a few minutes after their on-stage conversation to dig a bit more into what they’re working on lately.

They’re a thoughtful and engaging pair, so the whole conversation was, I think, well worth watching — and you can watch it all in the video embedded above. But I especially liked what they had to say about communicating with their co-founders and employees. At Obvious, Williams and Stone are known for sending long written missives in an almost stream-of-consciousness style about their general visions for the company. The reason they do this, Stone said, is that it’s better to say too much than to say too little. This bit starts at around 2:03:

“You can’t over-communicate. It’s just really important to share what’s on your mind with everybody, and to share with them where you’re at.”

Williams added:

“I think a lesson for founders is, it depends on the personality, but I try to make a point to do that because I’ve failed at that for a long time. People tell me I’m hard to read. But also, I just assume that people are thinking the same thing that I am, because we’re working on a project together. [I assume,] ‘You know where we’re going, right? Because we mentioned it one time, like three months ago, that we were going to do that thing. So, we’re going to still do that thing.’

And that doesn’t work, and especially it doesn’t work as your company gets bigger, because people have their own perceptions, and they see different things that you don’t see.”

And since Obvious’ projects have their own CEOs, I asked what one piece of advice they try to impart when it comes to leadership. Stone said:

“There are two pieces of advice. One is, be really passionate and emotionally invested in your project, or it won’t work. Don’t just do it because you think that other people will like it. You have to love it and want to use it yourself.

The other thing I think is, be really open and communicative with your co-founders about what exactly it is you want out of work and out of your life. Because otherwise, you’ll make assumptions about each other and you’ll be missing each other, and that will lead to a discordance.

It’s difficult to be super-duper open about everything you want out of your personal life and your work life. But at the same time, it is one of the most important things you can possibly do for the long-term health of your company.”


Here’s What Goes Into Making Google Maps: Will Apple Be Able To Recalculate?


Everywhere you turned last week, there was another story about iOS 6 Maps. Some feel like it’s a great new direction for Apple, but people like me feel like we’re left with an ugly experience that shouldn’t have been introduced to the public in its current state.

Yes, Google Maps was removed from iOS 6, but we’ve known that for quite a while now. What we didn’t know was that Apple would make no real improvements to its own offering from the second developers started tinkering with the OS until the day it was made public.

When I tried Apple Maps for the first time in the first developer build, it felt very unpolished and not well thought out. That’s rare for Apple, so I figured that things would get better. Sadly, they didn’t.

Google Maps has been a major player in the maps space since it launched almost eight years ago. I had no idea what went into making the product, or more importantly, keeping the product up to date. Luckily, the team let me take the same look at its processes that a few other publications recently got, and then some.

As you listen to the company speak, it reiterates its mission of “organizing the world’s information,” but it rarely talks about how that organization actually happens. I took a look and was pretty impressed. Is it something Apple can catch up to quickly? Let’s see, shall we?

Merging the virtual and the real world

When I sat down with Jack Menzel of Google’s Search team earlier, he walked me through how that product has developed, along with plans and ideas for the future. On Friday, I was able to do the very same thing with the Maps team. Both products are changing the way we interact with the physical world around us.

It all starts with a project called “Ground Truth,” the Google Maps team effort that takes all 1,300 sources of map data and merges them into the consumable product we see on the web and our mobile phones today. It’s anything but a simple process: it’s fantastically tricky, involved, complex and yet…full of common sense.

Imagine getting files of junk from every source of topographical information in the world and then having to normalize it all for your world-class system that services millions of users. It’s involved, multi-layered and pretty impressive.

I spoke with Michael Weiss-Malik, an Engineering Lead on the Google Maps product, and he showed me the system that it uses internally to munge all of this information together. It’s called Atlas, and it reminds me of Photoshop.

Every piece of information that the company gets turns into a “layer” of sorts, which can be dropped on top of what will become the completed map. Watching someone use the system is the same as watching a designer use Photoshop. There are keystrokes, shortcuts, and well…system issues from time to time. There are also many algorithms backing all of these activities up.
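To make the layering idea concrete, here’s a minimal sketch of how source layers might be stacked and composited by priority. The source names, priorities, and compose() helper are my own illustration of the concept, not Google’s actual Atlas internals.

```python
# Hypothetical sketch of layer-based map compositing, loosely modeled
# on the "layers" idea described above. All names and priorities are
# invented for illustration; Atlas itself is far more involved.

from dataclasses import dataclass

@dataclass
class Layer:
    name: str       # e.g. "licensed_basemap", "street_view_corrections"
    priority: int   # higher number = more trusted source
    features: dict  # feature id -> attributes contributed by this source

def compose(layers):
    """Merge layers into one map, letting higher-priority sources
    override lower-priority ones, attribute by attribute."""
    merged = {}
    for layer in sorted(layers, key=lambda l: l.priority):
        for feature_id, attrs in layer.features.items():
            merged.setdefault(feature_id, {}).update(attrs)
    return merged

base = Layer("licensed_basemap", 1,
             {"road_42": {"name": "Main St", "one_way": False}})
fixes = Layer("street_view_corrections", 2,
              {"road_42": {"one_way": True}})

print(compose([base, fixes]))
# {'road_42': {'name': 'Main St', 'one_way': True}}
```

The real pipeline obviously involves far more than a dictionary merge; the point is just that higher-trust sources, like Street View observations, override the base data they sit on top of.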

This is how Weiss-Malik summed up what Google Maps does:

If you want to make a map, what you do these days is collect as many data sources from providers as possible: geocodes, water bodies, parks. Things you don’t think about. Even postal codes. There’s really an endless sequence of things. All of the data is slightly imperfect, so we use things to correct this and spit out a higher-quality product.

Sounds simple, right? Some other publications might lead you to believe that, but there are a zillion processes that go on behind the scenes, including double and triple checks of every single change that happens before the entire globe can see them.

Make one mistake as a company on a road change, and it could cause serious problems or danger for someone.

If someone is exploring a new major city road in the United States, they have to use all of Google’s tools, including Street View, to get every single detail down just right. I’m talking links to proper signage and everything. Think about it: this data is what shows you and maybe your car where to go, so it’s pretty important that it’s right and thorough.

Here’s a look at what Atlas lets operators do to make sure all of the map data that Google has is updated and correct:

Here’s what they get after that process:

Intersections like this are a pretty tricky thing to get right, according to Weiss-Malik:

This is where Street View is very critical. This is a simple intersection. We examine all of the intersections, determine where you can and can’t turn. Our operators jump into 3D mode, so you can see what you’re editing. Grab a no-U-turn sign as an “Observation,” to leave yourself breadcrumbs as you move around the system.

If there’s something that an operator can’t quite figure out, and this road is a major artery somewhere in California, they can file a ticket to send a Street View truck out again to survey the area. It’s that efficient, and that process-driven.

Connecting the dots

“Mapping the entire world,” as everyone on the team tells me it wants to do, consists of sifting through crazy amounts of data, running through petabytes of photos and satellite images and then, of course, providing a stable service scaled as far as scaling can take you. Oh, and let’s not forget that Google makes all of this available via a public API.

As it stands, the team pushes new imagery every two weeks, 20 petabytes of it, to be exact. As the company now relies less on third-party information and imagery, it doesn’t have to wait a long time to verify and push the new information. Sometimes it used to take up to 18 months to perfect a city in the United States, but now it can be done in hours.

The big piece of connecting the dots for Google is a new focus on indoor mapping. Telling you where to go when you’re at, say, San Francisco International Airport, is an obsession of Google’s engineers. You should never have to wonder where you are, according to the team.

Weiss-Malik shares:

We source floor plans from business centers, malls, etc. We fit them into the map. There are walking paths, just like regular maps, we mark bathrooms and escalators as well. These are “interior driving directions” to get you to your gate at the airport. This experience is available on mobile devices now.

The future of Google Maps

I also got to talk to Brian McClendon, the VP of Google Maps and Earth, and he told me about his experiences at Google since selling Keyhole to the company. Keyhole, of course, turned into what we know now as Google Earth.

I asked McClendon how Google came to make the acquisition:

The Google founders got excited in 2004. They came in and met with us and made the offer to buy it. Their pre-IPO valuation was crazy, we were very nervous about it. Why we went through with it was Google had two things we didn’t have, scale and data. Larry (Page) was willing to write a check to buy satellite imagery for us, on the spot.

During this time, a small team was working on Google Maps. This team included former Google and Facebook employee Bret Taylor. The Earth and Maps teams worked together to normalize Google’s data so that both products could use it. That process now allows the entire company to instantly scale everything that needs even a slice of location information.

What about the future? McClendon was pretty candid with me:

Map data is never perfect; we’re going to try and approach “perfect.” If you’re not close enough, people get frustrated. If you’re really close, people do depend on it more.

For Google, the next level is serendipitous discovery. We should provide you with the best experience that you care about based on location, searches, friends, likes and time of day. That is a near-term must have and it relies on data, coverage and quality.

Apple: Can they win the location battle?

Obviously, Apple made the choice to pull Google Maps from its phone’s operating system for a reason. It’s probably a really sound business reason, slightly pushed along faster than it should have been due to sour business dealings between the two companies.

That’s really unfortunate, because it’s consumers like you and me that will suffer. I have an iPhone 5 on the way. I’ve been using Apple’s Map app for a few months now, and I’m not excited about its future. It feels less than half-baked, even though the company says that it will get better as more people use it.

Google won’t say a word on where it is with its own native iOS 6 Maps app. Not a word. Some folks are saying that it will be available before December, but as I’ve talked to more of my sources, I feel like the company isn’t stumbling over itself to just “get something out there.”

Make no mistake about it: there will be a native Google Maps app for iOS 6, but it won’t be made available until it is ready to blow Apple out of the water. This is Google’s only chance to give people a true Coke-vs.-Pepsi test of what they want to use to navigate the globe.

It’s kind of a big deal.

You never want to say never in life, even though it feels like Apple has a long way to go after its hasty decision to yank Google Maps from its product. Google clearly has this Maps thing down pat, but even Google admits that it’s not “perfect” yet. To get there, it seems like Apple is going to do something it’s done in the past: copy the shit out of other people by hiring them.

As Steve Jobs once said:

Picasso had a saying – ‘good artists copy, great artists steal’ – and we have always been shameless about stealing great ideas.

This time it doesn’t look like Apple’s just trying to steal ideas; it might be trying to steal the processes and skilled workers behind them. Kind of funny how Google is being transparent about those processes though, huh? It’s their way of saying “Come and get us.” As for the other things that I saw and heard during my visit, stay tuned for that.

For now, I’ll just ask you this question:

Can Apple get “perfect” faster than Google?

[Photo credit: Flickr]


Riots Rock Foxconn’s Taiyuan Plant


Richard Lai is reporting that more than 2,000 employees of the Taiyuan Foxconn plant rioted last night, causing damage on the factory campus after a guard allegedly hit a worker at 10pm.

The Taiyuan plant is notorious for its mandatory overtime requirements. Quoth Lai:

An undercover report from August mentioned that the Taiyuan plant processed the back casing of the iPhone 5. It also highlighted the company’s harsh management as well as “practically compulsory” overtime work. We don’t doubt that this riot escalated due to dissatisfaction over working conditions.

Lai found some video from the factory and writers on Weibo are reporting further information about the incident.

UPDATE – Taiyuan factory workers went on strike last March, although this is apparently the first “fight” in the factory. The Weibo postings were pulled down and the images are now gone.

Reuters is reporting that Foxconn’s representative Louis Woo said:

“The fight is over now … we’re still investigating the cause of the fight and the number of people involved.” He said the fight happened in the workers’ dormitory facilities.

via Tieba


AMAs, A2As, And The Growth Of Tech-Enabled Political Discourse


Editor’s note: Jon Bischke is a founder of Entelo and is an advisor to several startups. In the interest of full disclosure, he also serves as a National Co-Chair for Technology for Obama (T4O). You can follow Jon on Twitter here.

It’s election season again and 2012 is likely to be remembered for many things, one of which is the amount of money spent on political advertising. Indeed, this year’s presidential campaign is likely to be the most expensive in history. But amidst the talk of Super PACs and $50,000-a-plate dinners attended by amateur videographers, an interesting and inspiring shift is taking place: the increasing ability of the average citizen to connect directly with candidates through technology.

Of course, tech-enabled political discourse is not new. Television in 1960 brought us the “Great Debates” between Kennedy and Nixon and forever changed how the public perceives politicians. And you can’t forget Howard Dean, the 2004 Democratic presidential candidate whose prospects were greatly helped by the Internet (through his success in grassroots organizing via Meetup.com) only to have it play a role in his ultimate demise (Dean Scream anyone?). And many elected officials have taken to Facebook and Twitter in recent years, often using those services to directly answer questions from their constituents.

Still, 2012 seems different as politicians are communicating via a wider variety of channels and in greater depth than ever before. In January of this year President Obama participated in a Google+ Hangout. Then, just last month, Obama took to Reddit to answer questions directly for Redditors in an AMA (Ask Me Anything), spurring more than 22,000 comments and 2.6 million unique visitors and leading the president to conclude, in allusion to a popular meme, that hosting an AMA was “NOT BAD!”

And it isn’t just the president getting up close and personal with his constituents. Lately Quora has been abuzz with famous politician sightings. Newark Mayor Cory Booker was one of the first to kick things off with a series of well-received answers. More recently newly appointed U.S. CTO Todd Park took to Quora to talk about the things that President Obama had done to encourage innovation and entrepreneurship in the country.

Republicans are showing that technology’s not just for Democrats by getting into the mix as well. This month Vice Presidential candidate Paul Ryan answered a question asking whether the country is better off than it was four years ago. Republicans are often less active on many social media platforms (see recent research from the Pew Internet and American Life Project), so Representative Ryan’s use of this vehicle to convey his message makes the shift all the more notable.

Government departments are recognizing the opportunity, as well. As Tommy Sowers, the newly appointed assistant secretary for Public and Intergovernmental Affairs at the Department of Veterans Affairs, told me recently, “We want to speak to our veterans when, where, and how they want to communicate, which is increasingly on social networks. This isn’t a young veteran phenomenon, but an all veteran phenomenon.”

So why does this matter? Today it might be President Obama and Representative Ryan leading the charge, but this is only the beginning. Imagine if Senate and House races all around the country had candidates hosting AMAs where they could answer the most pressing questions from their states and districts. Or if Quora’s new Ask to Answer (A2A) feature became a common way for people to be able to ask things of their elected officials.

At the same time, it’s possible that these early attempts at communicating directly with constituents are simply marketing stunts. A number of people have criticized efforts like the Obama Administration’s We the People initiative (read Laura Meckler in The Wall Street Journal) and the President’s AMA (see Alexis Madrigal in The Atlantic) as lacking substance. While these criticisms are fair, theories on disruptive innovation suggest that disruption in its early stages is often dismissed as a “toy”.

If this is indeed the early stage of a disruptive shift in politics, it could represent the further democratization of our representative democracy. Do you have an excellent question for a politician and enough clout within your respective community? You get to ask it, regardless of how much money you’ve donated to the candidate or who you’re connected to. Want to bring forth an idea for some new legislation? Maybe those Quora credits will help you get on the right person’s radar.

While this trend may be in its infancy, it’s worth keeping an eye on. Social networking was just getting going in 2004 when George W. Bush won re-election, and mobile computing and smartphones were only starting to take hold in 2008 when Obama took the White House. If we’re just starting to talk about how technology is enabling direct discourse between constituents and their elected officials in 2012, it will be interesting to see what 2016 will look like.


Inside The Brand New MakerBot Retail Store


The handsomest man in the world, Bre Pettis, gives the second-handsomest man in the world, Phil Torrone, a tour of the MakerBot Store in Manhattan. The store is now selling MakerBots, filament, and pre-made items like watches and toys.

The store is at 298 Mulberry Street.

As Bre notes, they built the store to convince people that 3D printers weren’t all science fiction. We visited with the new Replicator, the $2,199 version 2.0, and came away wildly impressed at the fit and finish of the new model. The store, it seems, is just as cool.

As a proud (and jealous) owner of the first Replicator, I’m really glad to see this thing inch closer to what can only be termed a 3D printing singularity. Once we all have these, the network effects and improvement of general 3D printing techniques will change the way we think about physical objects. Until then, I’m going to keep printing me some proud roosters.

photo via LaughingSquid.


The Free-To-Play Storm and the Freecore Gamer


Editor’s Note: Tadhg Kelly is a game designer with 20 years experience. He is the creator of the leading game design blog What Games Are, and consults for many companies on game design and development. You can follow him on Twitter here.

“All of them hope that the storm will pass before their turn comes to be devoured. But I fear — I fear greatly — the storm will not pass” — Winston Churchill

What’s the biggest change that Zynga, ngmoco, Playfish, Wooga and a brace of other social game publishers have managed to effect in the games industry?

It’s not distribution. While many social game publishers have proved expert at finding players, social always had a half-life based on novelty versus irritation. Many social game publishers are ordinary advertising-driven operations today, still using social but seeing low rates of return.

It’s not game design. True, there are a couple of mechanics to do with time-delays and gating which have spread around social games like memes (formally, game designers call them “ludemes”). Yet the ordinary day-to-day gameplay of most big social games is pulled straight from Dungeons and Dragons, Animal Crossing, Harvest Moon, The Sims and casinos.

It’s not aesthetic. Where indie PC games are wildly experimental in theme and tone, social games are usually interchangeable in look and style. A few manage to rise above that (for example: Dragonvale), but few in social are making a horror game, a genuinely quirky game, an unconventional fantasy or anything that isn’t cute, friendly and mainstream. They tend to lack a real culture of their own.

Ain’t none of the above genius. The really big change was proving that the Asian model of monetising games through virtual goods and other free-to-play business models worked just as well in the West. Social games brought pay-as-you-go, pay-to-cheat and pay-to-skip to games, and the consequent explosion in free play has fundamentally changed what many players expect. The question is whether the so-called mainstream games industry can really survive it.

The Oncoming Storm

Prior to 1997 there were essentially two models for selling gameplay. One was to sell tickets and the other was to sell season passes. The ticket model happened in arcades, where a coin got you a couple of minutes’ play (maybe more if you were really good) and the game tested your physical skills. The season pass model was the console and PC game. You bought the machine, bought the game and it was yours to play with for as long as you liked.

In the late 90s the economics of games started to expand. Massive multiplayer games like Ultima Online emerged and introduced the idea that you could subscribe. While online play had existed for a few years prior, these massive games were more persistent, and so naturally focused on roleplaying games and the like. They slowly started to eat into what was considered the traditional PC market, and then console.

At first it was strictly subscriptions, but around the edges (circa 2003 and forward) some people started to talk about micro-transactions. The initial idea of this revolved around selling game levels (going back to the arcades), and in the West was mostly ignored. However word kept coming from the East of games that sold frivolous digital items and upgrades. To Western ears they sounded ridiculous, but then games like Nexon’s Kart Rider popped up and announced numbers like $250 million a year in revenue. Many of us assumed this must have been a typo.

But of course it wasn’t. It was the tip of the iceberg. Massive multiplayer games increasingly experimented with free to play, then casual games like Puzzle Pirates and then Zynga. Whole conferences were formed just to talk about virtual goods, how they worked, what sold best or worst. The language of classifying players as “whales” or “minnows” emerged, and all the while many in the mainstream games industry looked on in bafflement.

You see, they couldn’t understand this: free to play games were (and mostly still are) objectively pretty bad. They tended to be simplistic rather than elegant, blatantly manipulative rather than earning player loyalty. They lacked a really robust game dynamic. And yet players flocked to them, and so did investment money. Free to play called into question many foundational assumptions about the industry, while working on free to play games became a little bit like the perceived loss of status that movie people used to feel about working in television.

The model was also unavailable. If you worked on the Xbox 360, PS3 or Wii there simply was no path to free to play. None of the platforms supported it as standard, and the odd toe-dip into micro-transactions was usually only the sale of extra content in purchased games. Since those sales were often small (with one or two exceptions), it seemed as though this whole free-to-play thing was either a fad or low-rent. Believing that its content was of premium value, the “proper” games industry largely left Zynga and a few others alone. They thought the storm would pass.

Tough Times

If you are lucky enough to work at one of five or so premium studios (Blizzard, Bethesda, Valve, etc.) then you probably don’t see it, but for most of the rest of the industry these are miserable times to be in AAA games. Big-budget game sales are down at least 23 percent on the previous year, and some months have been truly miserable.

Some peg this on the cyclical nature of the industry and point to the aging Xbox and PS3 hardware, the relatively short lifespan of the Wii and the overall damp squib that was Kinect. They say that gamers are probably waiting for the next round of hardware to be announced, and so 2013 will be a bumper year. I’m not so sure.

For one thing, the handheld industry (Nintendo 3DS, PlayStation Vita) has recently had a fairly significant hardware refresh, but reactions were apathetic. Nintendo had to drop the price of the 3DS massively, and Sony’s Vita is essentially dead. It seems that players don’t want to pay premium prices for handheld games in a world where the iPod Touch sells games for 10 percent of the price. Meanwhile the Wii U certainly looks amazing and has many among the hardcore stoked. Yet there’s much rumbling over the price of the unit ($300).

Secondly, I’m not sure the specs argument really works. PC gamers still cling to their rigs, to Steam and to perceiving themselves as having the best machines. Yet they are mostly doing so with mid- or low-range hardware. PC sales have stalled, where traditionally it was gamers who pushed the PC forward. Not any more, it seems. If it transpires that the would-be console purchaser has the same reaction to the PS4 (“I can’t really see the difference”) then that means something.

If technological power is no longer a competitive vector, then price is the primary one. That plays into digital platforms’ hands, especially digital platforms that give their gameplay away. It means that parents are more likely to buy their kids iPads and have them play apps rather than shell out for consoles and TVs (and have the family TV taken over by video gaming). It means that the expectation becomes that the game is free so that the player can know what she’s buying before she buys it. It means that the price of game development itself has to drop.

And that changes the dynamic of the games industry, probably permanently. The storm will not pass.

The “Freecore”

Publishers need long-tail revenues to avoid betting the farm every time they go to market. Activision figured this out a long while ago in merging with Vivendi so that they could get access to that World of Warcraft money. Electronic Arts also saw the light and jumped in feet first by acquiring Playfish. So, too, did Disney with Playdom. Many mid-tier publishers have not.

In mobile and tablet, lots of games are released into the app landscape for prices ranging from $0.99 to $4.99 but then disappear. The ones that remain on top of the Top Grossing charts are usually free to play, like Clash of Clans, Dragonvale and CSR Racing. Many of these are not great games, but what they lack in smarts they make up for by offering themselves for free on the understanding that maybe 1 in 20 players will ever get around to paying anything.
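To see why that trade can work, here’s some rough back-of-envelope math. Only the 1-in-20 conversion figure comes from above; the install counts and spend levels are purely my own assumptions for illustration.

```python
# Rough free-to-play vs. premium math. Only the ~1-in-20 paying
# figure comes from the text above; every other number here is an
# invented assumption for illustration.

installs = 1_000_000          # free games reach a huge funnel
conversion = 1 / 20           # ~5% of players ever pay
avg_spend_per_payer = 20.00   # assumed lifetime spend per payer ($)

f2p_revenue = installs * conversion * avg_spend_per_payer

premium_buyers = 100_000      # assume a paid game reaches 10x fewer people
premium_price = 2.99

paid_revenue = premium_buyers * premium_price

print(f"Free-to-play: ${f2p_revenue:,.0f}")   # $1,000,000
print(f"Premium:      ${paid_revenue:,.0f}")  # $299,000
```

Under those (generous) assumptions the free game out-earns the paid one roughly three times over, which is exactly the logic pulling big game makers toward the model.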

The upshot is that you get a lot of okay games trawling users and finding the few who like the experience enough to buy something. That mechanism is one that an increasing number of big game makers are looking at, and wondering if they can be a part of it. They want to find the hardcore gamers who will not be put off by free to play. They have to. Their livelihoods are at stake.

But does that freecore really exist?

“Core” or “hardcore” describes a type of gamer that has not significantly changed since Doom, one who has a fixed set of values about what games are. Core gamers are vital, often the kind of people who power Kickstarter games, indie successes like Minecraft, and Steam. They are passionate and believe in gameplay innovation as a key value. They also believe in the idea of objective fairness. A core gamer prefers to have ground through a game to get the special sword as a badge of honour rather than to buy it. He considers buying his way through shameful in terms of cultural status and legitimacy.

That core market (which is an umbrella term for a much more complicated landscape) buys a lot of games per capita and is very influential. It serves the function of the evangelist. Many in what I call the muggle market take their cues from whatever the core is excited by, which helps propel a Call of Duty or a Borderlands 2 to success. However the core by itself probably (by my own rough estimate) comprises no more than 20-25 million players worldwide. Whereas the muggle console market is easily 150 million or more.

Free to play is no threat to the core (outside of massive multiplayer games), or to the game developers who focus on it. Mojang (of Minecraft fame) or Telltale Games (of The Walking Dead) can rely on the audience to stay where it is and keep buying games on PC for many years to come. This is great news for Steam, and it’s not too bad for platform holders like Nintendo either. There’s enough room in that end of the market still for retailing to work, at least for first-party (i.e. published by the platform holder) games.

However free to play is a big threat to large parts of the muggle market, which is much more likely to fragment over price and so to be attracted to free to play games. And for the mid-level publishers who are used to using the core as a springboard into the muggle market, that represents a massive problem. They often have no idea how to talk to muggles, so if the core gamer is not interested in their free to play proposition then they have no real way to market those games.

Core gamers seem perfectly happy to watch the rest of the industry burn as long as they get their FTL, Torchlight 2 and so on. Those are the games they want at the price they want to pay, with the culture that they want to see reflected in their games. Free to play is of zero interest to them, and that situation is likely to remain in the long term. In a sense, many of them would prefer if the games industry got smaller, stopped trying to pretend to be Hollywood and got back to its roots. They want to see what’s happening at PAX, not E3.

So AAA publishers who have used the hardcore as a springboard want them to become a freecore that fills the same role. But the hardcore is not interested. Those publishers can try as much as they like to make big splashes in the market with graphics and sounds and all the rest of it, but there’s nobody really there to take that message in.

This leads some to say that companies like Kabam and Kixeye are the future, that they will either find the new hardcore or create it. Maybe. I think core gamers will simply become the sub-culture that they want to be and promote many smaller developers that reflect their values. I think free to play will continue to go from strength to strength, anointing many new developers and publishers as it does so. I think the next generation of consoles will prosper with a combination of first-party big-budget games and also working with those new free-to-play publishers.

But the current mid-level publishers? The ones who can’t sink low enough to make free to play games and can’t control their own destinies? Those companies will be devoured by the storm.


Intel Confirms Medfield x86 Chips Don’t Support LTE Yet — But Says It Won’t Be Long Coming


Intel’s second bite at the smartphone market has been more akin to a gentle nibbling around the edges. At the end of last year the chipmaker teased a smartphone reference design running its Medfield x86 Atom SoC. Nine months later Intel chips have found their way inside six real-world smartphones, yet none apparently destined for the U.S.

The six smartphones are the Lava XOLO X900 (an exclusively Indian device), the Lenovo K800 (targeting China first), Megafon’s Mint (a Russian carrier-branded device), the Orange San Diego (a UK carrier-branded launch), the ZTE Grand X IN (heading to Europe first) and Motorola’s RAZR i (coming to select European and South American markets).

Aside from Intel internals, the RAZR i closely resembles the recently announced Droid RAZR M (the latter is a U.S. device) – which further flags up the U.S.-shaped hole in Intel’s smartphone strategy. What’s going on here?

The likely explanation is there’s no support for LTE in Intel’s current Medfield chips. And with 4G such a dominant force in the U.S. you need to command a brand as massive as Apple to get away with flogging LTE-less phones (the iPhone 5 being Cupertino’s only 4G phone).

The lack of LTE support in Medfield chips was confirmed to TechCrunch by Sumeet Syal, Intel’s Director of Product Marketing (he wouldn’t be drawn on explaining the politics behind Medfield’s current geographical spread). He also confirmed 4G support is in the pipeline, noting that Intel will be “shipping some LTE products later this year and ramping into 2013” – so that particular barrier to U.S. entry may soon be removed.

Multicore chips vs. hyper-threading

Syal said Intel is also readying a dual-core Medfield chip. Its current chip architecture is single-core, although the SoC includes a technique to boost multitasking called hyper-threading which — Intel claims — allows it to out-perform some rival multicore chips.

“Even though it’s a single core it has hyper threading technology so essentially you’re able to do multitasking through a hyper-threaded environment. So that’s how we’re able to demonstrate that a single core from Intel outperforms a lot of the dual-core and quad-cores out there,” said Syal.

“Our next gen product will be a dual-core but again that product will also have hyper threading so essentially… you will also have dual-core with four threads. So again just like we demoed that a single core hyper-threaded can outperform dual-core/quad-core I think we’ll do it again when we introduce the dual-core product with four threads.”
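For the curious, the physical-core vs. hardware-thread distinction Syal is describing is easy to inspect on any machine. Here’s a small sketch using the third-party psutil package; on a hyper-threaded part, the logical count reports double the physical count.

```python
# Physical cores vs. hardware threads (logical CPUs).
# Requires the third-party psutil package: pip install psutil

import psutil

physical = psutil.cpu_count(logical=False)  # real cores
logical = psutil.cpu_count(logical=True)    # hardware threads

print(f"Physical cores:   {physical}")
print(f"Hardware threads: {logical}")
# On a chip like the dual-core, four-thread part Syal describes,
# this would print 2 and 4.
```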

But if hyper threading is as good for performance as Syal says it is, why does Intel need to invest in making multicore chips at all?

“You have to take a look at how many instructions per clock can the architecture handle — our belief is that others are throwing cores at the issue in terms of getting more performance. We make that determination based on our architecture so we felt very comfortable coming out with a single core dual-threaded for our first product, and as we’re able to get more and more performance in the right implementation of the architecture we believe putting in dual-core would be the right thing for our next generation product,” said Syal.

On the question of quad-core, it seems likely Intel sees four cores in Medfield’s future but Syal would not be drawn. “We’re not disclosing any plans yet of quad-cores,” he said.

Android app incompatibility

App compatibility is another area where Intel is having to play catch-up. Despite working closely with Google to optimize its chip architecture for Android, not all Android apps are compatible with Intel’s SoCs — including, in a recently flagged example, Google’s own Chrome for Android browser. This was noticed by Android Central after some hands-on time with a pre-release version of the RAZR i. (Chrome compatibility is due to be fixed in time for the RAZR i’s launch, says Motorola.)

Syal said the “majority” of Android apps are compatible with Medfield chips but refused to specify an exact percentage — although Intel has previously claimed 95 per cent of apps are compatible (which was a correction of a previous Intel statement pegging Android app compatibility at just 70 per cent of apps).

“We’re not quoting any numbers — but the majority of all the apps we’ve tested work just fine,” said Syal.

Syal added that Intel’s internal software and services group has been working “since the launch of our product and constantly round the clock to make sure that all these apps work… so those numbers [of incompatible apps] are changing by the day”.

Asked to sum up Intel’s current performance in the smartphone space, he described the company as “comfortable” with how much progress it’s made this year. “We’ve just gotten into the game, since the beginning of this year, right now we’re really comfortable with how we see our penetration — six products have now been publicly announced into the marketplace. There’s more stuff to come — but we’re not talking specific numbers.”

Intel is currently in a quiet period, ahead of its Q3 earnings report (scheduled for October 16) which may be one reason for keeping its powder dry.


Speed’s Other Needs


Editor’s note: Michael Weinberg is a staff attorney at Public Knowledge, an organization that preserves the openness of the Internet and the public’s access to knowledge; promotes creativity through balanced copyright; and upholds and protects the rights of consumers to use innovative technology lawfully. Michael focuses primarily on copyright, issues before the FCC and emerging technologies like 3D printing. Follow him on Twitter.

FCC Chairman Julius Genachowski wrote last week on TechCrunch about the importance of speed. Specifically, he highlighted the importance of speed in the next wave of Internet innovation. While he is right about the importance of speed, he missed one key point: broadband speed isn’t worth much if it is crippled by data caps.

All of the advances Chairman Genachowski pointed to – in cloud computing, education, health care, energy, and public safety – will rely on fast broadband connections. 100 megabit-per-second networks could transform the way our society and economy function. But speed is not an end in and of itself.

The fastest car in the world won’t get you very far if you only have 20 feet of road, and a blazing-fast 4G LTE network is not worth much if you are limited to 2 GB of data per month. By and large, next-generation Internet technologies need high-speed networks because they need to move a lot of data quickly. Big Data is called Big Data for a reason – there is lots of it.
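The arithmetic here is stark. A quick sketch: the 2 GB cap comes from above, while the sustained 20 Mbps LTE speed is my own illustrative assumption.

```python
# How long a monthly data cap survives at a sustained download speed.
# The 2 GB cap is from the text; 20 Mbps is an assumed LTE speed.

cap_gb = 2
speed_mbps = 20

cap_megabits = cap_gb * 8 * 1000       # GB -> megabits (decimal units)
seconds = cap_megabits / speed_mbps    # time to exhaust the cap

print(f"Cap exhausted in about {seconds / 60:.0f} minutes")
# -> about 13 minutes of fully saturated use
```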

Unfortunately, while the FCC has taken steps to encourage the deployment of broadband networks, it has done little to ensure that people can make use of those networks without running into road blocks in the form of data caps.

Data caps impose real costs on consumers and society as a whole. At their most basic, they create a disincentive to use broadband. Instead of freely exploring new technologies, services, and ideas, users under a data cap are forced to decide whether a site, an app, or a video is really worth some of their valuable data.

Caps also have a tendency to freeze innovation. When they set their caps, most ISPs insist that “normal” users will not run into problems. Even if this were true, those caps have proven extremely slow to change. “Normal” usage patterns today barely include streaming video, let alone the types of next-generation innovation that we all look forward to. Under a data cap, that next-generation immersive and creative software to help children learn is reserved for “data hogs” willing to pay tens, or hundreds, of dollars a month in overage fees. That is not a recipe for widespread adoption of innovation.

We are also starting to see ISPs use data caps to pick winners and losers online. Whether it is Comcast exempting its own online video service from its data cap or AT&T offering to let developers buy their way out of its data cap by paying a fee, data caps become an excuse for ISPs to charge more people more money while shutting out disruptive innovators. Of course, it is also no surprise that cable companies are setting data caps well below what it would take to replace their TV offering with an over-the-top competitor.

Data caps are also beginning to create a second-class Internet for traditionally disenfranchised communities. Low-income communities, rural communities, and communities of color increasingly rely on wireless connections as their only link to the Internet. Single-digit data caps make it unreasonable to expect these connections to be used to access the “real,” data-rich Internet.

It is hard to find a positive benefit of data caps to balance out all of these costs. You do not have to be a network engineer to know that a monthly data cap is an inefficient way to address network congestion – something that happens at a specific time and place on a network. Monthly data caps cannot tell the difference between streaming a high-definition movie on a weekday evening and backing up data overnight during the weekend. There is no evidence that monthly caps shift usage away from congested periods – only that they reduce internet usage generally.

And while there may be some benefit in charging heavier users more, data caps are a bad way to do that. Most customers have no idea how much data a given activity requires. Even AT&T and Verizon disagree on how much data an hour of streaming video will consume. This uncertainty guarantees that consumers end up over-paying and under-using when it comes to broadband. If ISPs are going to charge heavy users more, they should at least use a metric that everyone – especially Chairman Genachowski – understands: speed. You may not know a megabit per second from a gigabyte per month, but when web pages start to load slowly you know that it is time for a faster connection.

However much we might wish things were different, limited competition between ISPs means that we cannot rely on market forces alone to ensure the internet remains an open platform that continues to enable innovation without permission. After all but ignoring the issue in the past, the FCC has just started the process of looking into data caps. For over a year, we have been urging them to ask ISPs basic questions about data caps: What is their purpose? How are they set? Once they are set, how are they evaluated against their purpose? What would cause them to change?

Without answers to those questions, we may end up with a blazing-fast network with everyone stuck in the slow lane.


Iran Announces Plan To Launch Domestic Internet By March 2013 (And To Block Google Today)


It seems that the Iranian government is working to take even tighter control of the country’s already heavily-censored version of the Internet.

The government said that it’s going to launch its own domestic Internet, and that the system will be fully operational by March 2013, according to Reuters and others (who, in turn, seem to be basing their reports on the Iranian media). It’s not clear whether all access to sites outside of Iran will be blocked once the domestic system is live.

Cybersecurity is the official reason for the growing online restrictions (sites like YouTube and Facebook are already blocked), but it’s probably not coincidental that the Internet was also seen as a key tool in 2009’s protests against President Mahmoud Ahmadinejad. (The importance of tools like Facebook and Twitter in those protests has been the subject of some debate.) Iranians “commonly” get around the existing government filters by using VPN software, Reuters says.

Earlier this week, the Washington Post reported that an Iranian domestic Internet system was in the works, giving the government more power to restrict online access during protests or other periods of civil unrest. However, planning such a system and actually making it work are two different things — a retired security director from the National Security Agency told the Post that “any attempt by a country to make an intranet is doomed to failure.”

The Iranian government also announced, via state television, that it will be blocking access to Google and Gmail within “a few hours.” The Iranian Students’ News Agency says this is in response to the anti-Islamic “Innocence of Muslims” video that was posted (and then blocked) on YouTube. I’ve emailed Google for more information and will update if I hear back.


Source: Apple Aggressively Recruiting Ex-Google Maps Staff To Build Out iOS Maps


Apple is going after people with experience working on Google Maps to develop its own product, according to a source with connections on both teams. Using recruiters, Apple is pursuing a strategy of luring away Google Maps employees who helped develop the search giant’s product on contract, and many of those individuals seem eager to accept due in part to the opportunity Apple represents to build new product, instead of just doing “tedious updates” on a largely complete platform.

My source — a contractor who worked on Google Maps as part of a massive undertaking to integrate Street View and newly licensed third-party data to improve European coverage, as well as develop the platform’s turn-by-turn navigation — says that when attention turned to indoor mapping, things started to become less interesting and a lot of staff began looking around for other opportunities. That turned out to be good timing for Cupertino. Here’s what my source describes happening around that time:

Many of my coworkers at Google Maps eventually left when their contracts ended or on their own accord. One guy looked around for other GIS work and ended up at Apple when a recruiter contacted him. He had heard rumors for a while that Apple was going to develop its own in-house mapping platform, and given his experience at Google, he was an easy hire. Apple went out of their way to bring him down to Cupertino and he’s now paid handsomely as a GIS Analyst. Another coworker that was a project lead at Google Maps left for the East Coast after his contract ended, and was recently contacted by an Apple recruiter. The position sounds like a product development manager position, and will pay him $85k+ and all the moving expenses from the East Coast. He’s gone through two rounds of interviews and seems like a frontrunner to land that position.

The interest in ex-Googlers is well-placed, he says, and it does seem like Apple is actively looking for more talent to add to its team, according to recent job listings the company has posted. And while there’s a tough road ahead for Apple playing catch-up in this area, my source believes that the possibility of building a platform that truly competes with Google Maps is well within reach for Apple.

Apple has a lot of catching up to do if it wants to build a robust mapping platform to counter Google Maps, so it doesn’t surprise me that it’s going out of its way to lure former and current Google Maps employees. At Google Maps, we know what data’s important, rendering priorities, keyword searches, and how the user experience is supposed to be. However, Apple needs to find a way to get its own 5 million miles of street view data, partner with the right folks, and spend a fortune on licensed data – which it can.


The Death Of The Non Practicing Entity?


Editor’s note: Leonid (“Lenny”) Kravets is a patent attorney at Panitch, Schwarze, Belisario and Nadel, LLP in Philadelphia, PA. Lenny focuses his practice on patent prosecution and intellectual property transactions in computer-related technology areas. He specializes in developing IP strategy for young technology companies and blogs on this topic at StartupsIP. Follow Lenny on Twitter: @lkravets and @startupsIP.

While perusing the latest patent lawsuit filings on PriorSmart this week, I was drawn to a series of cases filed by a small company called PersonalWeb against RackSpace (possibly for hosting GitHub), Nexsan, Facebook, Apple, Yahoo, Microsoft, and IBM.

The SHIELD Act

RackSpace responded strongly on its blog to being sued by PersonalWeb, taking the opportunity to call for support of the SHIELD Act. The SHIELD Act ostensibly aims to protect high-tech companies from patent-infringement suits from Non Practicing Entities (NPEs) by requiring unsuccessful plaintiffs in hardware and software patent cases to pay for the litigation costs of defendants. Its passage may lead to the end of the Non Practicing Entity business model, and, by extension, allow infringers of patents owned by NPEs to continue practicing patented technologies without fear of litigation.

The SHIELD Act may also have implications for startup companies. Shifting the cost burden of litigation to patent plaintiffs may result in startups having even less ability to protect their legitimate inventions from larger competitors. Which brings me back to PersonalWeb, a company that blurs the line between a traditional Non Practicing Entity and a startup technology company.

Peruse PersonalWeb’s website and it is clear that this is a real company with real offices (in Tyler, Texas, the home of the famous Eastern District of Texas Federal District Court), real employees and at least one product. Did PersonalWeb hire a staff, furnish its offices, and make a product simply to give the appearance of being a real company? Certainly, having a real product would help make the case for seeking lost profits under the Patent Act and would help avoid the possibility of venue transfer. However, it’s possible that they are a small company with interesting products – like many startups. The point is that the line between NPE and operating company is so easily blurred – especially in today’s age of low technology costs – that it is often difficult to tell what the true intentions of a company are.

Why Does It Matter That PersonalWeb Is Not An NPE In Its Strictest Form?

It shows that the traditional Non Practicing Entity model is evolving. Strong public opinion against the traditional NPE business model has led to proposals such as the SHIELD Act, admonitions from federal judges, and the like. At the same time, the cost of starting a software-based business has never been lower. These notions are clearly not lost on PersonalWeb, which can act as a startup with real employees and real products while still attempting to enforce its patents in a friendly court. After all, a jury in the Eastern District of Texas, drawn from a pool of jurors residing in and around Tyler, would likely be kinder to a local 15-person company battling corporate giants.

In the long run, I expect this model of pairing a patent portfolio with a small but operating technology company to become more popular. The arrangement is symbiotic: the entrepreneur may find it easier to raise funding when the technology business is paired with a patent monetization program, and the funding entity can continue monetizing patents under the cover of an operating technology company with real employees and a real business model.

What Does This Mean For The SHIELD Act?

Certainly, there are compelling arguments for and against the Act, and its progress through Congress is worth watching. However, I am not sure that the answer to the Non Practicing Entity problem is to make patent lawsuits financially untenable for small entities, as there are plenty of legitimate examples of small companies attempting to enforce patents against wrongdoers. Similarly, as the line between a traditional Non Practicing Entity and an operating company blurs or disappears, we should not assume that every small company attempting to enforce a patent is a patent troll.


How Do You Make Home Heating and Cooling More Exciting? Add a Touchscreen

Image courtesy of Venstar

Who’d a-thunk thermostats could be sexy? No one, that’s who. And yet when the Nest thermostat started hitting walls earlier this year, homeowners went ga-ga over its Jetsonian design, web-savvy features, and almost sentient learning capabilities.

Hoping to cash in on our newfound love of climate control, Venstar endowed its already impressive ColorTouch T5800 thermostat with Wi-Fi connectivity and app-powered controls. The result is a home HVAC controller that’s not quite as smart or streamlined as the Nest, but still very cool and capable.

For starters, it’s buttonless. A 4.3-inch color touchscreen handles everything from setting your preferred temperature to creating a schedule to viewing a custom slideshow. Yep, say hello to the world’s first thermostat that doubles as a photo frame.


Before you can start packing it with pictures of Fido, however, you’ll have to install it. That’s theoretically a 10-minute job, provided you’re handy with a screwdriver and can manage some basic wiring. However, if there’s no power lead running from your furnace, you may need to call in a pro.

You may also need a firmware update to get the T5800 to recognize the Skyport Wi-Fi Key, which plugs into a side SD slot and sticks out like a sore thumb. Not that the rectangular ColorTouch was all that sexy to begin with, but the key totally kills any aesthetic it had going.

Ironically, you need to transfer some files via SD card to install that first firmware update, though once you get the ColorTouch connected to your Wi-Fi network, future updates can be downloaded directly. Photos, however, cannot: You have to copy them over via SD card.

That’s not only a hassle, it’s a disappointment: Why couldn’t Venstar add a “Send Photos” option to its web-based control panel or Skyport app? They’re otherwise quite capable, letting you adjust temperature settings from afar, monitor heating and cooling runtimes, turn various modes on or off, and even send a text message to the thermostat.

All these functions work quickly, easily, and awesomely: There’s nothing quite like nudging the AC down a couple degrees without getting out of bed, or making a cold house nice and toasty just before you return home from vacation.

Image courtesy of Venstar

More cleverness: The app, web panel, and thermostat will show you the outside temperature and forecast so you can plan your indoor settings accordingly. You can set up a passcode to lock out kids or visitors, schedule temp adjustments for morning, daytime, bedtime, and no-one’s-home time, and pore over runtime graphs to see just how much heating and cooling is happening.
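The four-period scheduling works roughly like the sketch below. The period names, times, and setpoints are invented for illustration; this is not Venstar’s actual configuration format or API.

```python
# Hypothetical sketch of a four-period thermostat schedule lookup.
from datetime import time

SCHEDULE = [
    # (period start, heat-to setpoint in degrees F) -- illustrative values
    (time(6, 0), 70),   # morning
    (time(9, 0), 64),   # daytime / no-one's-home
    (time(17, 0), 70),  # evening
    (time(22, 0), 62),  # bedtime
]

def setpoint_at(now):
    """Return the active heating setpoint for a given time of day."""
    active = SCHEDULE[-1][1]  # before 6:00 we're still in last night's period
    for start, temp in SCHEDULE:
        if now >= start:
            active = temp
    return active

print(setpoint_at(time(7, 30)))  # 70 -- morning warm-up
print(setpoint_at(time(13, 0)))  # 64 -- house is empty
```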

Those are some admirably smart features, and yet if you’re looking for serious thermostat intelligence, the Nest wins the day. The ColorTouch can’t do things like sense your presence in a room, monitor your habits, or learn how quickly your furnace heats or cools the house. Nor can it determine when you’re away and adjust the climate accordingly.

What’s more, although Venstar doesn’t specify a list price for the T5800 and optional Skyport Wi-Fi Key, they sell for around $225 online — just $25 less than the Nest. As much as there is to like about the ColorTouch, it feels less like a wildly advanced climate-control system and more like a kludge that’s straining to keep up with the times.

WIRED Replaces your boring analog thermostat with a big, colorful touchscreen. Companion app lets you tweak the temperature from anywhere — and send text messages. Doubles as a small but attention-getting photo frame.

TIRED Wi-Fi dongle costs extra and protrudes awkwardly from the side. Can’t receive photos from your phone or the web. Not nearly as smart as the Nest, but nearly as expensive.

Still Sleek and Fancy, But at a Nicer Price

Photo by Ariel Zambelich/Wired

Two years ago, Bowers & Wilkins debuted its P5 headphones, some of the best-sounding compact headphones we’d ever heard.

B&W’s design for the P5s coupled a minimalist and luxurious aesthetic — complete with supple memory-foam ear pads swathed in black New Zealand sheep leather — with truly stunning audio performance. They earned a very high rating from Wired (and from most other reviewers) even though they were priced at $300.

That’s pretty steep for a pair of on-the-ear headphones. B&W’s many fans are used to paying a premium for the company’s high-end audio products; consumers less familiar with the relatively obscure British manufacturer, not so much.


Now B&W has come out with a more affordable option, the P3 headphones, which sell for only $200. They look a lot like the P5s, though the materials aren’t as swank. The newer P3s are smaller, however, and they fold up, making them even more portable. Although not as opulent as the P5s, the P3s still sound excellent, and they deliver enough pleasure — auditory and otherwise — to justify the $200 price tag.

B&W made some very smart design decisions on the P5 headphones, and many of those choices are repeated here. For one, there’s the spindly metal skeleton that forms the headband — it’s sleek, provides good clamping force, and is very light. Also, the earpads are held in place by magnets, and you can pop the pads off with a tug, exposing the speaker and the small jack above it where the cable attaches. This layout lets you replace the two parts of the headphone that wear out most quickly: the cables and the earpads. The P3s ship with a spare cable, and replacement earpads are available online for about $30 each.

Some corners have been cut. Gone is the lavish sheepskin leather that covered the headband and the earpads. Instead, a more traditional cloth is used. It feels rough to the fingers, but it’s actually quite comfortable on the ears (and less sweaty). Also, there’s more plastic in the build. The headphones don’t feel cheap, however, and there’s the nice addition of a hinge just above each ear, which allows the pads to fold inward. The P3s ship with a hard plastic clamshell case.

Photo by Ariel Zambelich/Wired

The emphasis here is clearly on portability, so during two months of testing, I listened to the P3s exclusively using mobile devices: an iPhone 4, a Nexus 7 tablet, and a MacBook. They aren’t particularly thirsty headphones, so they got plenty loud and showed an impressive level of detail without the use of an amp or software boosting. You should be able to just plug these into your phone and start listening. I would recommend going into your device’s EQ settings and cranking up the treble a bit, however. The P3s tend to be a little dark, meaning there isn’t as much of that crisp, high-end detail you’d expect if you’re used to a brighter headphone like the P5 or a really nice pair of in-ear monitors.
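For the curious, that treble tweak is essentially a high-shelf filter. Here’s a minimal sketch using the standard “audio EQ cookbook” high-shelf biquad; the corner frequency and gain are illustrative, not values tuned for the P3s.

```python
# Sketch: boost everything above ~6 kHz by a few dB with a high-shelf biquad
# (RBJ audio EQ cookbook formulas). Parameter values are illustrative.
import numpy as np
from scipy.signal import lfilter

def high_shelf(x, fs, f0=6000.0, gain_db=4.0):
    """Apply a gain_db high-shelf boost above f0 to the signal x."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / 2 * np.sqrt(2)  # shelf slope S = 1
    c = np.cos(w0)
    b0 = A * ((A + 1) + (A - 1) * c + 2 * np.sqrt(A) * alpha)
    b1 = -2 * A * ((A - 1) + (A + 1) * c)
    b2 = A * ((A + 1) + (A - 1) * c - 2 * np.sqrt(A) * alpha)
    a0 = (A + 1) - (A - 1) * c + 2 * np.sqrt(A) * alpha
    a1 = 2 * ((A - 1) - (A + 1) * c)
    a2 = (A + 1) - (A - 1) * c - 2 * np.sqrt(A) * alpha
    return lfilter([b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0], x)

fs = 44100
audio = np.random.randn(fs)        # stand-in for one second of real samples
brighter = high_shelf(audio, fs)   # same signal with the highs lifted
```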

The P3s don’t offer much in the way of sound isolation. They get loud enough to compete with moderate street noise, and they do so without any noticeable distortion. But since these are on-the-ear headphones, they can’t block out all the revving and honking and jabbering going on around you. The memory foam pads conform to the shape of your ear, creating an adequate seal. But that immersive, all-encompassing audio experience where the world just shuts off, leaving you and Chuck Mangione alone together to chase the clouds away, is still out of reach.

Photo by Ariel Zambelich/Wired

One last quibble: the cabling. It’s the weak point on many headphones made by companies that don’t specialize in mobile products, and it’s the same story here. The remote — which advances tracks, controls volume, and contains a microphone for phone calls — feels cheap. More troubling, the 1.9-millimeter-thick cable is gummy and doesn’t coil well. That’s a problem when it comes time to stow the headphones inside the case. I suspect many people will simply wrap the cable around the headband, which puts unwelcome wear on the cable. Good thing it ships with a spare.

If the $300 P5s seemed too expensive, give the $200 P3s a shot. They don’t have the same super-luxe fit and finish, but they produce very good sound and the innovative, compact design is key for on-the-go listening. Besides, a Jaguar gets you there just as fast as a Bentley, yes?

WIRED Great sound quality, as expected from the Brit audio powerhouse. 30-millimeter mylar drivers handle mids and lows with gusto. Cables snake under the earpads, protecting the connection jack. Earpads and cables are user-replaceable. Folding design makes them great for travel. Available in black or white.

TIRED Light on high-end detail — you’ll have to fiddle with your EQ. Ambient noise isolation is lacking. Stowing them away inside the plastic case takes patience and practice. Cheap, thin cable.

Photo by Ariel Zambelich/Wired