An awful lot has been written recently about whether Apple has lost its spark. “Does Apple have an innovation problem?” asks the Washington Post. Forbes claims to lay out “Apple’s innovation problem”, although that piece is so muddled and lacking in specific detail that I came away more confused than illuminated.
“Apple hasn’t created an innovative product in years”, claims inc.com. “Has Apple’s innovation engine stalled?” asks USA Today. Fox News tells us “Why Apple is ailing.” The Telegraph reports that “three in four investors [say Apple is] losing [its] innovative edge.” There are hundreds, if not thousands, of posts like this, and many of them come from the mainstream media — so it’s possible that this is becoming, or is already, the view of the man in the street.
It seems Apple has been stung by some of this criticism; Tim Cook took the time to reassure investors that “we’re unrivalled in innovation,” as reported by ZDNet. Phil Schiller slammed Android in an interview with the WSJ just hours before Samsung launched the Galaxy S4. And the “Why iPhone?” page added to apple.com has a tinge of defensiveness to it, at least to my eyes. Cook isn’t the only one who thinks Apple still has it, mind; Apple was named “most innovative company” in a wide-ranging poll late last year, for example.
John Gruber wrote about how strong narratives can displace the facts. I think this is particularly true in tech reporting, which (let’s be honest) isn’t all that dramatic a lot of the time. As the sublime @NextTechBlog put it: “REVIEW: New Telephone Is A Black Rectangle That Provides Phone Calls, Text Messages, The Internet, And Other Applications, Plus A Camera” and “I’m Replacing My Old, Black Rectangle With This Brand New, Black Rectangle Because This One Is New”. That’s a pretty neat meta-story for almost every smartphone launch ever.
People like you and me read tech blogs; we like to obsess over the details. Most people don’t care that much, though. To hook them in, the mainstream media needs a little drama, and if it doesn’t have much to work with, well, it has to sex up whatever it can lay its hands on. Hence, Gruber suggests, the virulence of the “Samsung steals Apple’s crown” meme.
I think there’s a related meme afoot, though, and it comes in two parts. Firstly, the idea that Apple under Jobs was an innovation powerhouse, constantly turning markets upside down or creating them from whole cloth with unexpected new gadgets. And secondly, that those days ended with Jobs’s passing, and Apple’s innovating days are over.
I think this is pretty risible, but to explain why I’m going to have to dig a bit deeper into what innovation is, exactly. For Apple’s critics, such as those writing the articles I linked to above, “innovation” seems to be defined mostly as “entering or creating new markets”, and Apple’s innovation showreel is the iPod, the iPhone, and the iPad. Consider the Fox News piece, which seems to me pretty typical:
Since October the price of Apple shares has fallen from $700 to about $425. No one should be surprised — the company has been misstepping for a long time.
Without the genius of Steve Jobs for neat, wholly-new products, it is going to take tougher management, and a change in the company’s core business strategy to match its past record of profitability.
Apple’s remarkable success was premised on being first and better with a succession of new products, dating from the earliest computers to smartphones and tablets. It was greatly aided by a superior operating system, which provided a more elegant and user-friendly experience than rival Microsoft offerings, and the fact that Apple both wrote the software and designed its products.
This thinking leads people to ponder what fields Apple could enter next, which in turn leads to calls for Apple to prove its innovation credentials by releasing a smartwatch or a television, to name but two of the Rumours That Will Not Die.
However, I strongly believe this view of ‘innovation’ is reductionist — concentrating on innovation at the whole-product level glosses over too many details. If we’re seriously going to examine whether Apple has become less innovative, we need to be clearer about exactly what we’re discussing.
Defining innovation
Let’s start by considering what we mean by innovation in the first place. The concept of innovation is a bit like art: everyone knows it when they see it, but ask five people to define it precisely and you’ll get a dozen different answers. The Merriam-Webster Dictionary defines innovation as “the introduction of something new; or a new idea, method, or device” and defines innovate as “to introduce as or as if new”.
Merely defining it as “making changes”, however, is rather shallow and overly broad. When Apple released speed-bumped MacBook Pros in February, for example, it had certainly changed something old into something new; but few would put that in the same class as the release of the iPad mini. It seems to me that if we’re to debate the merits of innovations, we’re going to need a framework for weighing up the qualities and quantities of very different kinds of change.
When I first started drafting this post, the Wikipedia page quoted a set of multi-faceted definitions I liked; they’ve since been removed by some capricious editor, so I’ll summarise them here instead:
- Innovation as novelty: Most people would agree that for something to be innovative it has to be new in some way, either new in and of itself or an old idea applied in a new way or in a new context.
- Innovation as change: The most potent innovations provoke changes, perhaps opening new doors for the user. In the best cases, they might change whole industries, creating new product sectors or new ways of thinking that entirely replace the old. Or to put it another way: these are the changes that a company will be remembered for in fifty years.
- Innovation as advantage: Assuming anyone actually wants the innovation, it seems reasonable to conclude that it’ll convince people to buy the innovating product. Hence the company will sell more stuff than it would have done otherwise.
The most significant innovations, I claim, will be those that score highly on all three of these fronts.
Bubbling under: candidates that didn’t make the cut
There were a number of things I considered for inclusion in this post but ruled out for various reasons.
I dismissed the iPhone, iPod, and so forth because I believe it’s more interesting to say “no whole products.” To say “the iPhone is innovative” is, to my mind, reductionist and frankly not that interesting. I want to dig into which specific bits of it are innovative, and why. So I ruled out entire products and instead chose to focus more closely on the individual features of products.
I ruled out the graphical user interface, something which certainly caused industry change, and in whose history Apple certainly played a crucial role. As with entire products, I think it’s perhaps a little sweeping to count “GUIs” as one innovation; it would be more interesting to dig deeper into the individual elements. However, I must confess that most of the real cutting-edge early stuff predates me; my involvement in computing only goes back to the mid ’80s, and I don’t want to overreach by claiming I’m familiar enough with that era to be a good judge of what was “most innovative”. If your memory is longer than mine, I’d love to hear your thoughts in the comments on Apple’s biggest innovations of that period. I’m going to confine the scope of this article to the last fifteen years or so.
I’ve also ruled out iOS itself (or, as we called it when it first arrived, “iPhone OS”). Like Harry McCracken, I think the first iPhone owes a significant debt to Palm OS: the full-screen apps and the app launcher comprised of a regular grid of icons are both very similar concepts, and notably different to how Apple designed the Newton. To my mind, the greatest innovation iOS offered was how it brought a large number of features together and made them work in a brilliantly accessible way; but that accomplishment, significant as it was, is eclipsed by the things I list below.
So here’s what I did come up with, after some hard thought and bouncing ideas around in the TUAW newsroom.
Third place: Retina/HiDPI displays
Apple introduced the “Retina display” with the iPhone 4 in June 2010, and has since rolled it out across various iPhones, iPads, and MacBooks. Apple defines “retina” as a screen whose pixels are too small to be individually perceptible at typical usage distance (a claim that stands up to scientific scrutiny), and these screens were immediately very popular, offering a degree of visual fidelity that few had seen before.
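If you want to put a number on that definition, the arithmetic is simple. Here’s a back-of-envelope sketch in Python, leaning on the standard rule of thumb that 20/20 vision resolves about one arcminute of angle; the viewing distances are my illustrative assumptions, not anything Apple has published.

```python
import math

def min_retina_ppi(viewing_distance_in, acuity_arcmin=1.0):
    """Pixel density above which a 20/20 eye, which resolves roughly one
    arcminute of angle, can no longer pick out individual pixels."""
    theta = math.radians(acuity_arcmin / 60.0)           # one arcminute, in radians
    pixel_pitch_in = 2 * viewing_distance_in * math.tan(theta / 2)
    return 1.0 / pixel_pitch_in

print(round(min_retina_ppi(12)))   # ~286 ppi at a 12" phone distance;
                                   # the iPhone 4's 326 ppi clears it
print(round(min_retina_ppi(20)))   # ~172 ppi at a 20" laptop distance;
                                   # why laptop "Retina" gets away with 220 ppi
```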
Now it must be noted that this was not the first ultra-high-density display in the world. I remember salivating over the IBM T220, a 22.2″ monitor from 2001 with a breathtaking 3840×2400 screen and a $22,000 price tag. At just over 200 pixels per inch, it was a true “retina” display at a viewing distance of 17″, with a pixel density only slightly below today’s MacBook Pro with Retina display. The T220’s resolution even tops the now-cutting-edge 4K format. The sheer data rate needed to keep this monstrous screen fed meant it required three DVI cables to reach even a remotely sensible refresh rate of 41 Hz. It was sold to a handful of customers, mostly for use in medical imaging, physics labs, and other specialised applications. Still, this behemoth is (clearly!) in quite a different category to a smartphone retailing for under $1000.
The Retina display’s innovation was not just skin deep, either. Quadrupling the number of pixels on the display means you also need four times the graphics memory and four times the bandwidth just to maintain performance parity; then you need a correspondingly more powerful graphics chip, and you have to do all that without compromising battery life or weight, and without pushing the price beyond what you can reasonably charge. This is why some of Apple’s devices, like the space-compromised iPad mini, don’t yet have Retina displays.
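To make that scaling concrete, here’s a rough sketch of the arithmetic. The 32-bit colour assumption and the reuse of the T220’s 41 Hz figure from above are for illustration, not published specs.

```python
# Rough arithmetic behind "four times the pixels": framebuffer sizes, plus the
# raw data rate of refreshing a panel (which is why the T220 needed three cables).

def framebuffer_mb(width, height, bytes_per_pixel=4):
    """Memory for one full frame, in megabytes (assuming 32-bit colour)."""
    return width * height * bytes_per_pixel / 1e6

def refresh_gbps(width, height, hz, bits_per_pixel=24):
    """Raw pixel data rate needed to refresh the panel, in gigabits per second."""
    return width * height * hz * bits_per_pixel / 1e9

print(framebuffer_mb(480, 320))        # iPhone 3GS: ~0.6 MB per frame
print(framebuffer_mb(960, 640))        # iPhone 4:   ~2.5 MB -- exactly 4x
print(refresh_gbps(3840, 2400, 41))    # IBM T220:   ~9.1 Gbit/s of pixels, vs
                                       # ~4 Gbit/s for one single-link DVI cable
```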
Apple was the first to climb this technological mountain — but far from the last. Since the iPhone 4’s release in 2010, no high-end smartphone has dared to arrive without a similarly pixelicious screen. And as Apple has spread Retina-quality (or HiDPI) screens beyond smartphones and into tablets and laptops, other manufacturers have followed, with devices like the Chromebook Pixel arriving with rMBP-class screens.
So, to sum up: novel? Certainly in terms of consumer-level devices. Change? A big fat check. Advantage? Difficult to gauge — sales of Retina-equipped devices are high, for sure, but then the iPhone and iPad were already wildly successful before Retina arrived. I think it’s hard to imagine the Retina displays didn’t help, however.
Second place: Capacitive multitouch
I think the iPhone was a good deal less innovative than many people believe. You might have seen this snarky image by Josh Helfferich doing the rounds on forums and Twitter, purporting to show how the iPhone changed the phone market. The inconvenient truth it glosses over is that the iPhone’s basic design — a black touchscreen slab — was far from unheard of at the time. To name just one example, consider the HTC TyTN, which was the smartphone I had before my first iPhone, and predates the latter by six months.
But there was one piece missing, one thing no-one else had, and it was key to massively increasing the appeal of this design to consumers. The clue is in the two elements of that HTC that are radically different from the iPhone: it has a stylus, and it has a physical keyboard. It needed both because its screen didn’t work well with a fingertip: the TyTN’s resistive touchscreen registered pressure at a single point, and really needed the precision of the device’s stylus to hit its tiny interface targets. To my mind, the capacitive multitouch screen was by far the most innovative feature Apple brought to market with the first iPhone, enabling an intuitive UX built around touch, swipe, and natural gestures such as pinch-to-zoom.
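Pinch-to-zoom makes a neat illustration of why multitouch, specifically, was the breakthrough. The maths of the gesture is trivial; the hard part is hardware that can report two independent finger positions at once, which a single-point resistive digitiser simply can’t do. A minimal sketch, with made-up coordinates:

```python
import math

def pinch_scale(start_touches, current_touches):
    """Zoom factor for a pinch: the ratio of the distance between the two
    fingers now to their distance when the gesture began. This needs two
    simultaneous, independent touch points -- exactly what a single-point
    resistive digitiser cannot report."""
    def dist(pair):
        (ax, ay), (bx, by) = pair
        return math.hypot(ax - bx, ay - by)
    return dist(current_touches) / dist(start_touches)

# Two fingers start 100 px apart and spread to 250 px apart: zoom in 2.5x.
print(pinch_scale([(100, 200), (200, 200)], [(50, 200), (300, 200)]))
```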
There were compromises, though. Fingers splodge over a much larger screen area than a tiny stylus tip, so on-screen buttons had to get bigger to compensate. That meant screen size had to increase too, by quite a lot. iPhone early adopters will probably remember friends asking how we carried phones that were “so damned big”, a puzzling attitude in today’s world of 5.5-inch smartphones — but it made sense at a time when the fashion had long been for ever-smaller phones.
(An aside. A common meme in the Appleverse is that the original iPhone’s 3.5″ screen size was some sort of platonic ideal for one-handed use, as proposed by Dustin Curtis. I think this is bunk, if only because it only works for people with fairly large hands and quite flexible thumb joints, which can only be a small proportion of Apple’s desired target audience for the device. I think it’s much more likely that Apple arrived at the screen size as follows: (1) work out the minimum width that can hold a QWERTY keyboard with keys still wide enough to be typeable; (2) multiply the width by 1.5, the desired screen aspect ratio, to get the height; (3) there is no step three. Look at an iPhone keyboard some time — it’s hard to imagine typing on it if those keys were even just a few pixels narrower. Just a personal theory. Any Apple engineers reading this are quite welcome to let me know off-the-record if I’m correct.)
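For what it’s worth, the numbers do line up with this theory. Here’s the arithmetic using the original iPhone’s actual 320×480, 163 ppi panel; the minimum key width is my own illustrative guess, not a figure from Apple:

```python
# Putting numbers to the keyboard-first theory, using the original iPhone's
# actual 320x480 panel at 163 ppi. The minimum tappable key width below is
# an illustrative assumption; Apple has published nothing of the sort.

PPI = 163
TOP_ROW_KEYS = 10        # q w e r t y u i o p
MIN_KEY_WIDTH_PX = 32    # assumed narrowest comfortably tappable key

width_px = TOP_ROW_KEYS * MIN_KEY_WIDTH_PX     # step 1: 320 px
height_px = int(width_px * 1.5)                # step 2: 480 px (3:2 aspect)

print(width_px, height_px)                     # 320 480 -- the iPhone's resolution
print(round(MIN_KEY_WIDTH_PX / PPI * 25.4, 1)) # ~5.0 mm per key: little room to shrink
```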
Novel? I’ve never encountered any prior device that used capacitive touch, so if anything did exist I’m pretty sure it was very obscure. Change? This is where Helfferich’s picture does have a point — although all-screen smartphones were not unheard of before the iPhone, they were rare, and now there are very few models that aren’t cut from that cloth. So yes. Advantage? Arguably this was the iPhone’s biggest unique selling point, and Apple has sold nearly half a billion of them now, plus the iPad. I think that’s a yes too.
First place: Microtransactions
Now for the big one.
For decades, e-commerce experts had been crying out for some feasible way to charge consumers small amounts ($1, $2 and such) without being eaten alive by credit card fees and transaction costs in the process. What new forms of commerce could be enabled, they wondered, if this were achievable? We could unbundle albums and sell consumers individual songs. We could sell them individual TV show episodes instead of box sets. We could unlock all sorts of interesting economic models that simply cannot exist without microtransactions.
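To see just how badly the fees bite, here’s a rough sketch. The 30¢-plus-2.9% schedule is a typical card-network rate I’ve plugged in for illustration, not Apple’s actual terms; the batching at the end reflects the approach iTunes is widely reported to use, aggregating several small purchases into a single card charge.

```python
# Why per-item card charges kill the 99-cent sale. The fee schedule here is
# a typical card-network rate used for illustration -- not Apple's real terms.

FIXED_FEE = 0.30      # flat fee per card transaction, in dollars (assumed)
PERCENT_FEE = 0.029   # percentage fee per transaction (assumed)

def fee_share(item_price, items_per_charge=1):
    """Fraction of revenue lost to fees when `items_per_charge` purchases
    are batched into a single card charge."""
    total = item_price * items_per_charge
    fees = FIXED_FEE + PERCENT_FEE * total
    return fees / total

print(f"{fee_share(0.99):.0%}")      # ~33% of a lone 99-cent sale vanishes in fees
print(f"{fee_share(0.99, 10):.0%}")  # ~6% once ten purchases share one charge
```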
Then Apple quietly built exactly that for music, turning that industry on its head in the process, and then changed everything again by rolling it out for apps.
Think of the impact that this has had. Without microtransactions, the App Store would be far less vibrant; with no middle ground between free and (say) $10, there would be orders of magnitude less developer interest. That bracket between free and what apps used to cost before the App Store is where almost all of the interesting stuff happens. And that’s before we talk about the revolution in the music industry, which is now shifting to an almost entirely digital model powered by microtransactions, or about the other digital content distribution channels undergoing the same seismic shift.
Novel? I think so — I cannot find any substantial adoption of microtransaction commerce before iTunes, with the arguable exception of e-cash systems which skirt the issue of card fees by loading a smart card with some sort of alternate currency. Not really the same thing, in my opinion.
Change? Without a doubt. Microtransactions enabled the app market, which everyone has copied, and dramatically changed how we can buy other kinds of digital content. Advantage? Content lock-in to the vibrant App Store ecosystem is probably Apple’s greatest asset in terms of encouraging customer loyalty at phone contract re-up time. I’d say for sure this is a compelling advantage.
So why does Apple bore people now?
Wall Street seems to define Apple’s innovation according to a simple narrative: Apple enters an existing product category (portable music players, smartphones, tablet PCs), turns it upside down, redefines it, and a few years later, ends up owning it. So all Wall Street wants to see is Apple doing that again and again, to new categories: televisions, smart watches, who knows what else.
But when we examine Apple’s track record in more granular terms, I think we come to the conclusion that genuine, feature-level innovation is very hard and consequently very rare. I don’t think there’s any evidence at all that Apple has become less innovative. Sure, Apple hasn’t produced anything breathtakingly new for a little while now, but looking back over the last fifteen or so years, it’s always a few years between the real big-hitting innovations anyway. So something’s probably on its way — many of you said as much in our recent TUAW poll.
But! These are only my opinions, and this is a highly subjective topic. Perhaps you disagree entirely with how I’ve defined innovation, or perhaps you agree with my framework but think I’m an idiot for overlooking Feature X. Comments are open. Have at it!