Hear our thoughts

We are an awesome collective of designers, engineers, and ideators with truly world-class skills at building ideas in the mobile space. This blog captures everything that makes us Mokriyans tick! So read on to hear our thoughts on everything under the sun.

Why Foursquare will kill Yelp, and then Google will kill them both

Posted by: Adam Chavez, Categories: Blog

He who has the most data, wins.

When I bring up Foursquare with folks in tech, the most common reaction I get: “Yeah, there’s some new app they did. Swarm, right? <yawn>”

You can’t blame them. For years, Foursquare was that one guy’s hobby.

“Mayor of Trader Joe’s, eh? I hope that works out well for you!”

The only positive conversation I had about the company was about 2 years ago, with Derek Kohn, a friend and former colleague. He saw data-driven apps built on top of it, and possibilities. I listened skeptically as he painted his vision for where the company might head.

Fast-forward to about 4 weeks ago, and I’m starting to come around.

Real-Time Data Beats the Contrived “Write a Review” Paradigm

Yelp relies on an awkward paradigm where people create reviews. There’s a lot wrong with this model:

  1. Selection bias. The very fact that you’ve written a review means you don’t represent the overall population of restaurant-goers. With Yelp, rather than getting a random sample of good data, all I’m getting is the opinions of passionate people [1] (or worse! see point 3). Just ask Eric Cantor.
  2. The data gets stale quickly. Ever gone into a restaurant with all 5-star reviews, only to find out that they changed owners 2 months ago and the last 5 years of data are faulty?
  3. It is easy to game. Two words: paid reviews.

Foursquare is different.

Think past check-ins.

Try this little experiment.

  1. Install Foursquare.
  2. Go to a restaurant in your area.
  3. Then… wait for it… push notification in 3… 2… 1… there it is!

This Foursquare notification might ask you if you want to check in, or it might give you a tip: “Try the carne asada. To die for!”

Think about what that means. Foursquare is always aware of where you are, 7 days a week, 24 hours a day. Robert Scoble saw the future a few years back. Mokriya had helped build out some of the technology for Alohar Mobile, which was doing something very similar.

Scoble’s reaction to Alohar’s 24/7 data collection:

If somebody returns to a restaurant multiple times, that’s a much better signal than somebody who just writes a 5-star review. Who knows why they wrote that review. Maybe they were guilted into it, maybe they were incentivized (free food, anybody?). Maybe they were (gasp!) paid for it.

Yelp says that they have some magic algorithm that can tell when a review is written by a real reviewer. (<cough> Bull <cough> sh*t.) How many times have you spent the time to write a real, heartfelt review on Yelp, only to find that the algorithm thinks you’re a spammer?

The bottom line: Foursquare recommendations are BETTER.

I’m a long-time user of Yelp, and I remember the glory days of finding great hidden gems, hole-in-the-wall paradises with the most delicious food you could imagine. No longer. The last dozen times I’ve tried to find a decent restaurant using Yelp, I’ve gotten back a terrible recommendation.

I don’t remember the last time I used Yelp to find a good place to eat.

I’ve been testing Foursquare for the last 6 weeks or so, and every single recommendation it has given has been AMAZING. [2]

Foursquare will win because their data produces better recommendations, which are organically created, updated in real-time, and based on actual behavior.
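To make the argument concrete, here is a minimal, purely illustrative sketch of why repeat visits beat one-off star ratings as a quality signal. The function, its names, and the weights are all hypothetical, not Foursquare’s actual model:

```python
# Illustrative only: a toy venue-scoring function that weights the
# behavioral signal (share of repeat visitors) above the explicit one
# (average star rating). All names and weights here are hypothetical.

def venue_score(repeat_visitors, total_visitors, avg_rating):
    """Blend repeat-visit loyalty with a normalized 1-5 star rating."""
    if total_visitors == 0:
        return 0.0
    loyalty = repeat_visitors / total_visitors   # 0.0 .. 1.0
    rating = (avg_rating - 1) / 4                # normalize 1-5 to 0-1
    # Weight actual behavior more heavily than stated opinion.
    return 0.7 * loyalty + 0.3 * rating

# A venue people keep returning to beats one with perfect reviews
# but almost no repeat customers.
loved = venue_score(repeat_visitors=80, total_visitors=100, avg_rating=4.2)
hyped = venue_score(repeat_visitors=5, total_visitors=100, avg_rating=5.0)
assert loved > hyped
```

The point of the sketch is simply that a behavioral signal is organic, continuously updated, and hard to fake, whereas the rating term can be gamed by paid reviews.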

Some say, “but nobody checks in” and so there couldn’t possibly be enough data there to give much useful information (esp. outside of technophile-rich SF). I say: they’re drastically underestimating how much data you can get from 50 million people giving up free data all day, every day. (Source: About Foursquare)

Others think that this system can be gamed, and that this incredible Foursquare product experience is short-lived. To them I say: you’re right that Foursquare’s rise may be short-lived, but for the wrong reason. Foursquare will lose in an epic battle for control of local commerce, a massive market. Folks in the US spend $3.2 TRILLION per year on local commerce.
(Source: Why Local Commerce Will Be Larger Than E-Commerce For The Next Decade, An Analysis | TechCrunch)

Foursquare’s real problem: Google and Apple are both gunning for this same market, and they have gobs more data! Neither one wants to be left out of that multi-TRILLION-dollar pie.

He who has the data, wins the war.

In the US, there are roughly 66 million active iPhone subscribers and 82 million active Android subscribers as of Jan 2014. (Source: comScore Reports January 2014 U.S. Smartphone Subscriber Market Share)

Compare that to 22.5 million US-based Foursquare users. [3]

Personally, my money is on Google to actually pull this off in the long term. Google Now already towers above Siri in terms of functionality, completeness of data, and overall usability. Google seems to be better at anything that involves massive amounts of number-crunching and machine learning.

In the meantime, I’m Foursquaring my way to the next restaurant I go to!

[1] Some would argue that this kind of passionate bias would be good for a recommendation engine — the people with the loudest voices are maybe the biggest foodies. They know the scene the best! This turns out not to be true.

[2] I will admit that my argument only looks at quality of product. I’m not considering other important factors such as business leadership, mindshare, and financial strength.

[3] Based on a pretty lame calculation, TBH. Foursquare said it had 45M registered users in Dec 2013 (Source: Foursquare Raises $35M More, Says It Has 45M Registered Users | TechCrunch). Combine that with a Jan 2011 article where Foursquare said 50% of users are outside the US (Source: Foursquare Now Officially At 10 Million Users | TechCrunch): 45M × 50% = 22.5M. If anybody has a better number, please answer my Quora question here: How many users does Foursquare have as of June 2014?
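For what it’s worth, the footnote’s back-of-the-envelope arithmetic checks out, with the caveat that the 50% US-share figure is three years stale:

```python
# Sanity check of the footnote's estimate:
# 45M registered users (Dec 2013) * ~50% US share (Jan 2011 figure).
registered_users = 45_000_000
us_share = 0.50  # a stale Jan 2011 figure; treat with skepticism
us_users = registered_users * us_share
assert us_users == 22_500_000
```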

Designer Duds: Losing Our Seat at the Table

Posted by: Mills Baker, Categories: Blog


If design hadn’t triumphed by 2012, it had by 2013.

Three years after launching the iPad, Apple was the world’s most valuable company, and even second-order pundits knew why: design. Steve Jobs’ remark that design was “how it works” had achieved what seemed like widespread comprehension, and recruiting wars for top designers rivaled those for top engineers. Salaries escalated, but cachet escalated faster; entire funds emerged whose only purpose was to invest in designer founders, and with money and esteem came the fetishization of design, the reduction of designers into archetypes, the establishment of trade cliques and the ever-increasing popularity of trend-ecosystems.

There were valedictory encomia about the power of design to deliver better products and therefore better commercial outcomes for companies and better utilitarian outcomes for users. In his rather more sober but nevertheless remarkable talk at Build 2013, David Cole noted that thanks to Apple,

Taking a design-centric approach to product development is becoming the default, I’m sure it will be taught in business schools soon enough… This is a trend I’ve observed happening across our whole industry: design creeping into the tops of organizations, into the beginnings of processes. We’re almost to the point, or maybe we’re already there, that these are boring, obvious observations to be making. Designers, at last, have their seat at the table.

For those of us who believe in the power of design thinking to solve human problems, and to a lesser extent in the power of markets to reward solutions when the interests of consumers and businesses are correctly aligned, this was invigorating news. Parts of the technology industry spent much of the 1990s and even the 2000s misunderstanding what design was and how it could help improve products. There was a time, after all, when Apple was a laughingstock. Now, in part thanks to Jobs and Ive and the entire culture of the company, as well as its undeniable financial success, designers would be heard and could make bigger contributions to human progress.

It’s now 2014, and I doubt seriously whether I’m alone in feeling a sense of anxiety about how “design” is using its seat at the table. From the failure of “design-oriented” Path [1] to the recent news that Square is seeking a buyer [2] to the fact that Medium is paying people to use it [3], there’s evidence that the luminaries of our community have been unable to use design to achieve market success. More troubling, much of the work for which we express the most enthusiasm seems superficial, narrow in its conception of design, shallow in its ambitions, or just ineffective.

To take stock, let’s consider three apps which ought to concern anyone who hoped that the rising profile of design would produce better products, better businesses, better outcomes.

Dropbox’s Carousel

Dropbox isn’t an obvious candidate for a design-obsessed company, but under the direction of former Facebook designer Soleio it has nevertheless become one, stockpiling designers at an impressive rate with a relatively simple pitch: we have the world’s data, its photos, its documents, its digital lives, as a massive foundation. Build great products on top of that. One can see how this might seem plausible enough, and indeed Soleio has assembled a strong team. In the design community, anticipation for the fruits of their labor has been widespread, and Carousel seems to be the first indication of what they’ll be up to.

Carousel is an app for storing and sharing photos. Dryly described, it almost seems like it was released several years late by accident; after all, many solutions already address both needs. Carousel has nice touches —it attempts, with middling success, to enlarge the “most interesting” photo on a given view when it lays out your pictures, and it uses some possibly handy but easily forgotten gestures— but its main standout at launch was some gratuitously sentimental and derivative marketing. It’s honestly hard to determine what should be interesting about it, even in theory; it takes mostly standard approaches to organization, display, and sharing, and seems to do little to distinguish itself from the default iOS Photos app + iCloud Photo Sharing, let alone apps and services like Instagram, VSCO Cam, Snapchat, iMessage, Facebook Messenger, and so on.

Its value seems to be unclear to iPhone owners, in any event:

[Screenshot: Carousel’s App Store ranking, 2014-04-22]

If Carousel is intended to solve a user problem, neither I nor other potential users seem to be able to figure out what it is. It seems likelier to solve a Dropbox problem: how to get consumers to pay for higher tiers of Dropbox services by getting them to store all of their photos there. But Flickr, too, will store all of my photos, with additional functionality and without a fee. And Apple will also store many of my photos, and with iCloud Photo Sharing will let me share them in the manner Carousel does [4].

Despite an immense amount of press at its launch, Carousel is faring poorly in the App Store. Perhaps there are plans for the future development of more useful, differentiating features, but until then, it’s duplicative of existing and adopted solutions and seems to offer no incentive for switching from whatever one uses currently.

If you gathered some of the world’s best designers and gave them significant organizational support and all the resources they need, is an app which at best matches the functionality of bundled OS features from a few years ago what you’d expect?

It should go without saying that Carousel could be on its way to becoming a great product; it should also be acknowledged that absent intimate familiarity with its development, we can’t be confident who’s at fault for its paltry functionality and underwhelming differentiation. Perhaps it was rushed, poorly executed, or the fault of some errant executive. But that hardly accords with what one knows about Soleio, and in any event: the team seems happy with it.

But who is helped by this app? Whose life is improved? Whose problems are solved? What is now possible that wasn’t before? What is even merely easier?

Facebook’s Paper

Facebook has landed some of the best designers in the industry over the past several years, often acquiring their companies outright in order to do so; folks like Wilson Miner, Nicholas Felton, Mike Matas, and many more have gone over to the new Big Blue [5]. While Felton was reputed to be responsible for a Timeline redesign, Paper is known to be the work of Matas, along with several other folks in Facebook’s “Creative Labs” group.

Apart from its performance in the marketplace or success with users, Paper is interesting for two reasons:

  1. The continuing physicalization of the UI, which Matas helped along by designing iPhone OS 1.0 while at Apple, is important for making computers usable. A significant percentage of our progress in computational accessibility comes from the utilization of increased power or device awareness to deliver more persuasively “realistic” physical models for UI elements. First, the GUI; then additional colors, layering, and transition animations; now, velocity physics, the making of manipulable “objects” out of app elements, and so on. The better we get at this, the easier computers are to use and the more power we devolve to users, enabling their aspirations [6].
  2. Facebook spent lots of time and effort on creating a design/development environment to make UI elements like those in Paper easier to implement. They also open-sourced some of their work, helping others to physicalize their UIs, too. Given that Apple seems uninterested in this at the moment —focusing more on data, services, interconnection, and the like while iOS remains mostly a series of scrolling views with headers and footers— Facebook’s leadership is useful.
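The "velocity physics" idea above is simple at its core. Here is a minimal, illustrative sketch (in Python for brevity, and emphatically not Facebook’s actual code) of the damped-spring integration that this style of UI animation typically uses: each frame nudges a value toward its target with a spring force and friction, so motion carries momentum and settles naturally instead of following a fixed easing curve:

```python
# Illustrative sketch of damped-spring "physical" UI animation.
# Constants are arbitrary; real frameworks expose similar knobs.

def spring_step(position, velocity, target, stiffness=120.0,
                damping=14.0, dt=1 / 60):
    """Advance one 60fps frame of a damped spring toward `target`."""
    force = stiffness * (target - position) - damping * velocity
    velocity += force * dt        # semi-implicit Euler integration
    position += velocity * dt
    return position, velocity

# Animate a "card" from x=0 toward x=100; it overshoots slightly,
# then settles near the target, like a flicked UI element.
x, v = 0.0, 0.0
for _ in range(300):  # ~5 seconds of frames
    x, v = spring_step(x, v, target=100.0)
assert abs(x - 100.0) < 0.5
```

Because position and velocity persist between frames, a user’s flick can hand its momentum directly to the animation, which is what makes these elements feel like manipulable objects.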

That said, Paper is not actually a good product in itself, and users don’t seem to be keen on replacing the main Facebook app —which is quite awkward in its animations, webby in its janky scrolling— with it:

[Screenshot: Paper’s App Store ranking, 2014-04-22]

While this is better than Carousel, it’s still beneath apps like Keek, We Heart It, Kik, and even Google+. If rumors of poor in-app metrics like engagement are true, the download numbers are just the start of the problem. What Paper seems to be is a lovelier UI for Facebook. Path spent tens of millions of dollars attempting to achieve the same goal; both teams seem determined to ignore that for most users, the problems with Facebook do not actually have to do with how pretty it is.

To the extent that Paper is an initial step of UI renovation with substantial functional goals —for example, perhaps the “cards” of user stories have the ultimate aim of making it easier for neophytes to express what they don’t want to read or see by, say, flicking the card down— some of these criticisms are baseless. However, some of the people who made Paper suggest that in fact users are the ones who need to improve:

[Screenshot, 2014-04-22]

Hoping that users “worry more” about the quality of their photos seems like the wrong attitude to have. It’s worth noting that everyone I know who’s happiest with Facebook uploads whatever they want, including ugly, low-resolution photos, garbage meme-images, random, hyper-compressed videos, and the rest of the junk that they find interesting. It all looks insane in Paper. Wanting users to spring for DSLRs and learn how to shoot their kale salad with a shallow depth of field so that the very lovely new app you’ve built isn’t ruined by their tastelessness is exactly backwards.

Using Paper, I have a sense of anxiety: what if this is what designers make when not yoked to “product thinking”? What if Matas et alia sans Jobs or Forstall are capable of impossibly perfect physics in UIs, of great elements of design, but not of holistic product thinking, of real product integrity? What if design uses its seat at the table to draw pretty things, but otherwise not pay much attention to the outcomes, the user behaviors, the things enabled?

Because Paper, after all, not only adds little to Facebook per se, but is in fact feature-limited relative to the main app. And this is to say nothing of the strange information architecture in it, the issues with information density, and so on. What were they really solving for? Whose lives will be bettered by it? What has been enabled?

Jelly

Jelly is Biz Stone’s empathy-boosting app. Or it’s an app you use to get answers to questions. The marketing makes it hard to understand:

The idea for Jelly is a complete reimagining of how we get answers to queries based on a more human approach. Jelly is a mobile application that uses photos, interactive maps, location, and most importantly, people to deliver answers to queries. On a fundamental level, Jelly helps people. (source)

The idea is a reimagining? It is a reimagining of how we get answers based on a more human approach? More human than what? Than Google Search, which is made by humans to respond to your human inputs by connecting you to resources made by other humans? More human than Quora, which functions similarly?

Using Jelly to help people is much more important than using Jelly to search for help. If we’re successful, then we’re going to introduce into the daily muscle memory of smartphone users, everyone, that there’s this idea that there’s other people that need their help right now. Let’s make the world a more empathetic place by teaching that there’s other people around them that need help. (source)

Stone seems everywhere to hedge his bets about Jelly’s real purpose, and it’s not hard to understand why: a sufficiently vague target is harder to miss. On the other hand, using the app oneself is a depressing experience; my experience with it, despite being in precisely the demographic that is likeliest to use it heavily, bears out this writer’s opinion: it is a desert. That’s probably because no one is downloading it:

[Screenshot: Jelly’s App Store ranking, 2014-04-23]

To be clear: I do not believe that Jelly is purely a marketing failure. Its design is not good, despite being the startup of a fully pedigreed “thought-leader” in the industry, who says in interviews that “we designed a better way to ask a question.”

But when Google Search designed a better way to ask a question, the proof was in the answers. With Jelly, the answers are rare, slow-in-coming, often jokes, gags, or irrelevant comments, and have nothing like the crowd-vetted quality of sites like Quora. Jelly, by design, is a step backwards, re-emphasizing the quality of your own existing networks as though the very problem to be solved for isn’t the contingent availability of knowledge itself, distributed inefficiently and unequally through social connections!

What Jelly does is it uses photos, locations, maps, and most importantly, people from all your social networks meshed together into one big network. It goes out not just one degree but two degrees of separation. Your query is going to real people. And they either know the answer or they can forward it to someone in their social network. This is where the strength of weak ties comes in… You and your friends generally know the same sort of stuff. But then you’ve got that one acquaintance, that lawyer, say, who brings a whole new circle of expertise. So the queries jump into these new arenas, and within a minute you get back answers from people. You see how you’re connected to that person. A real answer from a real person. (source)

Hopefully you’re connected to lawyers, or to folks who know some! Otherwise, this “better way to ask a question” yields the same divisions society already has: some people know the right folks to get the right answers, and others don’t. It’s not hard to see why one particularly acerbic pundit called it “Yahoo Answers for the bourgeoisie,” just as Medium is a CMS for the bourgeoisie.

While Stone’s questions about M&A law and where to get the absolute best handmade bike probably get responses, for most of us, Google, Quora, Wikipedia, and dozens of other sources besides are better places to get questions answered.

Again one wonders: what were they designing for? What outcomes did they hope to catalyze through the software and service? Whose life will be improved, or even affected? How seriously are they even taking this?

Designer, Heal Thyself

It’s not fashionable to rain criticism on creative and entrepreneurial efforts in Silicon Valley, and I apologize to anyone rankled, vexed, or hurt by these remarks [7]. I also acknowledge again that, from outside of an enterprise, one’s analyses can be quite mistaken, and if I’ve maligned any apps or companies due to errant assumptions, I regret it. And as it happens: I use and enjoy Paper.

But for the design community, the issue is larger than anyone’s feelings, or even the success or failure of these apps. I worry about the reckoning to come when Square sells to Apple for less than its investors had hoped, or when Medium shuts down or gets acquired (or pivots to provide something other than an attractive, New Yorker-themed CMS for writers, the poorest people in the first world). While Biz Stone will walk away from Jelly smiling about yet another “valuable failure” and Soleio and Matas will always have their bodies of work, ordinary designers will be asked to please gather their things and leave the conference room in which CTOs and VPs of Sales and CEOs who remember how useless all of Square’s attention to detail turned out to be will resume making decisions. Design has, after all, passed out of vogue before.

In order to avoid losing its place atop organizations, design must deliver results. Designers must also accept that if they don’t, they’re not actually designing well; in technology, at least, the subjective artistry of design is mirrored by the objective finality of use data. A “great” design which produces bad outcomes —low engagement, little utility, few downloads, indifference on the part of the target market— should be regarded as a failure.

And if our best designers, ensconced in their labs with world-class teams, cannot reliably produce successful products, we should admit to ourselves that perhaps so-called “design science” remains much less developed than computer science, and that we’d do well to stay humble despite our rising stature. Design’s new prominence means that design’s failures have ever-greater visibility. Having the integrity and introspective accuracy to distinguish what one likes from what is good, useful, meaningful is vital; we do not work for ourselves but for our users. What do they want? What do they need? From what will they benefit? While answering these questions, we should hew to data, be intuitive about our users and their needs, and subject our designs to significant criticism and use before validating them.

Combining epistemological humility, psychological perceptivity, and technological-systematic thinking remains the best defense against launching duds, but necessary too is some depth of character, some intelligence about purposes, some humane empathy for those we serve. Because if what we design is shown by the markets not to have been useful, it’s no one’s fault but ours. And we shouldn’t think that others in organizations won’t take notice.

Notes

1 Path is a themed Facebook, little more; it’s a shallow variation on an existing product, and its lack of use reflects this. In a recent essay, Path co-founder Dave Morin argued for an approach to design which he called Super Normal, borrowing from a distantly-related set of ideas advanced by Jasper Morrison. Morin writes:

Imagine a basic metal bucket in your mind… To apply Super Normal thinking we start by looking at what is normal and then ask the question: What are the key problems? In the case of our basic metal bucket we can find a few. First, the metal handle cuts into your hand when carrying a bucket full of cold water. Second, when picking up a bucket of cold water the metal is freezing to the touch. Third, when pouring the water out, it’s hard to control the stream of water, causing you to lose water.

In thinking through these problems we can come up with some simple innovations that would make the bucket better. First, we can add a wood or plastic wrap to the metal handle, creating more surface area and thus a more comfortable carry. Second, we can wrap the entire bucket in a thin layer of plastic creating insulation when carrying hot or cold water. Third, we can add a spout to the side, making it easy to control the pour, causing you to lose less water.

It’s easy to see how Morin could mistake Path for design innovation when one reads this. To be clear: what’s needed isn’t plastic-covered buckets (or red-covered Facebooks). What’s needed is plumbing. Design is about solving problems that humans have, not problems that products have. We start with problems people have —how do I get clean water to drink, how do I fill my bathtub, how do I water my plants— and find the best practicable solution. It’s not a more comfortable bucket. Morin seems to believe design is varying the ideas of others in obvious ways. I disagree, and so does the market.

2 Square had a complex, ambitious, multi-phase product and services strategy that seems to have required levels of adoption they simply couldn’t achieve. Without substantial marketshare for the card reader, Wallet didn’t work in enough places to be worth downloading (and it had issues with reliability, too); without Wallet adoption rising, card reader adoption depended solely on its value proposition relative to other POS solutions, which are no longer unprepared for Square. Without merchants switching over to Square solutions en masse, Market doesn’t make as much sense, and other companies are already attacking the problem of e-commerce for smaller businesses, with more focus. Square Cash works well, but competitors like Venmo have a multi-year head start and, perhaps, a better model. Building a mutually-reinforcing ecosystem of payments-technology products and using leverage from one success to propel another, in sum, seems not to have worked.

3 While Medium fiddles with its organizational structure for approving journalists, adoption is sufficiently slow that they’ve resorted to paying people to use their product, since doing so brings no real functional or distributional advantages (beyond the pleasures afforded by their beautiful UI and visual design). Contrast this, say, with Quora’s value proposition: perhaps it’s less pretty, but for writers distribution, longevity, and much more besides full-bleed images matter. As it happens, I feel like Medium is approaching a painful moment Quora itself faced: early adoption by a certain sort of user can affect the brand in the eyes of a larger market of users. Medium is becoming synonymous with bloviating design and tech writing of the “thought leadership” variety and the occasional one-off confession, apology, or hatchet job. Its collections system doesn’t seem to drive much browsing behavior, and it certainly can’t afford to pay micro-celebrities and freelancers forever. And yes: they paid me for this essay.

4 I’m reminded of the short-lived Apple ad campaign which said that the first question in design is something like: “Does it have a right to exist?” Does Carousel?

5 Miner and Felton have both already left. People often ask how Facebook, which isn’t particularly beloved among designers or their “set,” can entice these sorts of talents. Beyond material compensation, I think it also has to do with something I wrote about here:

There’s a millennial element to insisting on living in public, but it’s also just an effect of the social media age. As it happens, I think this is the one unreservedly positive cultural effect of social media, and I assume this is how Zuckerberg et alia recruit idealists to work on social media products. Thanks to such networks, two things happen: (1) it becomes harder to conceal secrets, to hide ourselves and our behaviors and choices; and (2) it’s harder to ignore the true, unconcealed nature of others, their humanity, the validity of their behaviors and choices.

Together, these bring about necessary revisions in our moral standards and cultural judgments; while it is too slow for persons affected by discrimination and abuse, this process is unbelievably rapid by historical standards.

In particular, the transformation of American attitudes about homosexuality —the decreasing acceptability of using words like “gay” pejoratively, the commonplace presence of gay characters on TV, etc.— has occurred at breakneck speed, due both to activist political efforts and phenomena like George Takei’s presence in everyone’s Facebook news feeds for the past few years. Takei has 6M “Likes” on Facebook and over 1M followers on Twitter, lots of them heartland folks whose exposure to a “safe” and funny gay person changed how they thought; it’s harder to dehumanize those who appear alongside your family in your feed, making amusing observations and getting 100K likes from “regular people.”

Being brought into frequent contact with cultural output of George Takei and others probably did more to shift American attitudes than many would believe. That’s a foundational idea behind Buzzfeed’s LGBTQ coverage, and that they’ve been so successful suggests a lot about the centrality and importance of social media in culture.

6 See Paper Prototyping for more.

7 If it helps: I was not a success as a designer or as an entrepreneur in the marketplace, either. I mean: I’m not a success in any sense!

Apple’s Healthbook: More Data-Driven Design

Posted by: Mills Baker, Categories: Blog

Today, an Apple news and rumors site posted screenshots of Healthbook, which seems to be Apple’s first major software stab at what will likely become a major focus for them. We can consider the software and hardware to come in a few ways:

  • the product angle: this is an important, useful product that can help all sorts of people in myriad ways to live longer, have more energy, feel better, understand what drives their health, feelings, and abilities, etc. Helping the world get healthier is truly important, after all.
  • the “quantified self” movement, a niche trend among a relatively small slice of the population with disproportionate influence in Silicon Valley which seeks to gather more quantifiable data about all aspects of an individual’s life for the purposes of self-optimization.
  • the health and wellness sector, a panoply of industries and media operations which monetize, in various ways, the desire of Western populations in particular to improve their health, manage their fitness, etc. There’s a lot of money aggregated in all these industries; it’s hard to predict which, if any, Apple expects to disrupt, but there are many targets.
  • the “ecosystem” angle: with devices like a smartwatch, a smart ring (see: furbo.org: “Wearing Apple” for more on a possible ring), the Apple TV, vehicles with CarPlay, and so on, Apple continues to enhance the value proposition for iOS users who buy other Apple products. This also leverages their existing strengths; competitors in, say, watches or rings will also need to have iPhone-level CPUs wirelessly communicating with their sensors; unless they’re Samsung, that’s unlikely to be the case.

Taking a closer look at the screenshots reveals what —at present, in pre-release software— Apple thinks it can handle with coming iPhone and associated devices like the presumed watch or ring (which presumably has an enhanced version of the M7 co-processor and other sensors):

This card-based UI recalls Passbook (as does the name Healthbook, although it seems strange to base the names of marquee brand apps on the name “book”), but this array of functions is astonishing. How will they capture all of this information?

  • Emergency and Bloodwork seem likely to feature user-entered information; a screenshot of the Emergency card indicates as much, so it’s safe to assume that not all cards are associated with sensor data or automatic reporting. I assume this applies to at least Weight and Nutrition as well
  • There’s no current Apple-branded solution for persistently and automatically reporting Heart Rate, even though there are ways to measure it periodically and third-party solutions exist
  • I have no idea how Hydration, Blood Pressure, Blood Sugar, Sleep, Respiratory Rate, or Oxygen Saturation could work; rumors suggest that some of these are measured by novel sensors Apple is investigating, but this is all speculative. It’s worth noting that the leaked icon suggests confidence that Blood Pressure, Activity (calculated based on M7/8 data and possibly more; see Moves, an Activity Tracker for iPhone and Android for an example of how this works), and Heart Rate will be measured at launch:

[Leaked Healthbook icon]

Whatever sends this data (a smartwatch, a ring, or cool electrodes in Dalmatian, Space Gray, or Gold), what seems notable from a mobile design perspective is that hardware is again enabling innovation in a structural way: not merely by permitting software to run more quickly, render more, or hold more in memory, but by taking software into new domains.
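Of the measurements above, Activity is the most tractable to reason about with today’s hardware: it needs only step counts of the kind the M7 already records. A back-of-envelope sketch, where the stride-length and calorie constants are illustrative assumptions of mine, not anything Apple has published:

```typescript
// Hypothetical derivation of an "Activity" summary from motion-coprocessor
// step counts. All constants are rough, illustrative assumptions.
interface ActivitySample {
  steps: number; // steps counted during one sampling interval
}

function dailyActivity(
  samples: ActivitySample[],
  strideMeters = 0.75, // assumed average stride length
  weightKg = 70        // assumed body weight
) {
  const steps = samples.reduce((sum, s) => sum + s.steps, 0);
  const distanceKm = (steps * strideMeters) / 1000;
  // Common walking estimate: roughly 0.5 kcal per kg of body weight per km.
  const kcal = Math.round(0.5 * weightKg * distanceKm);
  return { steps, distanceKm, kcal };
}
```

A real implementation would calibrate stride per user and fold in accelerometer intensity; the point is only that Activity, unlike Blood Pressure, requires no new sensor.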

Thus, for a designer, Healthbook is interesting in the way that Reporter for iPhone is interesting: this is early software design work in a frontier area, one likely to be developed very quickly as competitors and imitators flood the market and consumers awaken to the possibility of “software eating the world” and “the internet of everything” and so on. I’ll call this area “data services” in this essay.

Since 2000, there have been lots of areas of intensely innovative activity in our industry. Some of them, like work on scaled computing —I include BigTable, MapReduce, data center designs, etc. in this— were driven by engineering but had significant consumer effects (enabling software like Google Maps to be developed and enhanced at reasonable cost, for example). Others, like web design in the mid-2000s’ Web 2.0 era, are too diverse to cover briefly here.

An obvious era is that of touchscreen and mobile computing, inaugurated in 2007 when the iPhone launched; this era entailed, among other things:

  • a re-invention of the general-purpose PC with total commitment to ease of learning and use for neophytes;
  • an expanded notion of who should be able to use computational devices, with no tolerance for “expert cultures”;
  • the discarding of anything that cannot be made simple: filesystems, simultaneity of app processes (with exceptions), wizards, configurations, user profiles, macros, scripting, etc.;
  • the reorientation of UIs toward the new inputs of touchscreen computing, with a corresponding simplicity.

Since this mostly meant, in practice, “making new iPhone and iPad apps” and “designing UIs,” there’s lots of momentum in software design behind visual innovation. Indeed, much of iOS 7’s ostensibly radical revision comes down to stylistic variance, content-first principles, and app chrome that looks less like chrome or hides when it’s not being used.

In other words: for several years, the “exciting” part of software design has been UI design. And UI design has brought enormous benefits because for decades, UIs were the pain point for computation. While there’s still work to be done, I think the decreasing yields of UI revision indicate that we’ve largely solved the touchscreen UI problems of our moment; you won’t help many folks by making an “easier to use” camera, an “easier to use” social network, an “easier to use” note-taking app. It’s all getting “easy enough,” you might say.

But Healthbook, Reporter, the aptly-named Automatic driving assistant, and the like show the more challenging type of design work that remains: building “data services” that automatically interact with our environments, automatically compute data, and automatically present their findings in useful ways.

One way to think about this is in a sequence of plausibly “exciting” consumer apps: in 2008, an app which allowed you to note what you ate, take photos of it, and estimate or input its caloric content would have been a useful development. In 2011, a consumer might not be excited unless the app guessed the food and calorie count from the photo; they might also have expected some recommendations based on what they ate.

In 2015, a consumer will not want to have to open an app and trigger an entry at every meal; she will not want to photograph her food or worry about the accuracy of the estimate. As sensors get closer to the body, disperse through our appliances (as with Nest), and envelop the home and commercial environments (like iBeacons), she’ll expect a system of quality to passively, automatically, and perpetually record what she eats, notify her if she needs to alter her normal behavior at opportune moments, and otherwise stay out of her way.

The designer’s goal approaches absence much more in this instance than in Jony Ive’s notion of “deference to content”; for data services to be all they can be, they must truly reduce the cognitive involvement —to say nothing of “burden”— of the user.

The future isn’t better-looking or even easier-to-use apps; it’s centralizing in an app the minimal management work required of users of data services which will monitor their health, note their tastes and behaviors for useful suggestions at the right moments, and record the parts of their lives they want recorded without turning them into photographers or diarists.

If Healthbook is any indication, then, the future isn’t on your phone; it’s in the interaction of your phone, various sensors and devices in your home, car, office, and city, and on your body, and servers / systems that machine learn and analyze your data for you. Pervasive and invisible, this area of opportunity will not reward those who simply have great UI ideas or excellent taste; while app interface design obviously remains crucial, the innovation and value to come won’t render in a PSD, but it will be all the more transformative as a result.

Textual or Visual UI Elements?

Posted by: Mills Baker, Categories: Blog

The field of design, like most early- or non-sciences, struggles with abstracting or generalizing its principles for many reasons —some addressable, some not— and as such many of its practitioners have only thousands of “rules of thumb” with scant systematizing or organizing structures behind them.

So there is no fixed answer; but here are some considerations:

Text advantages:

  • Text wraps, scales, and has been “responsive” for many years in most UIs. Since filenames and font-sizes vary, to say nothing of displays and different character sets, OSes, apps, and the web all handle variations in text’s dimensions very well; the conventions are established. Thus: text is more variable than a visual element, in most cases, and can display well in more varied conditions.
  • Text is universally comprehensible, if it’s read; while many users seemingly skip text anywhere they see it —or rather, they don’t see it— it is still less ambiguous than many of iOS 7’s iconographic glyphs. Visual metaphors are much likelier to be misunderstood than language, since they are usually representations of language, words abstracted into constrained illustrations.
  • Text is cheap. You can easily iterate on text, push changes to text, A/B test copy or textual calls-to-action, without bugging your designer (should your designer still be sequestered in the world of deliverable PSDs) or your UI engineers. While edits may sometimes have consequences one must address, they’re nearly always lesser than the comparable consequences of changes to a very visual design.
  • Related to its cheapness is its ubiquity: text works on any device, whether interaction is accomplished via touch or mouse-click or speech or shake. As devices and device-types proliferate, text seems likely to be the easiest means to achieving cross-platform coherence. See: One Interface to Rule Them All: iOS 7 & Future Apple Products.
  • Thanks to what experts call “the world wide web,” users are somewhat accustomed to clicking or tapping on text to interact with software.
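The “text is cheap” point is easy to make concrete. A minimal sketch of copy A/B testing, where the variant strings and the bucketing scheme are illustrative assumptions; a deterministic hash ensures each user always sees the same wording:

```typescript
// Deterministically bucket users into copy variants: changing the strings
// below is a one-line edit, with no new assets or PSDs required.
const ctaVariants = ["Sign up free", "Get started"];

// Simple unsigned 32-bit rolling hash over the user id.
function hashString(s: string): number {
  let h = 0;
  for (let i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) >>> 0;
  }
  return h;
}

// The same user id always maps to the same call-to-action string.
function ctaFor(userId: string): string {
  return ctaVariants[hashString(userId) % ctaVariants.length];
}
```

Each variant’s click-through can then be logged and compared; making the equivalent change to a bespoke graphic would mean re-rendering assets for every device density.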

Text disadvantages:

  • Text is commonly the content. When your Facebook, Twitter, Quora, WhatsApp, Baidu, Mail, or Messages apps have all-text UIs, the UI text competes with the text you’re actually there to read. Why not break up the visual plane with some easily-spotted and used visual elements?
  • No one reads text; no one reads instructions, captions, tooltips, tabs, titles, headers, footers, or paragraphs. Users click around, make their cow-paths, and stick to them; thus any gains in “explicitness” with text amount to very little.
  • While hypertext links on the web are widely understood, it isn’t clear that this convention is portable. Users may contextually recognize that underlined words (that trigger UI changes on hover) are clickable on the web without then thinking that all words in all UIs are clickable. Indeed: they’re not! Despite Apple’s efforts in iOS 7, it is clear that without affordances of some sort, there is too little distinction between “text” and “UI text” in many apps.

Visual element advantages:

  • UIs made of visual elements or visualizations of metaphors can be really intuitive. When one-year-old babies attempt to “Slide to Unlock” their parents’ iPads, it isn’t because it says “Slide to Unlock.” Pre-verbal and a ways away from understanding what “lock” means in this sense, they understand the motion of the elements and the responsiveness of the screen, and that’s enough.
  • There seems to be a correlation between use-expertise and preference for information density and textual UIs; while a novice is happy with the tradeoffs involved in Cover Flow, experienced users don’t benefit as much from it. They understand regular data views (like iTunes) and favor precision and access. Thus: visual elements are most useful in new use-cases or with newer users.
  • Visual elements can be beautiful, exciting, fun, funny; they have aesthetic qualities which exceed those of text (which is nevertheless capable of carrying aesthetic data), and can be enticing or appealing to users simply for those qualities. Aesthetics are important, especially experientially. When Steve Jobs bragged about OS X’s (now deprecated) aesthetic by saying “we made the buttons on the screen look so good you’ll want to lick them,” he wasn’t being superficial; making UIs approachable by developing visual elements is smart.
  • It is increasingly easy to code visual elements, giving them many of the advantages of text (scalability, ease of iteration, etc.); most of the disadvantages listed below can be mitigated or eliminated by using coded visual elements rather than rasterized image assets or the like.
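That last advantage is worth a sketch. A visual element generated in code (the roundButtonSvg helper below is a hypothetical of mine, not any framework’s API) scales to any size or color the way text does, with no rasterized asset per device density:

```typescript
// A button rendered as parameterized SVG markup rather than a PNG asset:
// resizing or recoloring it is a function call, not a Photoshop export.
function roundButtonSvg(size: number, fill: string, label: string): string {
  const r = size / 2;
  return [
    `<svg xmlns="http://www.w3.org/2000/svg" width="${size}" height="${size}">`,
    `  <circle cx="${r}" cy="${r}" r="${r}" fill="${fill}"/>`,
    `  <text x="${r}" y="${r}" text-anchor="middle" dominant-baseline="central"`,
    `        fill="#fff" font-size="${size / 4}">${label}</text>`,
    `</svg>`,
  ].join("\n");
}
```

One definition serves 1x, 2x, and 3x displays alike, which is precisely the iteration-speed advantage text has always had.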

Visual element disadvantages:

  • Adding visual elements, supporting interactive physics, or even just generating illustrative assets for a UI is costly relative to text, both initially and in terms of continued development / maintenance. This cost comes in several forms, but most important to good development practices is the cost to speed of iteration. Changes to text typically take the time to type them; new PSDs or renders or drawings can take much, much longer, especially when every visual item needs to be rendered for many devices, display densities, etc.
  • Visual elements are less universal and less variable. If someone with assistive features activated in their OS —say, larger fonts or higher contrasts or a “zoomed in” view— comes to your product, how will it fare? Text is far likelier than a graphic, interaction, or icon to scale appropriately, play well in a different configuration, and handle the many environments in which it might be displayed. In multi-device or multi-platform applications, the importance of this cannot be overstated.
  • Visual elements seem to age more quickly. To again bring iOS into the discussion: Apple’s transition from iOS 6 to iOS 7 was taken by many to indicate a maturing, increasingly savvy user-base, one bothered by (or at least indifferent to) skeuomorphic elements like the Corinthian leather of Find My Friends and so on. No one plausibly argued that the skeuomorphism made iOS difficult to use; they simply stopped liking it. Thus: increasing the amount of aesthetic design (in the form of visual elements) also increases the degree to which one must chase subjective tastes. While “timeless” designs —long-lived ones, that is— are possible, visual elements can age. Not all visual elements age, of course; but many do.

This is not comprehensive, of course; there are lots of other potential factors to consider, such as internationalization issues (with both the textual and the visual), device capabilities (screen, processing, graphics, and network considerations), ecosystem concerns (as with iOS), searchability (the semantic value of text is worth considering), and more. Thanks to what can be accomplished in CSS and HTML, visual elements needn’t cost bandwidth or come from Photoshop; at the same time, an ever-tech-savvier world population using the web means that visual elements may be decreasingly necessary in some applications. Newer applications and devices will always benefit from making their features and UIs analogously comprehensible, however, and for lots of uses visual UIs are simply better. It seems likely to remain a case-by-case or even element-by-element question.

The Return of the User Manual

Posted by: Mills Baker, Categories: Blog, GeekTalk, Mobile Development

Two challenging software design briefs to consider:

  1. We have a new system for controlling the features, functions, and apps on your smartphone. However: it is more or less invisible. It is also novel in what it can (and can’t) do, evolving to include ever-more capabilities each year, and somewhat unbounded in what it could eventually do. This system will be distributed to hundreds of millions of novice smartphone users. Please design the system such that users can understand what they can and can’t do as well as what’s happening as they interact with the system. Note: the system has very imprecise inputs and many inputs will fail.
  2. We have a new system that can perform many forms of calculation, charting, graphing, computation, comparison, measurement, and more. As before: it is novel in what it can (and can’t) do, evolving to include ever-more capabilities, and somewhat unbounded in what it could eventually do. This system will have much lesser distribution and will mostly be found by sophisticated users; however: it must have a conventional and familiar graphical UI. Please design the system such that users can understand what they can and can’t do.

READ MORE

Greg Jorgensen: “Why don’t software development methodologies work?”

Greg Jorgensen writes at Typical Programmer about a vexing question: “Why don’t software development methodologies work?”

Whether a methodology works or not depends on the criteria: team productivity, happiness, retention, conformity, predictability, accountability, communication, lines per day, man-months, code quality, artifacts produced, etc… I haven’t seen any methodology deliver consistent results…

READ MORE