Gmail's been redesigned

Gmail doesn't look quite the same anymore. I'm not sure when its visual look and feel last changed in any substantial way (2007, perhaps), but it has always had a no-nonsense, "just get on with it" design that has been quite refreshing in this day and age.


My initial thought on the redesign is that the new "Material design refresh" looks a lot better than the ghastly "Material 1" design paradigm, which Gmail in fact never really received. I also like how they've integrated some of the best Inbox features into Gmail (like quick access to attachments).

But perhaps it's no surprise that Gmail and some of the other Google web apps never got the material treatment. Why? Let's see. First, I can't help but notice that a complex app like Gmail — and by that I mean complex from an interactional perspective, i.e. lots and lots to click on at any time — also seems to reveal some inherent weaknesses of the "Material design refresh" design system and of most other flat or semi-flat design paradigms.

When you're designing a simple web app or a phone app, there are just a couple of things on the screen that you can interact with at any time. If you need to dig down into the details, there's typically a hamburger or settings menu behind which a lot of the complexity is hidden. If you only have a couple of things to click or tap on, users typically understand what's interactive just from the placement of elements on the screen, from icons, and/or from certain elements having a dedicated color that the user has learned means "tappable". Apple iOS's blue is a good example of this, in some ways paying homage to the blue link color of the early days of the web. Anything in that hue of blue can be tapped.

However, Gmail is the quintessential opposite type of app. Everything's clickable and everything's available at any time. If you're designing a more complex app with lots of things to click on, things get a little harder from both a design perspective and a user perspective. From the user's perspective: What can this system do for me? From the system's (and designer's) perspective: How do I show what I can do?

The GUI designers of yore — think the 70s — rapidly came up with the idea of designing clearly visible buttons to signal to the user: "Hey, I'm clickable!" Later on, they started to add depth and shadows and other visual effects to make it even more obvious to the user that the screen real estate occupied by this particular set of pixels is somehow special.

Although debated over the years, in the Human-Computer Interaction (HCI) literature this "specialness" is often referred to as the buttons "afford" clicking.    

But for the last 5-6 years or so, a different breed of interfaces has become popular. Gone are the clear, visual affordances of buttons with depth and shadows. In fact, gone are most visual adornments full stop. Instead, the so-called "flat design" paradigm has conquered the digital world.

While this blog post isn't going to delve into all the current flavors and history of flat design, let's just say it's been and still is a highly influential and fashionable design trend, impacting almost all digital design domains.

From what we've seen so far of the Material design refresh, however, there are a couple of interesting aspects to consider in relation to its flatness. In Gmail, at any one point in time there are a lot of clickable things on the screen. On a typical Gmail front page showing your inbox, you can click on about a hundred different things.

As traditional buttons are not allowed anymore in the flat paradigm, the system still has to show the user what can be clicked. So instead of affording that in the traditional way, by clearly delineating clickable buttons from other text using clear visual cues and color, the designers have instead chosen to add so-called hover effects: when you move the mouse pointer over a clickable area, the area changes its font, color, background, and so on. When you move the pointer away, it goes back to the way it was.
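To make this concrete, here's a minimal sketch in TypeScript of what such an interface has to do under the hood (all names are invented for illustration; this is not Gmail's actual code):

```typescript
// Hypothetical sketch: in a flat UI, clickability is only revealed on
// hover, so the system constantly has to work out which region the
// pointer is over and restyle it.

type Region = { id: string; x: number; y: number; w: number; h: number };

function hoveredRegion(regions: Region[], px: number, py: number): string | null {
  // Return the id of the region under the pointer, if any.
  for (const r of regions) {
    if (px >= r.x && px < r.x + r.w && py >= r.y && py < r.y + r.h) {
      return r.id;
    }
  }
  return null;
}

// With dense, adjacent regions (one inbox row per email, plus the
// buttons inside each row), almost every pointer move changes which
// region is highlighted.
const inbox: Region[] = [
  { id: "email-1", x: 0, y: 0, w: 800, h: 40 },
  { id: "email-2", x: 0, y: 40, w: 800, h: 40 },
];
```

In an inbox-like layout there would be on the order of a hundred such regions, which is why the constant restyling churn becomes so visually noisy.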

Hover effects are fine and there's nothing inherently wrong with them; in fact, they were originally introduced in web design to make interaction more fun, playful, and engaging. However, when you have a complex interface like Gmail, where pretty much every inch of the screen is a clickable area, moving your mouse pointer over the canvas is like constantly shaking a kaleidoscope — there's just too much stuff going on all the time.

To me, this makes interacting with the new Gmail a bit messy and visually tiring.

Why there is a notch on the iPhone X

Lots of people have had a go at Apple recently for a variety of reasons. One of them is the so-called "notch" that sits on top of the iPhone X and houses the components necessary to make Face ID work. Folks have called it ugly, a disgrace, an eyesore, a joke, and so on. Some even seem to tout it as a symbol of everything that's wrong with Apple, often in a somewhat high-pitched voice.

As a designer, I don't mind 'the notch' at all. In fact, I quite like it.

However, this little piece is not about whether I like it or not, it's about why there is a notch and why the notch is necessary given Apple's current design language and dogma.

I would describe Apple's current design language — both when it comes to its physical products and its virtual interfaces, but perhaps most pronounced in its physical products — as resting on two central themes: symmetry and minimalism.

Apple's version of minimalism, heavily inspired by Dieter Rams, both in his aesthetics and his principles for good design, has to do with taking everything that's "unnecessary" away from the designs. Or, following Rams' principle, "as little design as possible". This is perhaps most evident in Apple's design of the Apple Watch, the AirPort Extreme, the Apple TV, and almost all of their accessories such as cables, adapters, and so on. If something isn't absolutely necessary, it's up for grabs; it should be removed. For Apple, less is more, regardless of what Yngwie thinks.

The doctrine of symmetry rules all of Apple's product designs and is a contributing factor to why many people, myself included, think that they are beautiful as objects. Think for a second about some other Apple products such as the iMac 5k with its big bottom bezel and black frames of equal size around the screen, the MacBook Pro with its equal number of USB-C ports on each side of the laptop, its huge, centered touchpad, and — if you look closely — the fact that the touch bar is perfectly centered too, adding some dead space to the left to balance the space the Touch ID sensor on the right-hand side occupies. Not many companies would do this. But at Apple, symmetry is king.

With past iPhones, all the way from the first iPhone to the just-released iPhone 8, this visual symmetry has been created by framing the screen with pronounced, unapologetic bezels of equal size at the top and the bottom.

With the iPhone X's promise of removing the bezels completely, Apple's product design team was faced with an interesting design problem: how do we keep the visual balance?

There were obviously a number of directions the iPhone X design team could have taken. However, from the various interviews surrounding the release, it seems clear that Apple had been working on the new design for several years and that the leading ideal from the start was that of "just screen". Let's design a phone that is just screen, nothing else.

But how do you do that? How do you design a phone that is just screen, while it still does all the things consumers want it to do?

I've made these quick mock-ups to illustrate this design problem:


Let's go over these from left to right.

First, there's the Ideal design. This design is perfectly symmetrical all around. I'd bet this is what the iPhone X sketches looked like until everyone had to agree that Touch ID wouldn't work under the screen and/or that Face ID was a thing and needed screen real estate.

Second, the Notch keeps the symmetry from the Ideal but adds a cut-out around the part of the screen that houses the Face ID hardware. In some sense, this is an honest design — it's the ideal design that's open with the fact that it's not possible. Rather than trying to hide the sensors needed for Face ID, this design turns it into something very visible. Others have commented on Apple trying to make the notch iconic — perhaps as a bit of an afterthought — but I think there's merit to that argument. Instead of quickly drawing the iPhone as a rectangle with a small round button towards the bottom, we'll now start drawing the iconic iPhone as a rectangle with a thing sticking down from the top. 

The reason the notch works is that it solves a number of design and hardware issues while staying true to Apple's design dogma of symmetry and visual balance.

When you look at this design, your eyes will close the gap created by the notch and you will get the impression that the iPhone X is all screen, although it isn't. This is why Apple demands that developers not hide the notch behind a black background.

Third, there's the Top Heavy. This design can be achieved either programmatically, by adding a black background around the notch, or in the product design itself, by having a top bezel. The reason this design doesn't fly with Apple is that it introduces visual disharmony. Apple would never do this, and for good reason. If you only look at the top of the screen, this design looks fine. However, if you look at the entire product, it looks imbalanced, as if the top half doesn't belong to the lower half. Another example of this top-heavy design is LG's 5k "Apple" screen sold in the Apple stores in lieu of an actual Apple display. I'm surprised Apple let that one in the door, although they probably told LG it had to be black so no one would confuse it with an Apple product.

Finally, to remedy the visual imbalance of the Top Heavy design, Apple could have done what they have done in the past: balance out the design by also adding a bezel or black area towards the bottom of the device. I call this design the Samsung, as this is the path taken by Samsung's S8 line of devices as well as a host of other flagship phones released in 2017, such as the LG V30, LG G6, Google Pixel 2 XL, etc. Apple probably steered away from this design early on. First, it looks too much like phones already released, and Apple doesn't want to be seen as copying the designs of others. Second, as much as one might wonder why on earth the bezels on the iPhone 8 and the 8+ have to be so gigantic, they are in fact quite well-proportioned in relation to the size of the device and the display. The much smaller top and bottom bezels of 'the Samsung' design just aren't as balanced and harmonious.


What You're Buying Is an Ecosystem that Comes with a Phone

A recent trend on technology websites has been to try to dumb down fairly complex and multifaceted matters into bite-size, ELI5-type articles, with the basic stance that you, dear reader, need to have something explained to you.

I'm not really sure why this is or where it came from, but I can see marketing folks running around the editorial room shouting "—Hey guys, great great stuff... but we need something that captures people's attention — they're seeking fast answers. Millennials, ok? MILLENNIALS!! OK!! Any questions?"

These articles are often called things like "Here's the best TV for you", "We've picked your next fast food", "Your next car is electric", "Eat nothing but carrots", and so on.

The Verge is an online tech and culture website I usually take pleasure in reading. However, their latest addition to the buzzfeedy clickbait genre outlined above is called The Best Phone You Can Buy Right Now (2017). Their suggestion is the Samsung Galaxy S8:

“Samsung’s Galaxy S8 and S8 Plus is the best phone for most people. It’s available across all four US carriers and unlocked. It has the best display on any smartphone right now, a head-turning, premium design, a top of the line camera, reliable battery life, and fast performance.”

Confusingly, they later suggest that the iPhone 7 is an option “If you’re not into the S8’s curvy design or are locked into Apple’s ecosystem”.

There are many possible angles from which one could try to approach this article, such as who this article is actually for (assuming that most people who find their way to a semi-obscure website such as the Verge have probably heard of both Samsung and Apple already) — but let’s just focus on the “ecosystem” part mentioned in passing above.

As a disclaimer: I'm a long-time user of both Apple's iPhone (since model 1) and Android. I currently have and use both an iPhone 6S and a Nexus 6P. I'm not a strong proponent of either and see pros and cons with each platform and ecosystem. In this, I think I differ from most of the authors of these articles, which unfortunately tend to be written by people with a very strong preference for either Android or iOS. Android fans tend to stress the superior tech specs of Android phones, especially per dollar spent, whereas Apple fans tend to stress the quality of the user experience and of the third-party apps.

This actually gets us to the point here. When you choose a phone in 2017, you're not primarily picking a phone; your main decision is choosing between two competing, behemoth ecosystems: Apple's or Google's.

Sure, there's a bit of overlap (in particular, Google's ecosystem can bleed into Apple's, and Apple has made a few lame attempts at providing some of its services on other platforms), but basically, you pick one or the other as your go-to place for things like cloud storage and backup, mail and calendar, and media consumption.

The phones that are currently for sale are in some sense just the latest physical incarnations of these competing ecosystems, whether they are from Apple, Samsung, Google, Nokia, HTC, Huawei, LG, OnePlus, Sony, or what have you.

There's a new iPhone every year and a never-ending stream of new Android phones – although the one that counts, the latest Galaxy from Samsung, is also updated on a yearly basis. The tech specs of the latest and greatest phones in terms of CPUs, storage, cameras, screens, and so on are incredibly similar, to the point of being identical, across almost all the so-called 'flagship' phones. Sure, there are some small differences, but these mostly only matter until the competition's upgrade cycle kicks in, which tends to level everything out again. So right now, as duly noted by the Verge, the Galaxy S8 "has the best display on any smartphone right now".

A few observations here. First, what's meant by "best display"? Second, note the "right now" disclaimer.

The S8 has a great display, no question about it. It's one of the first phones to shrink the bezels around its main screen to such an extent that the phone and the screen become one. This isn't really a device with a screen on it; it's more a screen wrapped around a device. The screen's curved edges add to the feeling that what you're holding is a screen, not a 'device'. The Galaxy's screen is also very bright, has a ridiculously high resolution for its size, and uses HDR and other display techniques to boost colors and contrast to make the image pop.

Many people find that these effects produce a "better" image. Personally, I find these images artificial, plastic, and fake-looking, and I much prefer the more natural image reproduction offered by the iPhone and most of Apple's products (as well as by some other Android phone manufacturers, for that matter). What this points to, however, is that "best" is a difficult concept when it comes to things where personal preference matters.

Similarly, the Galaxy’s camera is indeed “top of the line”, as argued in the Verge's article, but if you look at side by side comparisons between images taken with different smartphones you see that, first, they differ very little in anything measurable (they all have very similar color depths, resolution, etc.), and, second, where they do differ is almost entirely in their subjective aesthetics — color temperature, saturation, differences in fake bokeh, low light compensation, etc — most of which is a result of differences in post-production software processing, not hardware specs. Again, “best” here is in the eye of the beholder.

Additionally, the differences between the phones tend to be evened out very quickly. Apple's new phone is due to come out later this year — and by the way, it seems to look very similar to the S8, Andy Rubin's Essential Phone, as well as the LG G6 and V30. The next Pixel? Chances are it'll be an all-screen phone too (or not, surprisingly enough).

What this generation of phones shows us more clearly than ever is that phones-as-devices are becoming less important; they are gradually becoming just windows into the software — software which in itself is just the machinery needed to access the underlying ecosystem. More than ever, what sets phones apart in 2017 is what's on the screen, not the screen itself or what's literally behind it.

The argument is that the choice you make when buying a phone is first and foremost not about the phone you buy, but the ecosystem you invest in.

If we buy this argument, then there are three relevant questions you need to ask yourself:

  1. What user interface do you prefer, Android or iOS? Both have pros and cons, yet they are becoming increasingly similar in some key areas. In my view, iOS is a much more polished user experience and I'm to this day surprised that Android isn't catching up faster in this area.
  2. What ecosystem is right for you? Google’s or Apple’s? Both have pros and cons. In my view, Google's ecosystem is better for your own stuff (such as the integration between Gmail, Google Drive, and Google Photos) whereas Apple's ecosystem is much better when it comes to entertainment. iTunes has a bad rep, sure, but it's also very functional.
  3. How privacy conscious and concerned are you? If you are concerned about privacy, you would naturally gravitate towards Apple's ecosystem. This is not to say your data is safe with Apple, but unlike Google, Apple's key "product" isn't your data. What's a bit counterintuitive in all this is that it's probably less likely that Google gets hacked than Apple, so in some sense your data is probably more secure with Google; but on the other hand, Google is using that data in a variety of ways, not all of which you're aware of.

Looking at it this way: when you take a photo with your phone, what happens to that image after you've taken it? What would you want to happen to it, or not happen to it?

Talking to Digital Assistants Is Weird. What Can We Do About It?

It would be interesting to see actual usage data on voice-activated systems and devices such as Apple's Siri, Amazon's Alexa, Google's Assistant, and Microsoft's Cortana. I suspect actual usage per user is very low.

From a user experience perspective, there are some fundamental issues with talking to a bot. First, it’s an awkward thing to do in public. Humans are social creatures. Walking around and seemingly talking to yourself has never been that cool.

Second, how do you talk to a bot, especially in public? Siri is not a real person (promise!) and you don't have to talk to her politely, yet a lot of people do: "Hey Siri, can you please tell me the weather forecast for tomorrow?" On the other hand, if you walked around and commanded Siri like a dog, would that be socially acceptable behavior?

Third, it's not just about appearing weird, it's also about privacy. Unlike Facebook and Instagram, which are heavily curated and hands-off in the sense that the audience isn't actually present, in real life people don't want other people to know what they're up to, what they're curious about, and what the weather is like in the Twin Cities.

This could potentially also suggest why Alexa is perhaps the most useful, and also the most used (question mark), of these devices. First, although Amazon is desperately trying to find other settings for it, Echo devices are typically used in your home. As you're not in a public setting, you might be a bit more relaxed about what you say and how you say it. Second, Alexa can meaningfully connect to some of the infrastructure in your home and bring it to life. I would suspect anyone who has an Echo device and any form of smart lights has hooked them up together and is using them as the killer demo for friends and family. How useful, more convenient, or faster it really is, though, well. It's a good demo.

The rise of speaking assistants over the last 5-10 years is interesting from a lot of different perspectives. One of them is that it has been driven mainly by technological advances, not by user needs. However, as users don't always know what they will want in the future (and no, I'm not going to quote Henry Ford here), that's fine with me. What I'm a bit more curious about is the way the speaking assistants' designers imagine the actual interaction between the human and the system.

The intended interaction is a form of dialogue-based interaction, an interaction style that research in Human-Computer Interaction (HCI) has shown to have a number of very clear disadvantages compared to other interaction styles, including the lack of visual exploration made possible by direct-manipulation interfaces (and a few advantages too, many of which are unfortunately lost when you're not typing your commands but speaking them).

The most obvious problem with a dialogue-based interface is that the capabilities of the system, i.e. what, for instance, Siri can actually do for you, are hidden from you as a user. You are the one who has to tell Siri what you want her to do. This is a great idea in theory, but how do you know what she can actually do? Like most users, you probably start guessing and probing the system with things you think it might know. Unfortunately, Siri often replies with "I didn't quite catch that", or she simply tells you that she can't do that, or she has no idea what's going on and just performs a web search for you, hoping she gets away with it.

If you instead have a visual interface, say a menu on a website, the capabilities the system holds — i.e. what the menu can help you do — are visible to the user. You can glance over them until you find what you are looking for and then tap or click on that.

With voice, things are different. The system must either tell the user what choices are available (as typical phone menu systems do, i.e. "Press 1 to…"), which is very inefficient as you typically need to listen to all the choices before you pick, making glancing or quick browsing more or less impossible, or — and this is the choice of the digital assistants — not really give the user anything and expect them to figure it out themselves. The idea here is that the digital assistant should be clever enough to interpret the user's imperfect (from a system perspective) commands and be able to act on them anyway. So far, this isn't really happening.
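The contrast between the two strategies can be sketched in a few lines of TypeScript (all names and stub replies are invented for illustration):

```typescript
// Toy sketch contrasting two voice-interface strategies: enumerating
// every option up front vs. guessing the user's intent and falling
// back when the guess fails.

const capabilities: Record<string, (arg: string) => string> = {
  weather: (city) => `Forecast for ${city}: sunny`,   // stub replies
  timer: (mins) => `Timer set for ${mins} minutes`,
};

// Phone-menu style: every capability is announced, but the user has to
// sit through the whole list before choosing.
function enumerateOptions(): string[] {
  return Object.keys(capabilities).map(
    (name, i) => `Press ${i + 1} for ${name}`
  );
}

// Assistant style: the user says anything; the system tries to match an
// intent and, like Siri, falls back to a web search when nothing matches.
function handleUtterance(utterance: string): string {
  for (const [intent, action] of Object.entries(capabilities)) {
    if (utterance.toLowerCase().includes(intent)) {
      return action(utterance.split(" ").pop() ?? "");
    }
  }
  return `I didn't quite catch that. Searching the web for "${utterance}"...`;
}
```

The menu style makes every capability visible but slow to browse; the assistant style is fast when the guess lands and opaque when it doesn't, which is exactly the trade-off described above.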

It would be fascinating to study the history of the digital assistants and why they have such a strong following in the tech side of user experience, particularly in computer science. Apart from being an interesting computational challenge involving natural language processing and machine learning, I think Star Trek, 2001: A Space Odyssey, and Dick Tracy might be worth investigating.

So then, what can be done to make them more usable?

Until it’s been released, it’s of course difficult to say what Samsung’s new digital assistant Bixby really is. Hopefully, though, it will be different from Apple’s Siri, Google Assistant, Amazon’s Alexa, and Microsoft’s Cortana, so that at the very least they are not all trying to do the same thing.

In my view, one way in which voice could actually be useful is for what we can call hand, eye, and voice coordination. The idea here would be that you could operate your smartphone, tablet, or PC by grabbing on to some on-screen object using touch or the mouse pointer and then simultaneously tell the system to perform an operation on the thing or things that you’ve selected. For instance, you could tap and hold a picture in a gallery and tell the system to rotate it. Another thing voice could be useful for is to bring to the screen objects that aren’t there: for instance, in a vector graphics application you could say “add circle”.
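As a toy sketch of how such hand-and-voice coordination might be wired up (all names here are invented; this is not how Bixby or any shipping assistant actually works):

```typescript
// Hypothetical sketch: touch supplies the object, voice supplies the
// operation. All names are invented for illustration.

type Shape = { kind: string; rotation: number };

class Canvas {
  shapes: Shape[] = [];
  selected: Shape | null = null;

  // Finger down (tap-and-hold) on an object selects it.
  hold(index: number): void {
    this.selected = this.shapes[index] ?? null;
  }

  // A simultaneous voice command operates on the held object, or
  // summons objects that aren't on the screen yet ("add circle").
  say(command: string): void {
    if (command === "rotate" && this.selected) {
      this.selected.rotation = (this.selected.rotation + 90) % 360;
    } else if (command.startsWith("add ")) {
      this.shapes.push({ kind: command.slice(4), rotation: 0 });
    }
  }
}
```

Holding a picture and saying "rotate" then maps to `hold(i)` followed by `say("rotate")`, without any menu ever appearing on screen.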

This could be a very powerful way to make more complex interaction possible on smartphones without resorting to interface ideas such as contextual menus, which don't really work well in non-workstation settings. I don't know if this is the way Samsung's Bixby will operate, but I hope so. In a research project together with ABB, we have explored a very similar notion in some depth by combining a visual interface with eye tracking and gesture recognition. This was intended for an industrial control room, which is typically a very complex environment when it comes to interaction. In our prototype, the user is able to gaze at an object visible on one of several screens and, while focusing on that object using gaze, manipulate it in various ways using gestures. The video is below. Read more about this project here:


Update (March 29): The S8 has now been released and Samsung describes Bixby as "a bright sidekick." There's also a dedicated hardware button for it, indicating that Samsung believes in it. It will be very interesting to see what it can do and how it differs from the other voice-driven assistants.


Do You Need to Know How to Code to be a Designer?

Wired has published an article about John Maeda's annual 'Design in Technology' report, called "John Maeda: If you want to survive in design, you better learn to code" that's worth reading — and so is the report itself.

While I sometimes find John Maeda more the Don Cherry of design than the Warren Buffett (as argued in the article) this is all reasonable stuff to me. Rudimentary coding skills are key for any designer in the digital space. But then again, understanding users and being able to use that understanding in various creative ways to come up with ideas for, shape, and build new things are things that have always been asked of good designers. In some sense, this ability is design — it's not new by any means.

What's changed, I think, are two things. First, design has matured in business and is now a more integrated part of a wide variety of industries. This integration increasingly requires designers not to sit by themselves in their fancy design studios but to be part of multidisciplinary teams of business people, marketers, and engineers. The challenge for design in this integration has been finding ways to operate, to communicate, and to make its voice heard.

This brings us to the second point, which is that most, if not all, of the products and services shipped today are digital in one way or another. Back in the day, designers used to make fancy wooden or clay models of their designs that everyone could gather around, look at, and critique. While this still happens, most designs these days aren't as tangible, physical, and fixed. How do you make a clay model of a service? So, while we're not primarily making physical models anymore, we're still making models — but we're now making them in what the more theoretically interested interaction design research community has for about a decade been talking about (and debating) as 'the digital material'.

A problem design as a discipline has had — perhaps especially in a business context, but also in its relation to engineering — when moving from shaping physical models to designing in the virtual and intangible space has been how to express and stand up for design ideas early on, and how to make them tangible so they can be 'sold' to the C-suite, marketing folks, business people, and the engineers. What is the best language for design to communicate its ideas?

There are three main phases here. First, as noted above, back in the day design used to have an ace up its sleeve when it could invite everyone to an event where the model was to be revealed, preferably to a unison 'aooow!' As the physical aspects of design work became less and less important, design searched for a new megaphone through which to speak. We've made stumbling attempts in 3D modeling, coding, video making, animation, VR, AR, etc. However, I think in the end the fallacy of design was to lean too heavily on the business side of things and adopt Powerpoint and the deck as its main vehicle of communication.

If you're in a meeting and there is a deck, everything in that meeting will center around the deck, and a deck is generally a terrible way of presenting design ideas. I firmly believe that interesting design directions such as service design are currently being held back by having adopted a style of communication which is too linear, logical, fact-based and fact-driven, and just isn't a very designerly way of working. 

While decks might get some altitude with management and marketing, they don't speak to engineers. Engineers hate decks. Engineers generally want to see what the idea is, what it should look like, and how it should work, so that they can figure out how it can be built. In siding with the business side, I think design is losing some of its strong ties to engineering. Historically, many product designers have teamed up with engineers and worked closely together to make their ideas happen. In the last decade or so, some of the most successful design-driven businesses, such as Apple, have been good at teaming strong design up with strong engineering. For this to work well, engineers need to be equipped with a sense of why design is important and needed. Likewise, designers need to understand the engineers. One way to do this is to learn to speak their language, literally.

Over the last 10-15 years or so, designers have tended to team up with business people rather than engineers to make design happen on a larger scale, and speaking through Powerpoint is just one aspect of that. This is all good and has perhaps helped move design up the food chain, but I think one of the points Maeda is making is that design also needs to reconnect with the engineers to make its ideas come to life. This requires CoffeeScript and C#, not sandpaper and scrapers.

Tesla's Real Game Changer Is Maintenance, or Rather the Lack Thereof

Trains and hyperloops for sure, but electric vehicles—autonomous or not—are the future of personal transportation. Yet the truly disruptive dimension of electric cars is their reliability. This also partly helps explain why dealers don't like them. Tesla sold about 50,000 cars in 2015, a very small portion of the 17.5M cars sold in the US that year. So why are the dealers doing everything they can to try to stop them—often using what appear to be ludicrous arguments of 'unfair competition' as well as state legislation that awkwardly seems to fight innovation and new business models?

Reliability is an overlooked issue in this regard. Not only is Tesla trying to upgrade the car-buying process from its current 1950s model, sidestepping the dealers to gain full control of the end-to-end car manufacturing and sales process. With the reliability of electric cars, Tesla is also revolutionizing car ownership. Traditionally, dealers have made a small cut on every new car they sold or leased, but in truth they make most of their money (think 80/20) from selling products designed to need service every 6-12 months and, after a couple of years, to break down at regular intervals.

Electric cars hence threaten to shut down this vital income stream for the dealers. Even though Tesla is selling comparatively few cars, dealers are worried that other car manufacturers will eventually figure out where the market is heading and follow suit. This means that BMW, Mercedes, Audi, Ford, etc. will release more and more mainstream, low-maintenance electric vehicles. Even if these manufacturers continue to work through existing dealer networks, the dealers don't see where they would make their money.

Edward Tufte and Interaction Design

Went to a full-day course today here in NYC given by Edward Tufte, a legend in the field of information visualization. I've always enjoyed reading Tufte's work. His historical odysseys, writing style, thoughtfulness, and carefully crafted books should be required reading for everyone from business school MBAs to computer science students, the arts, the social sciences, journalism, and design education. His books are that good and that important. For me, the central theme in Tufte's work has been the integration of text and visuals, content and form, which has been an important source of inspiration for my own work.

I'm not sure what I had in mind for a one-day course, and I did enjoy most of it, but I left feeling a little shortchanged. Perhaps the course is mostly meant as a basic introduction to his work rather than a deepening of it? His presentation style is also slow, quite likely knowingly so—deeply rooted in the 'old American male professor' genre—but still slow. I like that he doesn't use PowerPoint slides but instead builds up a story (slowly) around a couple of key images that he zooms in and out of (although, to be honest, some of his pictures looked a lot like slides!). I also really liked that he just jumped right into it, on the hour, without even the shortest "hello and welcome." Refreshing!

Tufte's work is hence utterly relevant to so many different areas and has impact and implications for even more fields. Yet one of the areas I feel his work is relevant to, but that he hasn't quite grasped, is my field—interaction design. He's hovering over a host of relevant topics here, for sure, but doesn't quite get the details right, which unfortunately for him is exactly what he keeps calling other people out on, so...

First of all, his thinking in this space seems a bit dated. For instance, Tufte kept referring to some Dell laptop where the scrollbars apparently covered 11% of the screen real estate. That's a relevant anecdote if the year is 2003, but not really in 2016. In a world of smartphone apps, retina displays, tablets, hamburger menus, notifications, creative online typography, and the wild west that is the modern web, there are so many more recent examples of the same idea (i.e. badly designed interface elements that hide rather than promote content) that this rather archaic anecdote serves to confuse rather than enlighten. There were a few other examples like this as well, including web designers misapplying the short-term-memory "magical number seven, plus or minus two" theory. I'm sure this has happened, but probably not in the last decade, and it's certainly not a common occurrence anymore, if it ever was.

There are so many examples of Tufte's work that are still so relevant for today's web and app designers, so why not talk about these instead? He mentioned one in passing: that many sites today are conceived and designed to be responsive, i.e. aiming at providing an 'optimal reading experience' regardless of what device/resolution you use—in effect separating content from layout and function. I think it's fair to say that Tufte's collected works can be seen as a critique of this very relevant and timely design idea. So why not spend time on massacring this? I would.

Tufte also mentioned that he was one of maybe 10 people in the world who think theoretically about these issues. Again, he's been at it for a while, and this was maybe, even probably, true at some point, but I also think it's a bit dismissive of what has happened in the field in the last 20-30 years. Folks from all kinds of (academic) disciplines are doing it now: human-computer interaction (HCI), interaction design, philosophy of technology, and design research, as well as a host of non-academic thinkers using blogs, internet-based magazines, professional conferences, workshops, etc.

Third, Tufte's only substantial idea about interaction design (at least as it was framed at this talk) echoed the Heideggerian notion of not letting the interface get in front of the content. This idea was popularized in HCI and interaction design by Winograd & Flores in the late 1980s, and appeared even earlier in the philosophical field called philosophy of technology. It is an idea I've drawn on heavily in my own work, as have several other researchers and practitioners in interaction design.

Yet, the problem here seems to be the distinction between form and content. In all his work, Tufte shows that these go best together if they are considered hand in hand. 

Let's look at this in a bit more detail. One of Tufte's recurring rhetorical refrains is that you now have better tools at your disposal in your smartphone than the ones you use at work, and that we should all rise up and demand at least equally good tools for 'work'. That's fair, but what Tufte misses here is that this also means there is a substantial overlap between "work" and "leisure"—or whatever we want to call it when we're not actually 'working'. I think one of the fundamental shifts in interaction design over the last 15 years or so is that the computer just isn't something we think of as a machine for work anymore. It's so much more than that. We use our PCs, laptops, and smartphones to mindlessly scroll through Facebook, play games, pass time, buy stock, watch movies, find plumbers, stalk coworkers on LinkedIn, read a text, write novels, keep swiping left and only occasionally right, anonymously rant on online fora just because we can, check out new music, do some work stuff (mostly email), create graphs for our kid's soccer team, kill time before we can get out of here, plan the holiday in Cape Cod, and so on and on and on. All using the same magical machine.

With this in mind, I think Tufte's explicit notion that the primary role of interaction design is to make the interface disappear in favor of content is still a relevant perspective in many ways. However, it also implicitly suggests a rather old-school, work-oriented perspective lurking underneath—the assumption that the user's only goal in using a computer or a smartphone is to get to the 'content' that its interfaces are hiding. It often is, but not always. Such a view is not enough to understand what interaction design is today. In what I have called the 'third wave of HCI', we see, for instance, websites and apps where the interface is knowingly designed to be unclear and fuzzy, and it is the user's task (or fun, to be more precise) to figure it out. Here, the interface and the content blend into one—they become the same thing. The interaction itself becomes valuable and meaningful, not just the so-called 'content' it is supposed to hide or show. Computer games have always had an element of this. What makes Flappy Bird irresistible is the interaction, not the content.

At the end of the day, literally, I left Tufte's talk not bedazzled, but hopeful. Tufte's thinking is still relevant for interaction design; it might just require someone with more detailed knowledge of the area to interpret, see, and further develop its significance. One potential path is the current interest in digital assistants such as Siri, Alexa, Cortana, and 'OK Google'. Applying Tufte's information visualization principles to these would probably reveal quite a few design obstacles to overcome in the next couple of years.

That said, I ended up taking a lot of notes and did get to doodle a bit too. This one, for instance, I call "A Bear with Many Faces" (yes, of course I name my doodles!)

Towards Integrated Headphones? On Apple and the Rumored Removal of the Headphone Jack

Quite a few people, not least tech bloggers, seem to almost violently oppose one side of design I’ve always thought Apple does well, at least on the hardware side—that good design is as much about taking things away as it is about adding them. Apple has done this with the floppy drive, the CD drive, the VGA port, and lately—with the rather incredible MacBook—with all but one port. Others have quietly followed a couple of years later.

Some of these bloggers are now upset that the next iPhone might not have a traditional headphone jack: the 3.5 mm stereo connector. This connector, which of course is analogue, derives from a design invented in the 19th century for telephone switchboards. The same basic design is still in use for connecting a wide variety of audio peripherals, from headphones to electric guitars.

That the connector is old is not the reason it might have seen its prime, however. In many ways, the 3.5 mm jack is the perfect analogue audio connector: it rotates 360 degrees, you can charge your phone while you listen to music, the headphones themselves never need charging, and so on.

If the rumors are true, I am sure Apple has converging reasons for removing it. First, phones keep getting thinner and thinner, yet with huge batteries inside, and at some point size and real estate do become an issue. Here, Apple's engineers probably struggle with the length of the connector, not necessarily its diameter. Second, I would be surprised if there isn't an element of selling-new-headphones here too. Beats, after all, is an Apple company, and yes, you do wonder which company would be ready with a line of USB-C headphones should Apple decide to go with the new connector. Apple's ecosystem is important, but the firm still makes a lot of money selling stuff. If they decide to go with the Lightning port, which I dearly hope they won't, they will also force manufacturers to pay up for using it. Additionally, on a paranoid note, the move could potentially be DRM oriented, but surely that's not the case, right? RIGHT?

Third—and worth spending a bit more time on—we have ‘other technical reasons’:

Here, a USB-C port (let's hope they don't go with the Lightning connector, although knowing Apple that's probably not unlikely) would in the long run actually offer increased compatibility between devices. As the traditional headphone jack is analogue and really just designed to carry an audio signal, various ‘hacks’ have been made to its design along the way to allow it to do more. The iPhone, for instance, uses a 4-conductor (so-called TRRS) phone connector for its headset to allow for a microphone and control buttons for play/pause and volume. Other vendors have made other design choices. This means that Bose’s quite excellent QuietComfort 20 noise-cancelling earpieces come in two different versions, one for Apple devices and one for Android. As an active user of both an iPhone 6 and a Nexus 6P, I find this surprisingly annoying. I’m begging for the industry to widely adopt USB-C as the standard for all kinds of peripherals, regardless of type. I think Apple deserves some credit for paving the way (and taking the bullet, too), and I’m surprised other vendors aren’t following suit—yes, looking at you, Samsung and Google.

The other reason—and for me personally the primary one—why I’m not so sure dumping the headphone jack is such a bad idea is also related to it being analogue. This one, however, has to do with sound quality, something I care about.

As the traditional headphone jack serves the headphones you plug into it with an analogue signal, this signal has to be converted from digital to analogue and then amplified before it can leave the phone through the jack. In other words, in the signal chain from wherever in the cloud your music lives to your ear, there has to be a digital-to-analogue converter (a DAC), an amplifier, and some form of speaker system (such as your headphones). Today, everything except the latter typically lives inside your smartphone or your computer. Almost without exception, these built-in amplifiers are underpowered, of poor quality, or both, which in turn makes them unable to drive anything other than the crappy kind of earpieces that came with your phone. Hence, even if you have a good pair of headphones, they do not really sound as good as they could on your mobile device.

There are at least two things to consider here. First, there has been a trend towards wireless Bluetooth headphones. As often tends to happen, this technology was released before it was ready for prime time. Early implementations of Bluetooth headphones were buggy and the sound quality was terrible. While the technology has come a long way in the last couple of years, Bluetooth still has its limitations and quirks, mainly because it uses the same wireless frequency, the 2.4 GHz band, as practically everything else: wireless mice and keyboards, WiFi, microwaves, you name it. Still, Bluetooth audio is becoming a viable alternative to the headphone jack.

Second, in an increasingly broad circle of people who are actually interested in audio quality—even outside of the rather narrow and highly specialized group often referred to as audiophiles—there has been a trend towards dedicated external audio units. Musicians are getting devices such as Universal Audio’s Apollo to record, mix, and master music professionally. Connoisseurs of music are buying external DACs and amplifiers to improve sound quality, such as Meridian’s rather great Explorer2. What these devices have in common is that they are external and that they connect to the computer through USB, FireWire, or Thunderbolt.

Thus, by removing the digital-to-analogue conversion and the amplification from the device, Apple actually opens the door to a new breed of “integrated headphones” where the DAC, the amplifier, and the headphones themselves can be matched to perfection by the maker.

Make no mistake, I’m convinced that this will result in an explosion of rather terrible integrated headphones over the coming years, but I’m also convinced that serious companies can use this to their advantage and come up with well-balanced, well thought out, and of course, well-sounding combos. For the benefit of mankind. Well.


Unapologetic Interfaces

A while back, I wrote a short piece about something I called 'apologetic interfaces': a new class of interfaces that pay attention to what their users are up to and what they're there to achieve, and that seek to minimize the hassle of unnecessary application maintenance—updates, new-feature tutorials, notifications, invitations to rate the app, and all that other stuff that drives us mad when all we're trying to do is get things done.

I firmly believe that apologetic interfaces are the future. We need interfaces that realize that most of them are just that: interfaces. They are conceptually, factually, and by definition in between us and our work. We need interfaces that realize that when I open up Microsoft Word, I do so because I have a sudden need to write something down. Unless there's an earthquake, tsunami, major conflict, or a sudden outbreak of ebola in my area—I don't want to be bothered with whatever-it-is. Just open the &$%&@# document so I can start to type. Please.

The state of the art, unfortunately, is still quite the opposite—unapologetic interfaces rule, across platforms and devices. Notifications, update requests, badge icons, embedded tutorials, rating invitations, 'did you know?' prompts, and so on indefinitely, are still doing all they can to divert users' attention from whatever they were trying to get done, while paying absolutely no attention whatsoever to what the user is doing at the time.

Here's a very telling example. Yes, it's the Wild West. Yes, it needs to change.

Can you hear the difference?

I like Quora, the site where people pose real questions and then interact over actually interesting and often meaningful issues by sharing knowledge and opinions. No selfies, no photos of someone's cute grandkid with ice cream all around his mouth, and no amazing sunsets. Unheard of in the social networking sphere -- what an odd idea!

Here, I stumbled on this question: Can you hear a difference in quality between Spotify's 320 Kbps stream and Tidal's lossless audio stream?

My answer: Yes, absolutely.

This ties in nicely with a long-standing interest (or theory, if you like) of mine: the digitalization of analogue things at first tends to make the experience worse, inferior to its analogue counterpart. By the time the digital technology matures and is capable of delivering a similar or even better experience, people have grown used to the inferior experience and don't see the point.

In my view, this is exactly what's happening to Tidal's HiFi streaming right now. 

I would argue that if you can't hear the difference between, say, Spotify and Tidal's FLAC streaming, you might have one or more of these three problems -- all of which have to do with the dynamics of the music:

1. Your equipment isn't good enough. If you're listening to your music through your iPhone earbuds, everything tends to sound the same. The argument here, though, is that you don't need a $200k amplifier to tell the difference. Just get a pair of decent headphones from AKG, Sennheiser, or others. You don't need to go crazy -- a pair such as the Sennheiser Momentum 2 will work really well plugged straight into your smartphone or computer (and even a little better with a cheap USB DAC). I suggest you take your smartphone, walk into a hi-fi shop, and try your favorite music with some good headphones. I personally prefer AKG, Sennheiser, Grado, and Audeze, but there are many other good brands too.

2. The music you're listening to isn't recorded and/or produced to sound any better on hi-fi equipment. A lot of today's music is mixed to sound good on cheap headphones. A lot of today's music is also heavily compressed -- and I mean HEAVILY compressed. Genre is important too. The difference is much easier to spot in acoustic and low-key music than in, say, techno. But even there the quality difference is audible; just listen for fragile sounds such as open hi-hats or ride cymbals. Also, pay attention to details. I bet you can hear things you haven't heard before, such as small imperfections: the guitarist's hands moving over the strings to change chords, the singer's breathing, the drummer fiddling with the kit. There's a rich warmth to high-definition music that's hard to explain -- but you'll hear and feel it immediately.

Try listening to some songs with good dynamic range -- then it's quite easy to tell the difference. For instance (these are Tidal links):
- Everybody Hurts, REM
- A Rainy Night in Soho, the Pogues
- Revelation Big Sur, Red House Painters
- See the Sky About to Rain (from Live at Massey Hall), Neil Young
- Bullet in the Head, Rage Against the Machine

3. You're not used to high-quality music. This last point is a bit controversial, but I firmly believe that many people have never really listened to music reproduced by a high-quality hi-fi system, so they've grown used to the sound of inferior mp3s and think that's how their favorite songs should sound. This is a shame!

The difference between listening to, say, Red House Painters' Songs for a Blue Guitar on hi-fi equipment with a great source versus listening to it through my iPhone earbuds is quite honestly like night and day. It's still a great record either way, just as watching a good hockey game on a black-and-white TV is still a good game -- but it's such an endlessly richer experience live.

Apologetic Interfaces

The MacOS X menu for Bose's SoundTouch music system comes across as almost a little heartbreaking. It is as if it doesn't really believe in itself. Why else would 'Quit' be the first menu option? 

The first steps toward a new breed of "Apologetic Interfaces?"

Picture the creative meeting where this was decided. A few people around a conference table, half-empty coffee cups, a few post-it notes. "Let's see, what would The User want to do..." Silence. After a while, from the other side of the table, "Well... it would be... ehhh.. some users might... like to... ehm...shut down the, ehm, quit." "Ah, excellent! Let's write that down on the whiteboard!" "Anything else?"

This example aside, maybe it's not such a bad thing that applications step it down a little. While it's natural that an application wants to tell the user as soon as possible that, "Hey! Guess what, there's a new version of me out!", the problem is that, with all the apps you have on your computer, there's just too much distracting shouting going on all the time from a lot of different places.

If I open up Word, I do so because I either want to write something down or read a document. Unless there's a minor crisis (let's say an earthquake) I DO NOT want Office Update to take focus away from the document that's forming in my head and inform me of "critical update #1.6.28343".

Similarly, if I open up say VLC, especially while giving a talk, I do so because I want to show my audience a lovely little video clip, not give them the breaking news that version 2.1.5 has improved the reliability of MKV and MAD file playback.

An old-school, Unapologetic Interface, but not without finesse

Mindlessly checking for an update the moment the user opens the application is just bad, thoughtless design. You open an application because you want to do something with it, right now. There are so many ways an update could be handled more nicely, more humanely, without getting in the way of the user's intent.

A simple solution would be to gather the information and download the update in the background, then hold off the notification until the user either decides to quit the application or becomes idle. Or update automatically in the background. Or let the OS handle it, if that's an option. Apple knew about this problem too; that's why they implemented the App Store and the Notification Center. That's all pretty great, but then again, some of the most notorious apps aren't using them. Looking at you, Microsoft and Adobe.

One of the finesse-less, usual suspects

If you look closely at the picture above, you'll see that VLC comes with a solution to this that isn't without finesse. By clicking that small (and offset) checkbox, you can choose to automatically download and install updates in the future (given that you then click 'Install Update'). That's actually a pretty elegant design idea.

Yet there are two problems with this approach. First, I doubt many people actually notice this option. With this kind of interface, you get drawn to the "Install Update" button. Or, if you're in fact giving a presentation, you just click whatever button you can as fast as possible to get the annoying window out of sight. Second, a more general concern with automatic updates is that newer isn't always better. If an app goes from, say, version 1.4.23 to 2.0, it may actually be wise to stick with the old version for a while and let them figure out the bugs before you update. Or you simply don't like the new look and feel. Or, as is getting increasingly common, version 2.0 really means the same functionality as version 1, but now with ads all over the place.

So when it comes to software updates, I'm leaning more and more towards update-as-you-quit as the more humane approach, with minor, bug-fix updates automatically installed in the background.
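To make this concrete, here's a minimal sketch of what such an update-as-you-quit policy could look like. All names here (`UpdatePolicy`, `on_update_available`, and so on) are hypothetical illustrations of the idea, not the API of any real updater:

```python
class UpdatePolicy:
    """Sketch of an update-as-you-quit policy: fetch updates silently
    in the background, apply minor (bug-fix) updates automatically,
    and defer major updates until the user quits the application."""

    def __init__(self, current_version):
        self.current_version = current_version
        self.pending_major = None  # major update waiting for quit

    def on_update_available(self, new_version, download):
        # Download in the background either way; never interrupt the user.
        download()
        if self._is_minor(new_version):
            # Bug-fix release: install silently, no prompt at all.
            self._install(new_version)
        else:
            # Major release: hold the prompt until the user is done.
            self.pending_major = new_version

    def on_quit(self):
        # The only moment we surface a prompt: the user is finished working.
        if self.pending_major:
            return f"Version {self.pending_major} is ready. Install now?"
        return None

    def _is_minor(self, new_version):
        # Same major.minor, different patch number => bug-fix update.
        return self.current_version.split(".")[:2] == new_version.split(".")[:2]

    def _install(self, version):
        self.current_version = version
```

The point of the sketch is simply that the policy decision (interrupt now vs. later) is separated from the mechanics of downloading, so the prompt can be deferred to the one moment it doesn't get in the way.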

In light of this, maybe SoundTouch's approach could be seen as the humble beginnings of an entirely new breed of interfaces, "apologetic interfaces", characterized by low self-esteem and an awareness of their propensity to annoy.

"I'm so sorry for wasting your precious time and valuable screen real estate, Dear User, but before we part I would like to let you know that there is a new me for you. No pressure, just letting you know." 

Come to think of it, too much of that could become annoying as well.

The Fishtank: An Agitational Artifact

For our client ABB Corporate Research, we created a series of alternative designs to contrast with the traditional user interface and interaction design of control systems for industrial applications. The Fishtank was one incarnation in this series: an interactive design exploration in the area of industrial control systems.

Conventional industrial control systems, such as ABB’s System 800xA, present the user with a panel view where machines, faceplates, sensor data, labels, etc. are organized and visualized side by side in a two-dimensional space. This design idea echoes the way control panels have always been designed, evolving from a non-digital era when each button, lever, label, and output device was physical and thus needed physical real estate and a fixed location on the panel. Over the years, “the panel” has become a very strong conceptual frame for thinking about control room systems.

This remains true to this day, when—at least in theory—a digitalized, computer-based control system could have any kind of user interface. Obviously, the 2D panel has not stuck around because it is a bad idea—on the contrary, there are many benefits to separating different things in two dimensions and giving them a fixed physical location in space.

However, in this project, we wanted to explore the design space of "the possible" in this area by creating a series of radically different designs. The purpose was not necessarily that the results would aim to replace the traditional control room panel, but rather that they in different ways could come to complement, be different from, and to some extent challenge the panel as a design idea.

A typical problem in modern control rooms is the ever-expanding number of sensors that call for the operators’ attention. Relying on the quasi-physical panel as a design idea means that the 2D view of a factory keeps getting larger and larger. To deal with this, you either add more screens to the control room or let each operator see only a small part of the entire factory on their personal screen.

As an alternative to this, we asked: would it be possible to design an interface in which the panel for the entire factory could fit on only one screen? 

The result of this experiment is the Fishtank prototype. It is an example of what we call an “agitational artifact”: an interactive artifact ideated, designed, and prototyped to be used with real data in real time—but whose main purpose is to expose people to a hands-on alternative to what they are used to; something with enough of a critical edge to shake them up a little bit, to make them think.

The Fishtank presents the user with a three-dimensional space. In this 3D space, the entire factory resides in the form of all its faceplates. A faceplate can for instance be a representation of a water tank in the form of the name and ID of the tank and its corresponding sensor data, such as water level, temperature of the water, etc.

The three dimensions in the Fishtank, i.e. X, Y, and Z space, are conceptual dimensions that can be controlled by the user. Hence, the user can decide what each of the three dimensions should represent.

For instance, the Y dimension can be made to represent the number of alarms a particular faceplate has; the X axis can be made to represent time since the last alarm; and the Z axis how far from the ideal or threshold each faceplate’s main value is.

But these conceptual dimensions can be changed easily and in real time, allowing the user to interact and play around with the factory—to simply monitor it, or to make certain parts stand out.
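To illustrate the idea (this is not the actual Fishtank implementation), a conceptual dimension can be modeled as a user-selectable function from a faceplate's data to a number; remapping an axis at runtime then just means swapping a function. All names and fields below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Faceplate:
    # Illustrative faceplate record: a name plus a few sensor-derived metrics.
    name: str
    alarm_count: int
    hours_since_last_alarm: float
    deviation_from_threshold: float

# Each conceptual dimension maps a faceplate to a coordinate value.
DIMENSIONS = {
    "alarms": lambda f: f.alarm_count,
    "recency": lambda f: f.hours_since_last_alarm,
    "deviation": lambda f: f.deviation_from_threshold,
}

def position(faceplate, x_dim="recency", y_dim="alarms", z_dim="deviation"):
    """Compute a faceplate's (x, y, z) position in the tank from the
    currently selected conceptual dimensions."""
    return (
        DIMENSIONS[x_dim](faceplate),
        DIMENSIONS[y_dim](faceplate),
        DIMENSIONS[z_dim](faceplate),
    )
```

When the user switches what an axis represents, every faceplate's position is simply recomputed, so the whole factory reorganizes itself in the tank in real time.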

Unlike a traditional 2D design, the Fishtank uses movement, interaction, and conceptual dimensions—not fixed location in physical space—as the main sense-making vehicle for the user. As such, it is radically different from the way control room software such as ABB’s 800xA has evolved, and it provides the user with a very different, engaging, and fun user experience.

While an interactive artifact should be experienced hands-on, the video below gives you an idea of what using this system is like.


I'm slowly working on this part of the site. Stay tuned... If this was 1998 or so, I would put a black-and-yellow animated gif here, showing a guy with a shovel. But that's not going to happen. Here's a video with a shark from our office instead. 

Work Music: the Case of the Ramada Inn

Ramada Inn by Neil Young is an epic song to carry out work to -- or, as I'm sure my English teacher would once have had me put it, "an epic song to which to carry out work". Even the folks at Rolling Stone magazine seem to like it; it's no. 5 on their list of the best songs of 2012.

In my experience, different kinds of music tend to be good for doing different things to, and then obviously some people like classical music, some like jazz, some techno, while others, for no obvious reason, are into Bieber, Cyrus, Timberlake, and the likes of them. Yet regardless of what you like, this song is just a little different for our specific purpose. Let me explain why:

First, it's an old-school, great, great Neil Young song with his signature high-gain open chords and 5-notes-or-so solos. What more do you need?

Second, it's almost 17 minutes long. This is key. It means you can shut down your email, put the song on, dive right into almost any task, and often finish it before the song ends. It's like a mini-sprint for yourself. I'm trying to squeeze in at least one such session a day. Among other things, this actually got me to finally redesign my homepage. Let's just say that the last time I managed that, we didn't have iPhones.

 "Every morning comes the sun."