Google Cards: UX Design or Information Design?

It’s been half a year since Google released Material Design. I still see it as a great strategy for giving designers and users a shared vocabulary for understanding how UIs work. Within that design framework, cards caught my attention from the start. I keep wondering: are cards about UX, or are they really about information design?

Google Now
Google Now’s available cards

Probably the first card I saw was the weather card in a web browser, the one that appears when you google the weather. However, the first time I really paid attention to a card was on a plane. I remember seeing clean, well-organized information about my flight in a little box on my phone. Google knew about my flight and delivered enough information for me to be aware of my flight status. Honestly, I got very excited. The first thing that came to my mind was: this is information design!

If we think of physical cards, Google’s cards seem limited in terms of interaction. In many Google interfaces, cards don’t flip or move; static information is mostly presented on one face of the card. However, no fancy interactions are necessary to make a card effective. A card’s effectiveness relies on the quality of the information it presents. In that regard, knowing how to design the content, the information itself, becomes important. Visual design principles like hierarchy, contrast, and rhythm are necessary for the synthesis of information. The UX, therefore, becomes a matter of information design. We designers need to remember that the how and why of composition, expressed through the many skills and theories related to design (including rhetoric), matter for the design of technology.


Schematic UI and Car Controls

In a previous post I commented on some examples of flat design for user interfaces (UI) and on the prominent use of the circle as the basic widget component in such designs. Later, I found this novel user interface design for car controls proposed by Matthaeus Krenn.

Video of the UI in action:

From my perspective, this is a great example of how Information Design and HCI/UX Design overlap. In his proposal, Krenn attempts to integrate gesture-based interaction with a low-cognitive-load interface. As we can observe from the video and images below, he sought to visually synthesize the information and make it as unintrusive as possible for the driving experience.

Schematic UI for car control by Matthaeus Krenn

Schematic UI of car control by Matthaeus Krenn

Schematic UI of car control by Matthaeus Krenn

Schematic UI of car control by Matthaeus Krenn

As we observe from his proposal, the circle is the basic visual unit of this interface. Because my interest lies not in flat design but in finding new ways to represent information within UIs, I want to better understand the design rationale behind these interfaces and to what extent they participate in the paradigmatic shift regarding interaction. Observing Krenn’s proposal in conjunction with my previous post, I have the following comments:

  1. The circle seems to be the best shape to represent a manipulable object, within a flat screen, when considering gesture-based interaction. As I mentioned before, I conjecture that our experiences manipulating spheroids since birth influence this type of design rationale. That is, to connect the fingers, something physical and three-dimensional, with something abstract and flat, we still need to refer to something in the real world: the metaphorical reference.
  2. The effectiveness of the circle as a UI element relies on its multidimensionality. The circle not only properly manages time and space thanks to its geometrical nature; it also creates a connection between the three-dimensional world and the flat screen. Furthermore, it provides a multidimensional means of interaction and information representation in UIs. For instance, in Krenn’s proposal I noted at least four dimensions:
    1. Size (diameter). This is clearly a variable that represents quantity, which goes from zero—the absence of the widget—to the maximum—as wide as we can extend our fingers on the screen.
    2. Tilt. As I observe it, the key aspect of this variable is having a reference point. When the user decides to tilt the widget, a cognitive model of the range is created in the user’s mind at that moment. Yet we may ask whether this adds complexity to the interaction. In this regard, I assume that tilt as an interactive variable is suitable for qualitative ranges, or ranges that don’t require much precision. Tilting shouldn’t represent a hard or lengthy decision for the user, especially in contexts of use where the user is saturated by diverse information sources, as may occur with car controls.
    3. X-value. This variable, which represents values along the horizontal axis, determines, in conjunction with the y-value (vertical axis), the center of the circle and hence the current position (x, y) of the widget. What Krenn shows us is the convenience of decomposing the center into two independent variables. He employs only one axis, but the idea of showing the scale at the side of the screen provides a mental reference for using either one axis or both. From Krenn’s video, we can note that setting the origin point (0, 0) is critical in terms of both interface and interaction. Krenn proposes a good approach by setting this point relative to wherever the user touches the screen at any moment.
    4. Y-value. As with the x-value, the vertical axis can be used to represent another quantity. In this way the user can set the value of two variables at the same time. Nevertheless, as I’ve experienced with Photoshop for iOS, it’s frustrating to deal with different quantities due to the sensitivity of the screen (or lack thereof) under a finger. As Krenn comments in his video, the design should take this issue into account and validate the interactions. One idea that came to my mind is snapping to values that make sense. In Krenn’s proposal, employing the vertical axis only, in addition to the rationale behind increments/decrements according to the function and the velocity of the fingers, helps validate the interactions in this UI.
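As a thought experiment, the four dimensions above can be sketched as a small mapping from two touch points to widget parameters. This is a hypothetical sketch, not Krenn’s actual implementation; all names are mine, and `snap` illustrates the snapping-to-sensible-values idea from point 4.

```python
import math

def circle_widget_state(p1, p2):
    """Derive the four widget dimensions discussed above from two
    touch points (x, y). Hypothetical sketch; names are assumptions."""
    (x1, y1), (x2, y2) = p1, p2
    size = math.hypot(x2 - x1, y2 - y1)                # 1. diameter (finger spread)
    tilt = math.degrees(math.atan2(y2 - y1, x2 - x1))  # 2. tilt relative to horizontal
    cx = (x1 + x2) / 2                                 # 3. x-value (center)
    cy = (y1 + y2) / 2                                 # 4. y-value (center)
    return {"size": size, "tilt": tilt, "x": cx, "y": cy}

def snap(value, step):
    """Snap a continuous value to the nearest multiple of `step`,
    easing the precision problem of finger input on a screen."""
    return round(value / step) * step
```

Setting the origin relative to the first touch, as Krenn does, would simply mean subtracting that touch’s coordinates before feeding points into a mapping like this one.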

I get excited by design proposals like Krenn’s. As I stated before, I think Information Design plays a key role in the shift of any interactive paradigm. As designers, we should be conscious that not only do we interact with products and designed things from the moment we wake up, but we also consume and interact with information by means of our senses. Because of this, it is difficult to see the actual boundaries between information and interface. Hence, representing information in a usable fashion and making it part of an interactive aesthetic experience is really hard. Yet to me it represents a critical aspect that HCI/UX designers should pay more attention to, recognizing the implications of a matter where form and function cannot practically be detached.

A question to you for reflection purposes:
How would you visually/sensorially redesign all the information you’ve consumed/interacted with since you woke up this morning?



We cannot escape from the metaphor

Omar Sosa Tzec sketchnotes - Notes from one of my classes

My personal taste for taking notes is based on a regular sketchbook, a needle-point gel pen, and a brush-tip marker for shading. Since seeing one of my colleagues using his iPad for taking notes, I’ve wondered how convenient it is to carry your information in a single artifact, and how natural the sensation feels.

Omar Sosa Tzec sketchnotes - Notes from one of my classes
Example of notes I’ve taken in one of my classes

I discovered that paper is the app for creating sketchbooks à la Moleskine on the iPad. Later, I saw that pencil, a stylus made to work with this app, was released. In fact, it reminded me of some of the thick sketching pencils I’ve owned. This is the promotional video of both working together:

I should remark that I have no intention of making any type of advertisement in this post. However, since the app is called paper and the stylus pencil, I couldn’t avoid having some quick thoughts in relation to design and HCI:

  • The metaphor is a great way of naming/advertising a product. Calling an app paper and a piece of technology pencil gives you a pretty good idea of what to expect and how to interact with them.
  • Since technology is constantly evolving, it’s easier to refer to concepts already implanted in our minds. Metaphors operate as smooth means for coming up with innovative designs.
  • However, translating something we already have or use into a new technological form is easier if the metaphor doesn’t lose meaning in the translation. I think this is the case with paper and pencil.
  • Metaphor-oriented design for HCI involves the conjunction of other designs (or other modes of design thinking). For instance, designing pencil involves thinking as an industrial designer (in terms of materials and ergonomics), and designing paper involves thinking as a graphic designer (in terms of the different visual signs within the interface).
  • Metaphor-oriented design for HCI allows designers to bring in new styles of interaction, and hence more metonymies. For instance, paper has an interesting undo feature: moving (two) fingers in a counterclockwise fashion to rewind within the current sketch.
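To make that last metonymy concrete, here is a minimal, hypothetical sketch of how an app might measure the direction of such a two-finger rotation. The function name and the math convention (positive angles are counterclockwise) are my assumptions, not paper’s actual implementation; on a screen where y grows downward, the sign would flip.

```python
import math

def rotation_delta(prev, curr):
    """Signed change, frame to frame, in the angle of the line between
    two fingers (radians; positive = counterclockwise in math convention).
    `prev` and `curr` are pairs of (x, y) touch points."""
    (a1, b1), (a2, b2) = prev
    (c1, d1), (c2, d2) = curr
    before = math.atan2(b2 - b1, a2 - a1)
    after = math.atan2(d2 - d1, c2 - c1)
    delta = after - before
    # Wrap to (-pi, pi] so a small rotation never reads as a big one.
    while delta <= -math.pi:
        delta += 2 * math.pi
    while delta > math.pi:
        delta -= 2 * math.pi
    return delta
```

An app could accumulate these deltas and start rewinding once the total rotation passes some threshold, with the rewind speed tied to how fast the fingers turn.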

Since current HCI designs seem more concerned with creating and enhancing people’s everyday lives than with accomplishing systematic tasks, I find it hard to imagine getting rid of metaphors and metonymies for a while. They represent a bridge between what we perceive as technological and not technological. I wonder, then, how current metaphors in combination with new styles of interaction will settle the basis for the future metaphors and metonymies of the technology we haven’t designed yet.