One interesting thing about social media is that users can notice behavioral trends about themselves. We can see how our timelines are affected by major events such as the Oscars ceremony or the World Cup. Not only do we get retweets and shares, but new content is also generated, either unpublished or recycled. Pictures, videos, and memes are everywhere within social media. However, like any organism, information is born, grows, and eventually fades out.
Do you remember how popular selfies got after Ellen DeGeneres’ selfie at the Oscars? Selfies had long been part of Facebook, but they definitely surged after her picture. Then selfies started to become annoying. It seems that Instagram and the use of its filters have gone in the same direction. We can also add to the list the whining through social media, or the flood of cute cat pictures. On the other hand, it seems now that one function of social media is complementing Google, since users are now asking about things in order to inform their decisions. We can also note that social media is becoming an informal marketplace. Therefore, we can see social media as an interface in which multiple contexts affect one another through the generation, modification, exchange, propagation, and eradication of information. Of course, all these actions have an impact back on those contexts. They affect the real world.
Social media and the real world together affect the former, at least in terms of content and the usage of such content. Trends are a consequence of this user-driven information management. And users also kill those trends eventually, regardless of the actual agency they are supposed to have. Yet social media, by means of the current mass of content in each of these contexts, dictates what is in fashion, and eventually when such content will no longer be in fashion. It’s just like the comic strip by The Oatmeal shown below. No one likes selfies (now) (?).
What does this mean, and why do we need to care? There’s no simple answer whatsoever. That’s why many people try to understand the related phenomena from different perspectives, including HCI and Design. However, I really enjoy the idea that information is alive. It’s somehow organic. We can see how we apparently affect social media content, and how social media content affects us, and hence the real world. The trends have rhetorical implications for us. The Facebook experienced in the USA this 4th of July, because of Independence Day, won’t be the same as the Facebook experienced in Brazil while the World Cup goes on. Our understanding of the world, what shapes our culture, and what modifies our values are all subject to this creation and death of information. And still, I cannot avoid asking myself: what’s our role, as users, in this phenomenon?
If you want to know how this phenomenon could be related to design, or user experience design, my colleague Azadeh Nematzadeh and I recently presented a paper at the Design Research Society Conference 2014 about some theoretical concepts through which we try to explain this connection. Please give the paper a look. Thanks!
From my perspective, this is a great example of how Information Design and HCI/UX Design overlap. In his proposal, Krenn attempts to integrate gesture-based interaction with a low-cognitive-load interface. As we can observe from the video and images below, he sought to visually synthesize the information and make it as unintrusive as possible for the driving experience.
As we observe from his proposal, the circle is the basic visual unit for this interface. Because of my interest not in flat design but in finding new ways to represent information within UIs, I want to better understand the design rationale behind these UIs and to what extent they participate in the paradigmatic shift regarding interaction. By observing Krenn’s proposal in conjunction with my previous post, I have the following comments:
The circle seems to be the best shape to represent a manipulable object—within a flat screen—when considering gesture-based interaction. As I mentioned before, I conjecture that our experiences manipulating spheroids since birth influence this type of design rationale. That is, to connect the fingers—something physical and three-dimensional—with something abstract and flat, we still need to refer to something in the real world. That is, the metaphorical reference.
The effectiveness of the circle as a UI element relies on its multidimensionality. The circle not only properly manages time and space due to its geometrical nature; it also creates a connection between the three-dimensional world and flatland. Furthermore, it provides a multidimensional means of interaction and information representation in the case of UIs. For instance, in Krenn’s proposal I noted at least four dimensions:
Size (diameter). This is clearly a variable that represents quantity, which goes from zero—the absence of the widget—to a maximum—as wide as we can spread our fingers on the screen.
Tilt. As I observe, the key aspect of this variable is having a reference point. When the user decides to tilt the widget, a cognitive model of the range is created in the user’s mind at that moment. Yet we may ask whether this adds complexity to the interaction. In this regard, I assume that tilt as an interactive variable is suitable for qualitative ranges, or ranges that are not required to be that precise. We don’t need tilting to represent a hard or long decision for the user, especially in contexts of use where the user is saturated by diverse information sources—as may occur with car controls.
X-value. This variable—which represents values along the horizontal axis—in conjunction with the y-value—the vertical axis—determines the center of the circle and hence the current position (x, y) of the widget. What Krenn shows us is the convenience of decomposing the center into two independent variables. He employs only one axis, but the idea of showing the scale at the side of the screen provides a mental reference for using either one axis or both. From Krenn’s video, we can note that setting the origin point (0, 0) is critical in terms of both interface and interaction. Krenn proposes a good approach by setting this point relative to wherever the user touches the screen at any moment.
Y-value. As with the x-value, the vertical axis can be used to represent another quantity. In this way the user can set the values of two variables at the same time. Nevertheless, as I’ve experienced with Photoshop for iOS, it’s frustrating to deal with different quantities due to the sensitivity of the screen (or lack thereof) and a finger. As Krenn comments in his video, the design should take this issue into account and validate the interactions. One idea that came to my mind is snapping to values that make sense. In Krenn’s proposal, the use of the vertical axis only, together with the rationale behind increments/decrements according to the function and velocity of the fingers, contributes to validating the interactions in this UI.
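To make these four dimensions concrete, here is a minimal sketch of how a two-finger touch could be decomposed into the widget’s size, tilt, and center, plus the value-snapping idea I mentioned. This is my own illustration of the concept, not Krenn’s actual implementation; the function names and the snapping step are hypothetical.

```python
import math

def circle_dimensions(p1, p2):
    """Derive the four widget dimensions from two finger positions (x, y):
    size (diameter), tilt (angle of the finger axis), and the center's
    x-value and y-value."""
    (x1, y1), (x2, y2) = p1, p2
    size = math.hypot(x2 - x1, y2 - y1)                 # diameter of the circle
    tilt = math.degrees(math.atan2(y2 - y1, x2 - x1))   # tilt of the finger axis
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2               # center = midpoint of fingers
    return size, tilt, cx, cy

def snap(value, step):
    """Snap a noisy continuous value to the nearest sensible increment,
    mitigating the finger/screen-sensitivity issue discussed above."""
    return round(value / step) * step
```

Decomposing the gesture this way also makes the relative-origin idea easy to express: the widget’s center is simply the midpoint of wherever the fingers land, rather than a fixed point on the screen.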
I get excited by observing design proposals like Krenn’s. As I stated before, I think that Information Design plays a key role in the shift of any interactive paradigm. As designers, we should be conscious that we not only interact with products/designs from the moment we wake up, but we also consume and interact with information by means of our senses. Because of this, I remark that it is difficult to see the actual boundaries between information and interface. Hence, representing information in a usable fashion and making it part of an interactive aesthetic experience is really hard. Yet to me it represents a critical aspect that HCI/UX designers should pay more attention to, recognizing the implications of a matter where form and function cannot practically be detached.
A question to you for reflection purposes:
How would you visually/sensorially redesign all the information you’ve consumed/interacted with since you woke up this morning?
“Critical judgments typically have two key features: they are defended with arguments (comprising both verifiable evidence and reasoning), and they assert that others should agree with them (which does not imply the empirical fact that others necessarily do).”
Bardzell, J., Bardzell, S., and Stolterman, E. 2014. Reading Critical Designs: Supporting Reasoned Interpretations of Critical Design. In Proc. of CHI 2014. ACM.
What’s the difference between Science and Design? What about Art and Craft? Is design about something concrete (an object), a process, a line of thought? Further, by taking User Experience (UX) and Human-Computer Interaction (HCI) as knowledge disciplines, what’s the relation of UX to Science? Does UX belong to Craft or Art? What can we tell about HCI? These are very difficult questions to answer, and they require taking a philosophical stance—at least, I assume—in order to create arguments and hence generate discussion. So, what’s the point of this post anyway? Although I don’t think I have the answers to all these questions, I would like to share my perspective on how these big words relate to each other by means of the following schema.
In regard to the description of this relational schema, I would like to start by commenting on why I took the Art/Craft continuum. In 2008, I wrote this idea in Spanish:
El diseño implica arte pero el arte no implica necesariamente diseño.
La ciencia implica diseño pero diseñar no implica necesariamente hacer ciencia.
Aun así, la ciencia implica hacer arte.
The literal translation is as follows:
Design implies Art, but Art does not necessarily imply Design.
Science implies Design, but to design does not necessarily imply doing Science.
Yet Science implies doing Art.
The last part, “Yet Science implies doing Art”, seems to make no sense in English. A more adequate translation could be:
Yet Science entails Craft.
My point here is that “doing Science” in real life is not as rigid as it looks on paper. To me, it involves aspects of both Craft and Design. Further, this phrase indicates the underlying implications of using a particular language at the moment of reflecting and philosophizing. Regardless, the selection of this continuum is somehow influenced by Howard Risatti’s perspective when comparing Art and Craft—although I don’t share his vision regarding Craft and Design in his “Theory of Craft”.
For the case of Science and Design, I consider the relation between these two as discussed by Nigel Cross and by Harold Nelson & Erik Stolterman. As I tried to embed in my phrase above, I hold that it turns out to be difficult to outline strict boundaries in the relation of Science and Design. Everything depends on the type of definitions, the questions, and the place where those questions are asked.
The third continuum entails the consequences of Art/Craft and Science/Design in relation to the real world. Thus, I consider—at least—the range that goes from abstraction to actuality. That is, from ideas to things that people can interact with. This continuum is theoretically related to ideas such as the “ultimate particular” and “design inquiry”—as a compound of the inquiries into the real, the ideal, and the true—by Nelson & Stolterman.
The relational schema presented above doesn’t have the intention of being prescriptive. It corresponds to my personal viewpoint and an attempt to formulate my position as an HCI/UX researcher regarding the type of research/discourse generated in my immediate context. That is, among the faculty and colleagues at Indiana University Bloomington. Further, since I have an interest in schemas/diagrams/sketches, I generated it as an example of how schemas may function as a means for argumentation.
My purpose here is for you to take this schema and tear it up. Make it your own.
However, before you go and destroy this relational schema, let me show how it helped me sketch answers to the aforementioned questions.
UX and HCI in the relational space
As we can observe from the schema above, the relational space is formed by three axes, each of them representing one of the continuums described above. I perceive User Experience (UX) as a highly design-oriented discipline, focused on concrete outcomes, and with a strong flavor of craft in its practice. I think these qualities differentiate it from other approaches to interactive artifacts and systems such as Software Engineering, ICT, or Computer Science.
On the other hand, I locate Human-Computer Interaction (HCI) in a different place within the relational space. I perceive HCI as a more scientific discipline focused on concrete outcomes, yet with certain nuances of craft in its practice. I remark that I’m talking about a general or traditional perspective of HCI. In other words, a practice—and also its research—more focused on the first and second waves of HCI.
I consider that HCI influences UX more than the other way around. Although HCI provides foundations and methods to UX, the latter seems to lack a comparable impact on HCI. Of course, this discussion could be very extensive and profound. So far, I mark this influence with an arrow, just to indicate that HCI may entail a more traditional approach whereas UX corresponds to the designerly approach.
From my current perspective, UX influences Design Theory (DT) since it provides the input to start theorizing about design. The consequences of UX are actual design cases. The moment (design) researchers start analyzing those cases, a universe of study is created. By picking one planet, system, or galaxy of such a universe, (design) researchers cannot avoid meeting a philosophical situation, since there’s an intrinsic relation between the researcher and the piece selected for study. And just as we may observe from the last sentences, the attempt to understand becomes a matter of (design) philosophy.
So far, we’ve observed from above the relations of HCI→UX and UX→DT. The question now is, in terms of DT and HCI, which discipline is more likely to influence or affect the other? I want to remark that it’s not my intention to be prescriptive. Based on my experience, I think that DT→HCI marks the relation within the type of research I’m currently involved in. That is, DT provides HCI with theoretical foundations, which are in turn employed to generate frameworks.
Not necessarily connected with the latter, (design) methods are located very close to HCI along the path of this connection. Nowadays, more than thinking about their degree of applicability, I think that the so-called design methods could work without a deep—and hence philosophical—understanding of DT. I conjectured this based on my early experience with HCI, particularly as an undergraduate and later when getting involved with HCI researchers.
Research as an act of reconciliation
As I mentioned above, the relational schema has the purpose of helping me figure out my position as an HCI/UX researcher. The relational schema is limited for answering such a question; however, it provides a means of approximation toward that goal.
I notice that, rather than talking about a precise position as (a possible future) researcher within the relational space, I can better reflect on the interrelation of UX-HCI-DT to understand which research field I can work in. For instance, in the schema below, I picture a research field with strong emphasis on the actuality and Art dimensions—although the connection with DT will always be there. Any change in this membrane represents a different framing of what to pay attention to as an HCI/UX researcher.
There are as many membrane variations as there are HCI/UX researchers. In my case, I know that my academic/professional past as a designer and my current formation as a scholar influence how I frame the research field I’d like to work in when I reach the dissertation stage. In this sense, I remark the relevance of the context. My advisor Marty Siegel, my mentor Erik Stolterman, the faculty, my PhD colleagues from all the tracks, and the master’s students from the HCI/d program have a huge impact on shaping my particular membrane.
Questions come along more often than answers. I guess it’s a natural consequence of one’s formation as a scholar. Yet I look forward to creating many schemas that help me understand this journey better. 🙂
My personal taste in note-taking is based on a regular sketchbook, a needle-point gel pen, and a brush-tip marker for shading. Since seeing one of my colleagues using his iPad for taking notes, I’ve wondered how convenient it is to carry your information in a single artifact, and how natural the sensation is.
I discovered that paper is the app for creating sketchbooks à la Moleskine on the iPad. Further, I saw that pencil, a stylus made to work with this app, was released. In fact, it reminded me of some of the thick sketching pencils I’ve had. This is the promotional video of both working together:
I should remark that I have no intention of making any type of advertisement in this post. However, since the app is called paper and the stylus pencil, I couldn’t avoid having some quick thoughts in relation to design and HCI:
The metaphor is a great way of naming/advertising a product. Calling an app paper and a piece of technology pencil gives you a pretty good idea of what to expect and how to interact with it.
Since technology is constantly evolving, it’s easier to refer to concepts we already have implanted in our minds. Metaphors operate as smooth means for coming up with innovative designs.
However, translating something that we already have/use into a new technological form is easier if the metaphor doesn’t lose meaning in the translation. I think this is the case with paper and pencil.
Metaphor-oriented design for HCI involves the conjunction of other designs (or other ways of design thinking). For instance, designing pencil involves thinking as an industrial designer (in terms of materials and ergonomics), and paper involves thinking as a graphic designer (in terms of the different visual signs within the interface).
Metaphor-oriented design for HCI allows us to bring in new styles of interaction, and hence more metonymies. For instance, paper has an interesting undo feature: moving (two) fingers in a counterclockwise fashion rewinds within the current sketch.
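As a rough illustration of how such a rotation gesture could be detected, here is a hypothetical helper—my own sketch, not paper’s actual code—that accumulates the signed rotation of a finger path around a pivot; the rewind amount could then be made proportional to the accumulated angle:

```python
import math

def rotation_angle(points, pivot):
    """Accumulate the signed rotation (in degrees) of a finger path around
    a pivot point. Positive means counterclockwise in a y-up coordinate
    system; screen coordinates (y down) would flip the sign."""
    px, py = pivot
    total = 0.0
    prev = None
    for (x, y) in points:
        angle = math.atan2(y - py, x - px)
        if prev is not None:
            delta = angle - prev
            # unwrap jumps across the ±180° boundary
            if delta > math.pi:
                delta -= 2 * math.pi
            elif delta < -math.pi:
                delta += 2 * math.pi
            total += delta
        prev = angle
    return math.degrees(total)
```

A gesture recognizer built on this idea would trigger the rewind once the accumulated counterclockwise angle crosses some threshold, regardless of where on the canvas the fingers are circling.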
Since it may seem that current HCI designs are more about creating and enhancing people’s everyday life than about accomplishing systematic tasks, I find it hard to imagine getting rid of metaphors and metonymies for a while. They represent a bridge between what we perceive as technological and non-technological. I wonder, then, how current metaphors in combination with new styles of interaction will lay the basis for future metaphors/metonymies of the technology we haven’t designed yet.