Archive of articles classified as "Design"

The Best of Both Worlds: Semi-transparent choropleth maps in GeoCommons Maker!

December 8, 2008

When we were building GeoCommons Maker! one of the key map design challenges we faced involved producing semi-transparent choropleth maps. Choropleth maps are perhaps the most common type of thematic map and are regularly used to show data that is attached to enumeration unit boundaries, like states or counties. Ever seen a red state / blue state election map? This is a basic choropleth. There are a lot of more sophisticated ways that choropleths can be made to best represent a given data set, for example, by playing around with classification, categorization, choice of color scheme, etc., but we won’t get into those here.

I want to talk about color. Traditionally, choropleth maps are read by looking at the general pattern of unit colors and/or by matching the colors of specific map units to a legend. Other reference data is often removed from the map because it either 1) is not necessary to communicate the map's primary message or 2) makes communicating that message more difficult. It could be argued, for example, that other reference map information, like green parks, gray airports, brown building footprints, and blue water, distracts readers from seeing the general pattern of choropleth colors on the map, which is where the map's most important message can be found.

For GeoCommons Maker!, we wanted to allow people to make a kind of hybrid, semi-transparent choropleth map that would show both thematic data (colored choropleth map units) AND the rich reference information on popular map tiles (e.g., Google, Microsoft Virtual Earth) without sacrificing map reading and interpretation ability and confidence. We believe that there are lots of times when reference and thematic data can work extremely well together to really benefit a map’s message (e.g., a soils map that shows terrain or a vegetation map that shows elevation). So, we wanted to build this functionality into Maker!, and allow people to make maps that show the best of both worlds.

The Problem with Transparency

The fundamental problem with transparency is that the color of semi-transparent map units can shift due to the visibility of color that lies beneath them. This is not at all surprising, but can make the basic legend matching task difficult, obscure the pattern of color on the map, or, just as bad, make patterns appear out of nowhere. Here's a look at what happens to colors using the same semi-transparent choropleth map units on different backgrounds. These are screen captures from early design mock-ups for Maker!.

The first image shows (hypothetical) opaque choropleth map units with a 7-class color ramp. The next three images show the same units at 50% opacity on top of Google terrain, streets, and satellite imagery. Notice how colors shift when compared to the opaque map at top? See how lightly colored units nearly disappear on the streets map, and darkly colored units nearly disappear on the satellite map? Yikes!
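To make the color shift concrete, here is a minimal sketch of standard source-over alpha blending, which is roughly what happens on screen when a semi-transparent fill sits on a basemap tile. The function name and sample colors are illustrative and are not taken from Maker!'s actual code.

```python
def composite_over(fill_rgb, tile_rgb, opacity=0.5):
    """Source-over blend: the color a reader actually sees when a
    semi-transparent choropleth fill sits on top of a basemap tile."""
    return tuple(round(opacity * f + (1 - opacity) * t)
                 for f, t in zip(fill_rgb, tile_rgb))

# A light class from a hypothetical 7-class ramp washes out on a pale streets tile...
print(composite_over((254, 232, 200), (245, 245, 240)))  # roughly (250, 238, 220)

# ...while a dark class sinks into dark satellite imagery.
print(composite_over((140, 45, 4), (40, 45, 35)))        # roughly (90, 45, 20)
```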

Mock-up of an opaque, 7-class choropleth map for Maker! (Google terrain)

Same mock-up, at 50% opaque (Google terrain)

Same mock-up, at 50% opaque (Google streets)

Same mock-up, at 50% opaque (Google satellite)

The Solution to Transparency

We employed three design solutions to ensure that semi-transparent choropleth maps in Maker! would work, despite potential map reading problems: 1) unit boundaries, 2) data probing, and 3) transparency control.

1) Unit boundaries. In Maker!'s choropleth maps, unit boundaries are color-coded but remain opaque, even when the unit fill color is semi-transparent. This gives map users some true color information to work with, and should improve their ability, and their confidence, in spotting map patterns or matching colors to a legend. In other words, while unit fill colors can get you close, unit boundaries can get you the rest of the way there.
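Here is a minimal sketch of that idea; the styling keys and function name are generic placeholders of my own, not Maker!'s API.

```python
def unit_style(class_color, fill_opacity):
    """Hypothetical choropleth unit style: the fill may be semi-transparent,
    but the boundary keeps the true class color at full opacity, giving
    readers at least one unblended color to match against the legend."""
    return {
        "fill-color":     class_color,
        "fill-opacity":   fill_opacity,  # e.g. 0.5 for the 50% mock-ups above
        "stroke-color":   class_color,   # same hue as the fill...
        "stroke-opacity": 1.0,           # ...but never blended with the tiles
        "stroke-width":   1,
    }
```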

Screen-shot from Maker! showing opaque choropleth unit boundaries

Corresponding legend for the above map

2) Data probing. We also took advantage of a relatively common and very helpful interactive map feature known as data probing. Exact values for any choropleth map unit can be obtained by clicking on it. In Maker!, we designed the data probing feature to go one step further and give values for all of the possible attributes associated with each map unit, not just the mapped attribute alone (see the scrolly list, shown in the probing pop-up below).
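A bare-bones sketch of that probing behavior might look like the following; the function and the attribute names are hypothetical, purely for illustration.

```python
def probe(unit_attributes, mapped_attribute):
    """Hypothetical data probe: on click, return every attribute for the
    unit (mapped attribute first), not just the value being mapped."""
    rows = [(mapped_attribute, unit_attributes[mapped_attribute])]
    rows += sorted((k, v) for k, v in unit_attributes.items()
                   if k != mapped_attribute)
    return rows

county = {"median_income": 41250, "population": 98042,
          "pct_unemployed": 5.8, "name": "Example County"}
for attribute, value in probe(county, "median_income"):
    print(f"{attribute}: {value}")   # the 'scrolly list' in the pop-up
```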

GeoCommons Maker! Data Probe

3) Transparency control. Finally, we gave mapmakers a transparency control, as well as a chance to take some responsibility for how well their maps communicate. The transparency control lets mapmakers decide what works and what doesn’t. Given the huge range of possible maps that can be made with Maker!, some user controls like this are necessary (as well as being kinda fun!). Here, transparency can be adjusted for a custom fit with any chosen tile set, color scheme, or other mapped data. Settings on the control (shown below) range from 50-100% opaque.
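In code, the control amounts to little more than clamping the requested fill opacity to the allowed range; a minimal sketch (the function name is mine):

```python
def set_fill_opacity(requested):
    """Hypothetical slider handler: Maker!'s control runs from 50% to 100%
    opaque, so anything outside that range gets snapped back in."""
    return min(1.0, max(0.5, requested))

assert set_fill_opacity(0.30) == 0.5   # too transparent: snapped to the floor
assert set_fill_opacity(0.75) == 0.75  # within range: used as-is
assert set_fill_opacity(1.20) == 1.0   # fully opaque is the ceiling
```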

Screen-shot from Maker! showing the transparency control

The Best of Both Worlds

Our decision to include semi-transparent choropleth maps in Maker! should give mapmakers and map users the best of both worlds. A semi-transparent choropleth is truly a hybrid map in that it can potentially offer all the advantages of combining rich reference data (i.e., underlying tile sets) with great thematic data (i.e., overlying choropleth units). Hopefully the choropleth maps coming out of Maker! will be easy to read and good-looking, too!

Simplicity, not Simple

December 1, 2008

The first time I used Google Maps I knew that the world of cartography had just gotten a lot more interesting. It blew my mind. What really struck me then (and still does today) is that I didn’t have to learn how to use it: It just worked. It didn’t come with a manual, and I didn’t need a class in it. Rather, I would think, “Hey, I wonder if…” and sure enough it did just that. First try. It worked. What was happening was that my expectations of the map and the feedback it gave—and the speed at which it gave that feedback—left me feeling empowered to explore more, rather than frustrated or confused.

This is true of all the best tools in our lives: they make us feel confident (even smart), not intimidated or confused or frustrated. And they quickly become completely transparent. The master violinist, photographer, or painter all work so comfortably with their tools that they are able to translate their powerful and nuanced intentions into a physical reality. We've all experienced this – when the interface between our cognitive and emotional selves and the world around us disappears and we are able to lose ourselves in a great book or a great movie (you cease to realize you're sitting in the theatre watching reflected light on a screen or scanning printed characters on a page).

When our tools disappear and become transparent, we are at our best. Psychologists call this 'flow' and I think it is the singular defining state of creativity: it's what happens when we are so deeply engaged with the work we love that we lose track of time and need to be prompted by loved ones to stop and eat occasionally. Mozart had this problem, Newton did too, and, to a lesser extent, so do I every time I fire up Google Earth. I often joke that Google Earth should come with a warning label: You will be here happily for hours. Proceed with caution.

Now reflect on how rare that experience is in the world of software and web interface design. Why is that? Why are we content to create maps that merely don’t crash? Below I outline how we might, as designers, aim a little higher.

The problem is, as design guru Donald Norman points out in his classic must-read The Design of Everyday Things, most people expect to be flummoxed by new technology: they expect that it will be hard to learn, that it will be unpleasant at best.

Why else would people refuse to upgrade software despite obvious problems with their current version? Because they've done the math, and the current flaws are better than having to learn a new version. Indeed, most people blame themselves when something doesn't work, saying, "I must be stupid because other people know how to use this," when in fact it's most likely a counter-intuitive UI and poor or missing affordances that are to blame. The only reason other folks know how to use it is that they've learned, through trial and error, how to work around those flaws. If software elicits "That made no sense, but I guess it worked…I'll try to remember that for next time," it is badly designed. If it elicits "I bet I can do x by using y," you've earned your paycheck (ironically, most payroll systems I've seen are horrendous).

How do you know this isn't your interactive map or web site?

Success in UI design is not measured by “Did the person get something at the end?” but rather by “Did they grasp what the tool was capable of and how to use it quickly? What was the mental and physical workload required? How many dead ends did they go down before they found success? Did they enjoy the experience?” among other important questions. The TLX scorecard (designed by NASA) and the GOMS test are two such approaches used by savvy designers to score how well people use their tools, not merely if they can use them (or, as we often see, only use them if they’ve taken lengthy training courses).
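For the curious, here is a rough sketch of how a NASA-TLX style workload score is tallied: six subscale ratings are combined using weights derived from the participant's pairwise comparisons. The numbers below are made up; treat this as an illustration of the scoring idea rather than a faithful implementation of the instrument.

```python
TLX_DIMENSIONS = ["mental demand", "physical demand", "temporal demand",
                  "performance", "effort", "frustration"]

def weighted_tlx(ratings, weights):
    """Overall workload as a weighted mean of the six subscale ratings
    (weights come from 15 pairwise comparisons in the standard procedure)."""
    assert sum(weights.values()) == 15
    return sum(ratings[d] * weights[d] for d in TLX_DIMENSIONS) / 15.0

ratings = {"mental demand": 70, "physical demand": 10, "temporal demand": 40,
           "performance": 55, "effort": 65, "frustration": 80}
weights = {"mental demand": 5, "physical demand": 0, "temporal demand": 2,
           "performance": 3, "effort": 2, "frustration": 3}
print(weighted_tlx(ratings, weights))   # higher score = heavier workload
```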

Case in point: A Big Ten university I know reserves a mandatory full afternoon to show every new employee how to use the phone message system, representing millions in lost productivity. Uh, Houston, I think you might have a problem, and it's not your employees. This isn't just design snobbery; it's a massive waste of money and time.

THE IMPORTANCE OF FLOW

Let me propose that the best user interfaces (UIs) become transparent because they engender "flow." For me, this is the holy grail of good design. The 9 components of flow, based on Csíkszentmihályi's work in the 1970s (see his TED Talk), can be used as a scorecard for any UI we design or use:

  1. Clear goals (expectations and rules are discernible and goals are attainable and align appropriately with one’s skill set and abilities).
  2. Concentrating and focusing, a high degree of concentration on a limited field of attention (a person engaged in the activity will have the opportunity to focus and to delve deeply into it).
  3. A loss of the feeling of self-consciousness, the merging of action and awareness.
  4. Distorted sense of time, one’s subjective experience of time is altered.
  5. Direct and immediate feedback (successes and failures in the course of the activity are apparent, so that behavior can be adjusted as needed).
  6. Balance between ability level and challenge (the activity is neither too easy nor too difficult).
  7. A sense of personal control over the situation or activity.
  8. The activity is intrinsically rewarding, so there is an effortlessness of action.
  9. People become absorbed in their activity, and focus of awareness is narrowed down to the activity itself, action awareness merging.

HOW DO WE GET THERE?

There is no simple magic formula, I suspect, for what defines a good interactive map (or any user interface), but I think some salient qualities include the following:

1. Give Them Something Quickly. Nothing boosts confidence like finding the power button immediately or sliding your credit card in the reader the correct way the first time (aside: omg card reader designers – after 30 years this is the best you can do? Seriously?!!). I've always liked Apple's "welcome" movie that plays the first time you start up your new Mac – all you've done is find the power button but already you feel like you're off to a good start. A blinking c: prompt is no way to greet your clients.

2. Adapt to the Skill-Level of the User. Maps and software should be smart enough to adapt to the level of the user, from offering pro-level shortcuts to revealing more advanced features as needed — not when they can't be used and simply clutter up the screen and taunt the user with "shame you don't know how to activate me 'cause I'm all grayed-out."

3. Understand Affordances. All interfaces, from airport signs to lawnmowers, live or die by them. If you are a designer, you need to eat, breathe, and sleep this stuff, for the world already has too many "Norman Doors." Named after Donald Norman (who talks about them in his aforementioned book), these are doors that have a horizontal bar across them and thus suggest that you're supposed to push them, when in fact you have to pull them (RULE: horizontal bars to push, graspable vertical handles to pull). In the panic of a fire, such design flaws take on new significance.

4. Eliminate Features Ruthlessly. My two current favorite UIs are the Google Search Engine and the iPod. Both are models of simplicity, and both burst onto the scene and quickly dominated their markets by doing the incredibly counter-intuitive thing: offering the user less. They both removed a whole bunch of features the competition thought were essential. Why? Because we are all swamped with too many choices in our lives (see "The Tyranny of Choice") and simpler tools are (1) faster to learn, (2) faster to use, (3) cheaper to make, and (4) less likely to break.

Unlike just about every product ever, over time the iPod has in fact gotten less complicated: The Gen 1 iPod had twice as many buttons as the Gen 3. The iPod shuffle? It got rid of the screen! Heck, the shuffle even eliminated the power jack and let folks recharge directly through the headphone jack (I still don’t know how they did that one). And it sold like hotcakes because it was cheaper to make and easier to use — they killed the features that were underused and kept just the stuff that really mattered to most casual users (pro users can still buy the more powerful models of iPod). And most folks learned, hey, you know, I really didn’t need all that stuff after all, and I just saved a bunch of money…and that is a happy customer.

5. Build a Well-Labeled Emergency Exit. If the user feels like at any moment they might break it, they won't venture far. The best UIs increase confidence by providing reassuring feedback that progress is being made and encourage the user to keep going. We've all been stopped by cryptic warning messages that leave us feeling unsure of whether to proceed or cancel (I usually cancel, unless I'm feeling dangerous or it's not my computer). Good ideas include a big reset button, unlimited undos, and lots of sign posts such as "Before we close the window, would you like me to save your work for you?" More advanced users can turn off such warnings once they're weaned off of them. I have archiving software that asks me three times in three ways if I really, truly want to erase a drive and overwrite it. Given the cost of a mistake (10 years of my entire digital life), those five seconds of forced careful thinking are a suitable insurance premium.
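The emergency exit can be as simple as an unlimited undo history. Here is a toy sketch of the idea; the class name and the example states are made up for illustration.

```python
class UndoHistory:
    """A bare-bones emergency exit: every change is pushed onto a stack,
    so the user can always back out of a mistake (no fixed undo limit)."""
    def __init__(self, initial_state):
        self._states = [initial_state]

    def push(self, new_state):
        self._states.append(new_state)

    def undo(self):
        if len(self._states) > 1:   # never throw away the original state
            self._states.pop()
        return self._states[-1]

doc = UndoHistory("blank map")
doc.push("added choropleth layer")
doc.push("set fill opacity to 50%")
print(doc.undo())   # back to "added choropleth layer", no harm done
```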

Ben's pick for UI of the week: Swedish light switches.

I asked Andy, Ben, and Dave to name their current favorite UIs and Ben suggested these cool light switches in his apartment in Sweden. He writes "(1) really big and square, (2) in the dark, just start smacking the wall, and (3) feels really good to smack the wall in the dark." Hard to argue with that. Compared to the light switches we have here in North America, these are easier to use (less effort) and result in less fumbling (faster to use, fewer mistakes), and they remind us that even the humble light switch can be improved with some careful thinking.

The trouble with pies

November 13, 2008

I made mention of 3D pie charts in an earlier post and thought I'd outline exactly why they are such a bad idea. As both a teacher and designer I campaign hard against "chart junk" and the needless and confusing eye-candy tricks that software companies create to clutter up our lives. I know these companies need to offer something to try and convince us to upgrade to the new version, but let's be clear: drop-shadowing every element on the page, or adding an outer glow to the text, isn't going to make your message any clearer, and will most likely distract from the very thing you're trying to show. My design philosophy can be summed up as:

In cartography, aim to be clear, not cool.

Anything that doesn't contribute to the message, or worse, distracts from it, probably doesn't need to be on the page. Since maps are small and the world is large, every inch on the page and every pixel on the screen has to count, and we can't afford to waste any of them. Draw what you need, and no more. Fans of Edward Tufte, Presentation Zen (recent post on 'chart junk'), or old-school cartographers like J. K. Wright and Arthur Robinson will recognize all of this – this is hardly a new message. But it is one that still needs to be heard, apparently. For a quick overview of many of these arguments I'd strongly recommend reading John Krygier's excellent post "How useful is Tufte for making maps" (his 20 Tufte-isms are a great crash course in Tufte).

Consider this graph (from here):

Pure chart junk: This pie chart commits a half dozen major design mistakes that render it little more than visual junk food (looks tasty at first, but isn't that good for you).

As an information graphic, let's step back and think about what a pie chart is supposed to do and how it works at a perceptual level: Pie charts are used to tell us (1) how much of something exists, and (2) how much that is compared to the other categories. 'How much' is encoded by the size of the pie (or segment) and 'relatively how much' by the internal angles of the segments and/or their relative sizes. To extract data from the graphic you have to be able to quickly visually compute both areas and angles.

The bad news: Years of testing have shown that most of us are really bad at estimating the areas of even simple shapes – if you don't believe me, just try visually estimating how much carpet you'll need for a room, especially if the room isn't square – and we're pretty bad at eyeballing and comparing angles too.

Not convinced, you say? Looking at the pie chart above:

  • What percent of the total does DP Tech have?
  • Is that more or less than IBM?
  • How much more/less?

Now think about presenting those data as a boring old table: DP Tech 4%, IBM 5%. Done. Simple. Think about the difference in mental workload, and in the confidence you have in your answer, when the data are presented as a 3D oblique pie chart versus when they're numbers sitting in front of you. This problem is so commonplace (and yet ignored) that most folks resort to putting numbers on pie charts because the graphic itself is not sufficient, which is a waste of ink, their time, and mine. If you have that little confidence in your charts, just give me the numbers!
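To put numbers on that mental workload, here is the same comparison worked out as pie angles; simple arithmetic, using the shares quoted above.

```python
dp_tech, ibm = 0.04, 0.05            # shares of the total, from the table

angle_dp  = 360 * dp_tech            # 14.4 degrees of arc
angle_ibm = 360 * ibm                # 18.0 degrees of arc

# As a table, "4% vs. 5%" is instant. As wedges, the reader must judge a
# 3.6-degree difference by eye, before the pie is even tilted or exploded.
print(angle_dp, angle_ibm, angle_ibm - angle_dp)
```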

Here’s a rule of thumb I like to use:

If a map/graphic needs ‘crutches’ like number labels and can’t stand on its own, don’t use it. It’s the difference between “A Tale of Two Cities” and “A Tale of Two Cities: A Novel.”

People read the simple 2D pie faster, with greater accuracy AND greater confidence.

The same data are presented three different ways above, and each change made to "enhance" the simple 2D pie chart makes it worse, because the two basic perceptual tasks – 'how big is something' and 'what are the angles' – are much harder to perform when the pie is lying down. This is exactly what happens when design decisions are made in a vacuum and based simply on "it looks cooler this way" rather than on an understanding of what we need from a graphic to make it readable and usable.

Problem 1: Adding the 3rd dimension adds no new information to the graphic. That’s bad because it is wasted ink (that could be doing real work) and it requires the tilting of the pie so the designer can show off the 3D effect. If the height (z-dimension) added some additional data (i.e., a second data variable), it might be worth adding, although I would caution against that since we’re even worse at estimating volumes than we are at estimating areas (which is why “how many jelly beans in the jar” contests or “how big a moving van do I need?” continue to challenge us – we’re terrible at numerically estimating volumes beyond the crude level of “bigger / smaller”).

Problem 2: On both oblique pies the scale is not consistent across the graphic. In other words, the same pie segment will look larger or smaller to you, the observer, simply based on where it lands in the circle: things closer to us look larger, even if they're not. This couldn't be more counter-productive when we're simultaneously asking the viewer to estimate areas. This is an absolute rule: if you expect people to judge the size of things, don't change the scale across the graphic.
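You can see how badly the oblique view skews things with a few lines of trigonometry. When a pie is tilted back it is drawn as an ellipse, so a wedge's on-screen angle is not its true angle; the sketch below assumes the vertical axis is foreshortened to half the horizontal, a tilt chosen purely for illustration.

```python
import math

def apparent_angle(theta_deg, squash=0.5):
    """Screen-space direction of a pie radius after the chart is tilted back:
    the circle becomes an ellipse whose vertical axis is squashed."""
    t = math.radians(theta_deg)
    return math.degrees(math.atan2(squash * math.sin(t), math.cos(t)))

# Two wedges with identical true angles (45 degrees each)...
wedge_a = apparent_angle(45) - apparent_angle(0)    # ~26.6 degrees on screen
wedge_b = apparent_angle(90) - apparent_angle(45)   # ~63.4 degrees on screen
print(wedge_a, wedge_b)   # same data, visibly unequal wedges
```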

Problem 3: Splitting the pie apart makes matters worse because the further objects are from each other, the harder it is to compare them (which is why we like to hold things side-by-side when we want to compare them carefully). Why? The extra time and effort it takes for your eyes to search for and acquire the now-separated segments uses up your precious (visual) working memory and requires more eye trips back and forth to make the same estimation. Cognitive scientists call those back-and-forth trips extraneous cognitive load, which cuts into the available brainpower (working memory) that can be devoted to the real task of comparing segments (intrinsic and germane cognitive load).

Solution? Simple: Stop using oblique, exploded, jazzed-up 3D pie charts. 2D pies work better: they are easier to read, faster to read, and easier to make. Importantly, they can also be drawn much smaller and remain legible – as cartographers we're always looking for ways to do more with less ink. If your PowerPoint slides feel naked without fancy transitions and giant 3D graphics, you'd do better to work on the substance of the talk rather than bury your good ideas in a pile of chart junk.

I'll leave you with one of my all-time favorite spoofs – the Gettysburg Address as PowerPoint.

A new kind of election map

November 8, 2008

Update, Dec. 22: A few variations of the map technique are posted here.

2008 election results with population

We spent some of our spare time last week exploring data from the 2008 presidential election and thinking of some interesting ways to visualize it. Above is one map we put together.

One thing we sought to do was present an alternative to cartograms, which are becoming increasingly popular as post-election maps. Cartograms are typically offered as an alternative to the common red and blue maps showing which states or counties were won by each candidate, wherein one color (presently, red) dominates the map because of the more expansive—but less populated—area won by one candidate. Election cartograms such as the popular set by Mark Newman distort areas to reflect population and give a more accurate picture of the actual distribution of votes. A drawback of cartograms that we're very aware of, however, is that in distorting sizes, shapes and positions are necessarily distorted as well, sometimes to the point of making the geography virtually unrecognizable.

Our map is one suggestion of a different way to weight election results on the map while maintaining correct geography. What we’ve done is start with a simple red and blue map showing which candidate (Republican and Democrat, respectively) won each county in the lower 48 states. Then, to account for the population of those counties (or, the approximate distribution of votes), we’ve adjusted opacity. High-population counties are fully opaque while those with the lowest population are nearly invisible. Against the black background, the highest concentrations of votes stand out as the brightest.
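Conceptually, the styling rule for each county is just a color pick plus an opacity scaled by population. The sketch below illustrates the technique only; the linear scaling, the opacity floor, and the red/blue values are assumptions, not necessarily what was used for the map above.

```python
def county_style(winner, population, max_population):
    """Illustrative version of the technique: hue encodes the winning party,
    opacity encodes (roughly) how many people live there, so sparsely
    populated counties fade toward the black background."""
    color = (180, 30, 40) if winner == "REP" else (35, 100, 170)   # red / blue
    opacity = max(0.02, min(1.0, population / max_population))
    return {"fill-color": color, "fill-opacity": opacity}

print(county_style("DEM", 2_500_000, 9_800_000))  # bright, high-population county
print(county_style("REP",    12_000, 9_800_000))  # nearly fades into the background
```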

We’ll let viewers be the judge of its cartographic effectiveness, but we hope you’ll at least agree that it looks pretty cool!

Click on the image at the top of the post to view a larger version, or see it in a Zoomify viewer, or download the full size (suitable for printing).

I’m not here to make your data look pretty.

October 21, 2008

“Good design is clear thinking made visible” -Edward Tufte

Geographic Visualization, the Artist Formerly Known as Cartography, derives much of its power to speak because it is visual. We humans are voracious abstract visual thinkers: just try not seeing the characters in front of you as words that denote meaning. Or eevn wehn the wrods are sellped wonrg, our bainrs jsut power thugroh fnie, in part because we don't just see letters, we mentally 'chunk' information into high-level structures shaped in large part by a clever bit of programming called prior experience. In fact, we can only read as fast as we do because we don't read individual letters but groups of them called words, and beyond that, at the highest level, because languages have understood rules that make certain combinations of letters and words impossible, allowing our brains to filter out the ridiculous and focus on the likely. This, however, is both a blessing and a curse, since we can process information very, very quickly (hitting a 95 mph fastball), but we often only see what our brains tell us we should expect to see (why trick pitches work). As a result, words, maps and other graphic representations have an expressway into our consciousness, often imparting vast amounts of data in a mere glance. We can't help it – it's literally how we're wired.

While the eye-brain system is a masterpiece of evolution, it also has these well-known limitations and pitfalls. Optical illusions are one such example, including the one below by MIT professor Edward Adelson, which is one of the best I've ever seen: It beautifully illustrates how our brains automagically discount the actual gray of the squares (their real lightness) in order to keep the logic of the checker-floor true (it discounts the shadow cast by the cylinder, our own built-in 'image correction software'). Here's the proof.

The brain sees what it expects to see: Squares A and B are exactly the same shade of gray. For real.

Within visualization we worry about both Type 1 and Type 2 errors: seeing things that aren't there, and missing things that are. Given both the power of graphics to speak so clearly to us and the very real limitations of 'visual thinking', it behooves us not only to use such power wisely, but also to understand, as Alan MacEachren notes, how and why maps work.

Maps, Schmaps

When some visiting speakers come to my department and learn I'm 'the cartographer', it's amazing how many times their next comment is "I know my maps are bad," delivered with a smile or a chuckle and the unspoken "but that doesn't really matter to my message/findings/purpose." Can you imagine if I confessed that I was 'the statistician' and they said "I know my stats are totally wrong" and brushed it off with a smile? This is especially disturbing when these maps are so often the central piece of evidence offered up by these speakers ("as you can see here on the map, there is a clear correlation between…"). It's not so much that this is bad graphic design that worries me; it's that this is bad science.

I've seen many researchers take years to painstakingly collect and verify their data. Science is by design a very slow and thorough process, and it has to be to ensure that our knowledge claims are correct. But after they have sometimes spent years collecting the data, I'm astonished when I see brilliant scientists content to present their findings using clunky maps and graphics that showcase how bad software defaults are and little else (don't get me started on 3D pie charts!). They look as if they were slapped together in 20 minutes, and that saddens me because their work deserves better than this, especially when one considers that these images so often become the public face of the data. Indeed, many famous maps and graphs are produced and reproduced for years, long after the original papers they were attached to have been forgotten. As Edward Tufte has demonstrated time and time again, better-designed graphics would make their arguments clearer and more convincing, and the data richer and more nuanced.

Why Design Matters

To be clear: Good cartography is more than making data pretty. It's a recognition that the best data in the world can be diminished – or worse, distorted – if the map is clumsily executed. It's a recognition that the map is the intuitive and flexible interface between our data and the knowledge we seek to glean from those data. We may live in a glorious digital age, but let's face it, those 1's and 0's we're so good at collecting don't really come alive until we translate them into images and maps and graphs that are representations of data, those data themselves being representations of the real thing. Maps should not, thus, be confused with reality (although they are often assumed to be perfect mirrors of reality).

Most importantly, good design and good map-making is an understanding that the graphic choices we make fundamentally change what our data say, and thus, what we think we know about the world. If we’re sloppy about how we choose to represent our data (and by proxy, the world), then we’re being sloppy about the knowledge those images create inside our heads. This is why relying on software defaults, the one-size-fits-all-needs approach to design, is something we at Axis Maps have worked so hard to fight.

When maps are offered up in the dual role of both 'evidence' of our knowledge claims and the means by which we explain those knowledge claims to others, should they not be subject to at least the same standards that would be applied to any other part of the scientific process (e.g., data quality, statistical significance)? Maps are the ultimate executive summary: caveat emptor.

I'll leave you with a quote from the delightful blog Presentation Zen (August 30th, 2006):

To many business people, design is something you spread on the surface, it’s like icing on a cake. It’s nice, but not mission-critical. But this is not design to me, this is more akin to “decoration.” Decoration, for better or worse, is noticeable, for example — sometimes enjoyable, sometimes irritating — but it is unmistakably *there.* However, sometimes the best designs are so well done that “the design” of it is never even noticed consciously by the observer/user, such as the design of a book or signage in an airport (i.e., we take conscious note of the messages which the design helped make utterly clear, but not the color palette, typography, concept, etc.). One thing is for sure, design is not something that’s merely on the surface, superficial and lacking depth. Rather it is something which goes “soul deep.”
