We are very pleased to announce the launch of indiemapper.com. Indiemapper is a project that is very near and dear to our hearts. When we were starting out as a company, or even before that at the University of Wisconsin, we were constantly talking about the tools available to us as cartographers. Talking might be putting it lightly… we were complaining.
The same things were coming up time and time again. Why is it so hard to make a simple map from digital data? Why did we need to keep PCs around when all of our design work was done on Macs? Why was all the current software so expensive when we were only using 10% of its total functionality?
At the same time, we were building some great online tools for cartography. Ben was building TypeBrewer to help map-makers understand and make better choices with typography. Mark had built ColorBrewer a few years earlier when he was back at Penn State. Dave was working on bringing usable UI controls to temporal and geographic visualization in BallotBank. These programs were built on expert content, usability, and accessibility. Why weren’t web-based tools like this available within the map-making environment?
Flash forward to the spring of 2008. Indiemapper began as a proof-of-concept built by Andy and Zach Johnson during their free time. They wanted to see just how much of the cartographic capabilities of GIS could be moved online using Flash. As it turns out, quite a lot. Originally, we thought that this would be a great code repository from which we could draw ideas and code to build into maps we were making for clients. Then it all came together. Indiemapper was so close! Andy’s original work proved it could be done. We could finally build the application that we ourselves had been wanting for all these years!
Indiemapper is still in development and there’s a lot we’re still learning about the final product. We’re re-coding that original prototype from the ground up to make it robust enough for professional cartographers in a production environment. We’ve redesigned the UI and built in expert choices (colors from ColorBrewer, type from TypeBrewer, data classification, etc.) to make it easy for novice map-makers to produce great looking maps quickly.
We know that there are lots of people like us who are frustrated by the currently available tools because of their price or their functionality. We’re confident indiemapper is for you and will find a place in your mapping workflow.
Recently, we took on a nice little print mapping project for a few hotels located in downtown Madison, Wisconsin. The project involved making a one-sided, page-sized map showing hotel locations and the locations of a few points of interest in the area. The idea was that hotel guests could use the map to find their way around downtown as well as get a sense for where they were staying in relation to the university, interstates, airport, etc. The map was to be printed in grayscale, plus 3 spot colors (red, yellow, and blue).
Before starting out, we discussed the possibility of sharing the project with those interested in seeing all the stuff that goes into designing a map like this. The map design process is notoriously difficult to articulate and we’re keen on the idea of making pieces of it more transparent, where possible. One option was to screen capture the hotel map as it appeared in the production software at regular time intervals from blank page to finished product. So, here is a sequence of 116 images, originally captured at 10-minute intervals, compiled to show the evolution of the hotel map in just under 2 minutes. Clearly, not all maps are made in the same way, but this should expose some of the kinds of design decisions made in a relatively simple project like this.
Watch the larger version of Map Evolution (990 x 766px) — best for seeing change in map details.
After posting our election map last month, we received a number of excellent comments and suggestions. It’s late, but I thought I’d finally post a couple of variations of the map that I’ve managed to find time to put together. The maps below do two things differently from the original:
Vary the brightness of counties by population density rather than total population. This was a frequent suggestion. I think it has a few of its own drawbacks too, but it looks pretty good.
Different color schemes. Just for fun, I’ve used the purple color scheme that has become common in recent elections. I also liked the suggestion in one comment to saturate colors by margin of victory, so I’ve done that too. In these, full blue would be total Obama domination (Obamanation? Obamadom?), full red would be the same for McCain, and gray is an even split.
No snazzy posters this time. Just a few map snapshots.
First, the original colors mapped by population density, as posted in the comments on the original post.
The purple color scheme. First by total population:
And by population density:
Margin of victory by total population:
Margin of victory by population density:
Apologies for any trouble seeing the images. It’s tricky to find a brightness that will look right on every screen.
Perhaps the most basic capability of any custom interactive map we make is the ability to pan and zoom the map. That is, after all, the way to make something that might be the size of a wall poster in print fit on a computer screen and still be readable.
On my personal site I have posted a very basic tutorial and example of ActionScript code for a simple version of the way I typically code panning and zooming. If you’re looking for a starting point for panning and zooming, check it out.
Based on my own experiences, if you’re looking for basic ways to improve upon that minimal functionality, consider these:
Tweening zoom changes
Replacing vector graphics with raster while moving the map (faster performance)
Dynamically drawing and placing symbols on the map
Drawing geographic data (shapefiles, kml, etc.) into a pan/zoom map
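For readers who want a feel for the math underneath those improvements, here is a minimal sketch of the core pan/zoom transform, written in Python rather than ActionScript (the class and method names are my own invention, not the tutorial’s code): panning shifts the map coordinate that sits at the screen center, and zooming rescales around whatever map point is under the cursor so it stays put.

```python
class PanZoomMap:
    """Hypothetical sketch of a pan/zoom view transform (not the tutorial's code)."""

    def __init__(self, screen_w, screen_h):
        self.scale = 1.0                 # map units -> pixels
        self.cx, self.cy = 0.0, 0.0     # map coordinate at the screen center
        self.screen_w, self.screen_h = screen_w, screen_h

    def to_screen(self, mx, my):
        """Project a map coordinate to a screen pixel."""
        sx = (mx - self.cx) * self.scale + self.screen_w / 2
        sy = (self.cy - my) * self.scale + self.screen_h / 2  # screen y grows downward
        return sx, sy

    def pan(self, dx_px, dy_px):
        """Shift the view by a pixel offset, e.g. from a mouse drag."""
        self.cx -= dx_px / self.scale
        self.cy += dy_px / self.scale

    def zoom(self, factor, sx, sy):
        """Zoom by `factor`, keeping the map point under screen (sx, sy) fixed."""
        # Map point currently under the cursor:
        mx = (sx - self.screen_w / 2) / self.scale + self.cx
        my = self.cy - (sy - self.screen_h / 2) / self.scale
        self.scale *= factor
        # Recenter so that point stays under the cursor after rescaling:
        self.cx = mx - (sx - self.screen_w / 2) / self.scale
        self.cy = my + (sy - self.screen_h / 2) / self.scale
```

Tweened zooming is then just animating `scale` (and the recentering) over several frames instead of applying it in one step.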
When we were building GeoCommons Maker!, one of the key map design challenges we faced involved producing semi-transparent choropleth maps. Choropleth maps are perhaps the most common type of thematic map and are regularly used to show data that is attached to enumeration unit boundaries, like states or counties. Ever seen a red state / blue state election map? This is a basic choropleth. There are a lot of more sophisticated ways that choropleths can be made to best represent a given data set, for example, by playing around with classification, categorization, choice of color scheme, etc., but we won’t get into those here.
I want to talk about color. Traditionally, choropleth maps are read by looking at the general pattern of unit colors and/or by matching the colors of specific map units to a legend. Other reference data is often removed from the map because it either 1) is not necessary to communicate the map’s primary message or 2) makes communicating this message more difficult. It could be argued, for example, that other reference map information, like green parks, gray airports, brown building footprints, and blue water, distracts readers from seeing the general pattern of choropleth colors on the map, which is where the map’s most important message can be found.
For GeoCommons Maker!, we wanted to allow people to make a kind of hybrid, semi-transparent choropleth map that would show both thematic data (colored choropleth map units) AND the rich reference information on popular map tiles (e.g., Google, Microsoft Virtual Earth) without sacrificing map reading and interpretation ability and confidence. We believe that there are lots of times when reference and thematic data can work extremely well together to really benefit a map’s message (e.g., a soils map that shows terrain or a vegetation map that shows elevation). So, we wanted to build this functionality into Maker!, and allow people to make maps that show the best of both worlds.
The Problem with Transparency
The fundamental problem with transparency is that the color of semi-transparent map units can shift due to the visibility of color that lies beneath them. This is not at all surprising, but can make the basic legend matching task difficult, obscure the pattern of color on the map, or just as bad, make patterns appear out of nowhere. Here’s a look at what happens to colors using the same semi-transparent choropleth map units on different backgrounds. These are screen captures from early design mock-ups for Maker!.
The first image shows (hypothetical) opaque choropleth map units with a 7-class color ramp. The next three images show the same units at 50% opacity on top of Google terrain, streets, and satellite imagery. Notice how colors shift when compared to the opaque map at top? See how lightly colored units nearly disappear on the streets map, and darkly colored units nearly disappear on the satellite map? Yikes!
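The color shift itself is just standard alpha compositing: each channel of the perceived color is a weighted average of the fill color and whatever tile pixel lies beneath it. A quick sketch (the RGB values here are made up for illustration, not taken from the actual mock-ups):

```python
def composite_over(fg, bg, alpha):
    """Standard alpha-'over' blend: result = alpha*fg + (1-alpha)*bg, per channel."""
    return tuple(round(alpha * f + (1 - alpha) * b) for f, b in zip(fg, bg))

# The same 50%-opaque choropleth fill lands on very different perceived colors
# depending on the tile background underneath it:
fill = (222, 45, 38)                               # a dark red class color (hypothetical)
print(composite_over(fill, (255, 255, 255), 0.5))  # over a light streets tile: washed out
print(composite_over(fill, (60, 70, 55), 0.5))     # over dark satellite imagery: muddied
```

Two units assigned the same class color can therefore read as different classes if they sit over different tiles, which is exactly the legend-matching problem described above.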
The Solution to Transparency
We employed three design solutions to ensure that semi-transparent choropleth maps in Maker! would work, despite potential map reading problems: 1) unit boundaries, 2) data probing, and 3) transparency control.
1) Unit boundaries. In Maker’s choropleth maps, unit boundaries are color coded but remain opaque, even when unit fill color is semi-transparent. This gives map users some true color information to work with, and should improve their ability and confidence in spotting map patterns or matching colors to a legend. In other words, while unit fill colors can get you close, unit boundaries can get you the rest of the way there.
2) Data probing. We also took advantage of a relatively common and very helpful interactive map feature known as data probing. Exact values for any choropleth map unit can be obtained by clicking on it. In Maker!, we designed the data probing feature to go one step further and give values for all of the possible attributes associated with each map unit, not just the mapped attribute alone (see the scrolly list, shown in the probing pop-up below).
3) Transparency control. Finally, we gave mapmakers a transparency control, as well as a chance to take some responsibility for how well their maps communicate. The transparency control lets mapmakers decide what works and what doesn’t. Given the huge range of possible maps that can be made with Maker!, some user controls like this are necessary (as well as being kinda fun!). Here, transparency can be adjusted for a custom fit with any chosen tile set, color scheme, or other mapped data. Settings on the control (shown below) range from 50-100% opaque.
The Best of Both Worlds
Our decision to include semi-transparent choropleth maps in Maker! should give mapmakers and map users the best of both worlds. A semi-transparent choropleth is truly a hybrid map in that it can potentially offer all the advantages of combining rich reference data (i.e., underlying tile sets) with great thematic data (i.e. overlying choropleth units). Hopefully the choropleth maps coming out of Maker! will be easy to read and good looking, too!
The first time I used Google Maps I knew that the world of cartography had just gotten a lot more interesting. It blew my mind. What really struck me then (and still does today) is that I didn’t have to learn how to use it: It just worked. It didn’t come with a manual, and I didn’t need a class in it. Rather, I would think, “Hey, I wonder if…” and sure enough it did just that. First try. It worked. What was happening was that my expectations of the map and the feedback it gave—and the speed at which it gave that feedback—left me feeling empowered to explore more, rather than frustrated or confused.
This is true of all the best tools in our lives: they make us feel confident (even smart), not intimidated or confused or frustrated. And they quickly become completely transparent. The master violinist, photographer, or painter all work so comfortably with their tools that they are able to translate their powerful and nuanced intentions into a physical reality. We’ve all experienced this: when the interface between our cognitive and emotional selves and the world around us disappears, we are able to lose ourselves in a great book or a great movie (you cease to realize you’re sitting in the theatre watching reflected light on a screen, or scanning printed characters on a page).
When our tools disappear and become transparent, we are at our best. Psychologists call this ‘flow’ and I think it is the singular defining state of creativity: it’s what happens when we are so deeply engaged with the work we love that we lose track of time and need to be prompted by loved ones to stop and eat occasionally. Mozart had this problem; Newton had it too, and to a lesser extent, so do I every time I fire up Google Earth. I often joke that Google Earth should come with a warning label: You will be here happily for hours. Proceed with caution.
Now reflect on how rare that experience is in the world of software and web interface design. Why is that? Why are we content to create maps that merely don’t crash? Below I outline how we might, as designers, aim a little higher.
The problem is, as design guru Donald Norman points out in his classic must-read The Design of Everyday Things, that most people expect to be flummoxed by new technology: that it’ll be hard to learn, that it will be unpleasant at best.
Why else would people refuse to upgrade software despite obvious problems with their current version? Because they’ve done the math, and the current flaws are better than having to learn a new version. Indeed, most people blame themselves when something doesn’t work, saying, “I must be stupid because other people know how to use this,” when in fact it’s most likely a counter-intuitive UI and poor or missing affordances that are to blame. The only reason other folks know how to use it is that they’ve learned, through trial and error, how to work around those flaws. If software elicits “That made no sense, but I guess it worked… I’ll try to remember that for next time,” it is badly designed. If it elicits “I bet I can do x by using y,” you’ve earned your paycheck (ironically, most payroll systems I’ve seen are horrendous).
Success in UI design is not measured by “Did the person get something at the end?” but rather by “Did they grasp what the tool was capable of and how to use it quickly? What was the mental and physical workload required? How many dead ends did they go down before they found success? Did they enjoy the experience?” among other important questions. The TLX scorecard (designed by NASA) and the GOMS test are two such approaches used by savvy designers to score how well people use their tools, not merely if they can use them (or, as we often see, only use them if they’ve taken lengthy training courses).
Case in point: A Big Ten university I know reserves a mandatory full afternoon to show every new employee how to use the phone message system, representing millions in lost productivity. Uh, Houston, I think you might have a problem, and it’s not your employees. This isn’t just design snobbery; it’s a massive waste of money and time.
THE IMPORTANCE OF FLOW
Let me propose that the best user interfaces (UIs) become transparent because they engender “flow.” For me, this is the holy grail of good design. The nine components of flow, based on Csíkszentmihályi’s work (see his TED Talk) in the 1970s, can be used as a scorecard for any UI we design or use:
Clear goals (expectations and rules are discernible and goals are attainable and align appropriately with one’s skill set and abilities).
Concentrating and focusing, a high degree of concentration on a limited field of attention (a person engaged in the activity will have the opportunity to focus and to delve deeply into it).
A loss of the feeling of self-consciousness, the merging of action and awareness.
Distorted sense of time, one’s subjective experience of time is altered.
Direct and immediate feedback (successes and failures in the course of the activity are apparent, so that behavior can be adjusted as needed).
Balance between ability level and challenge (the activity is neither too easy nor too difficult).
A sense of personal control over the situation or activity.
The activity is intrinsically rewarding, so there is an effortlessness of action.
People become absorbed in their activity, and focus of awareness is narrowed down to the activity itself, action awareness merging.
HOW DO WE GET THERE?
There is no simple magic formula, I suspect, for what defines a good interactive map (or any user interface) but I think some salient qualities include the following:
1. Give Them Something Quickly. Nothing boosts confidence like finding the power button immediately or sliding your credit card in the reader the correct way the first time (aside: omg, card reader designers, after 30 years this is the best you can do? Seriously?!). I’ve always liked Apple’s “welcome” movie that plays the first time you start up your new Mac: all you’ve done is find the power button, but already you feel like you’re off to a good start. A blinking c: prompt is no way to greet your clients.
2. Adapt to the Skill-Level of the User. Maps and software should be smart enough to adapt to the level of the user, from offering pro-level shortcuts to revealing more advanced features as needed, not when they can’t be used and simply clutter up the screen and taunt the user with “shame you don’t know how to activate me ’cause I’m all grayed-out.”
3. Understand Affordances. All interfaces, from airport signs to lawnmowers, live or die by them. If you are a designer, you need to eat, breathe, and sleep this stuff, for the world already has too many “Norman Doors.” Named after Donald Norman (who talks about them in his aforementioned book), these are doors that have a horizontal bar across them and thus suggest that you’re supposed to push them, when in fact you have to pull them (RULE: horizontal bars to push, graspable vertical handles to pull). In the panic of a fire, such design flaws take on new significance.
4. Eliminate Features Ruthlessly. My two current favorite UIs are the Google search engine and the iPod. Both are the model of simplicity, and both burst onto the scene and quickly dominated their markets by doing the incredibly counter-intuitive thing: offering the user less. They both removed a whole bunch of features the competition thought were essential. Why? Because we are all swamped with too many choices in our lives (see “The Tyranny of Choice”) and simpler tools are (1) faster to learn, (2) faster to use, (3) cheaper to make, and (4) less likely to break.
Unlike just about every product ever, over time the iPod has in fact gotten less complicated: The Gen 1 iPod had twice as many buttons as the Gen 3. The iPod shuffle? It got rid of the screen! Heck, the shuffle even eliminated the power jack and let folks recharge directly through the headphone jack (I still don’t know how they did that one). And it sold like hotcakes because it was cheaper to make and easier to use — they killed the features that were underused and kept just the stuff that really mattered to most casual users (pro users can still buy the more powerful models of iPod). And most folks learned, hey, you know, I really didn’t need all that stuff after all, and I just saved a bunch of money…and that is a happy customer.
5. Build a Well Labeled Emergency Exit. If the user feels like at any moment they might break it, they won’t venture far. The best UIs increase confidence by providing reassuring feedback that progress is being made and encourage the user to keep going. We’ve all been stopped by cryptic warning messages that leave us feeling unsure of whether to proceed or cancel (I usually cancel, unless I’m feeling dangerous or it’s not my computer). Good ideas include a big reset button, unlimited undos, and lots of sign posts such as “Before we close the window, would you like me to save your work for you?” More advanced users can turn off such warnings once they’re weaned off of them. I have archiving software that asks me three times in three ways if I really, truly want to erase a drive and overwrite it. Given the cost of a mistake (10 years of my entire digital life) this five seconds of forced careful thinking is a suitable insurance premium.
I asked Andy, Ben, and Dave to name their current favorite UIs, and Ben suggested these cool light switches in his apartment in Sweden. He writes “(1) really big and square, (2) in the dark, just start smacking the wall, and (3) feels really good to smack the wall in the dark.” Hard to argue with that. Compared to the light switches we have here in North America, these are easier to use (less effort) and result in less fumbling (faster to use, fewer mistakes), and they remind us that even the humble light switch can be improved with some careful thinking.
I made mention of 3D pie charts in an earlier post and thought I’d outline exactly why they are such a bad idea. As both a teacher and designer I campaign hard against “chart junk” and the needless and confusing eye candy tricks that software companies create to clutter-up our lives. I know these companies need to offer something to try and convince us to upgrade to the new version, but let’s be clear: drop-shadowing every element on the page, or adding an outer glow to the text isn’t going to make your message any clearer, and will most likely distract from the very thing you’re trying to show. My design philosophy can be summed up as:
In cartography, aim to be clear, not cool.
Anything that doesn’t contribute to the message, or worse, distracts from it, probably doesn’t need to be on the page. Since maps are small and the world is large, every inch on the page and every pixel on the screen has to count, and we can’t afford to waste any of them. Draw what you need, and no more. Fans of Edward Tufte, Presentation Zen (recent post on ‘chart junk’), or old-school cartographers like J. K. Wright and Arthur Robinson will recognize all of this; it is hardly a new message. But it is one that still needs to be heard, apparently. For a quick overview of many of these arguments I’d strongly recommend reading John Krygier’s excellent post “How useful is Tufte for making maps” (his 20 Tufte-isms is a great crash-course in Tufte).
Let’s step back and think about what a pie chart, as an information graphic, is supposed to do and how it works at a perceptual level. Pie charts are used to tell us (1) how much of something exists, and (2) how much that is compared to the other categories. ‘How much’ is encoded by the size of the pie (or segment) and ‘relatively how much’ by the internal angles of the segments and/or their relative sizes. To extract data from the graphic you have to be able to quickly visually compute both areas and angles.
The bad news: Years of testing has shown that most of us are really bad at estimating the areas of even simple shapes – just try visually estimating how much carpet you’ll need for a room, especially if the room isn’t square if you don’t believe me – and we’re pretty bad at eyeballing and comparing angles too.
Not convinced, you say? Looking at the pie chart above:
What percent of the total does DP Tech have?
Is that more or less than IBM?
How much more/less?
Now think about presenting those data as a boring old table: DP Tech 4%, IBM 5%. Done. Simple. Think about the difference in mental workload, and the confidence you have in your answer, when the data are presented as a 3D oblique pie chart versus when they’re numbers sitting in front of you. This problem is so commonplace (and yet ignored) that most folks resort to putting numbers on pie charts because the graphic itself is not sufficient, which is a waste of ink, their time, and mine. If you have that little confidence in your charts, just give me the numbers!
Here’s a rule of thumb I like to use:
If a map/graphic needs ‘crutches’ like number labels and can’t stand on its own, don’t use it. It’s the difference between “A Tale of Two Cities” and “A Tale of Two Cities: A Novel.”
The same data is presented three different ways above, and each change made to “enhance” the simple 2D pie chart makes it worse, because the two basic perceptual tasks, ‘how big is something’ and ‘what are the angles’, are much harder to perform when the pie is lying down. This is exactly what happens when design decisions are made in a vacuum and based simply on “it looks cooler this way” rather than on an understanding of what we need from a graphic to make it work.
Problem 1: Adding the 3rd dimension adds no new information to the graphic. That’s bad because it is wasted ink (that could be doing real work) and it requires the tilting of the pie so the designer can show off the 3D effect. If the height (z-dimension) added some additional data (i.e., a second data variable), it might be worth adding, although I would caution against that since we’re even worse at estimating volumes than we are at estimating areas (which is why “how many jelly beans in the jar” contests or “how big a moving van do I need?” continue to challenge us – we’re terrible at numerically estimating volumes beyond the crude level of “bigger / smaller”).
Problem 2: On both oblique pies the scale is not consistent across the graphic. In other words, the same pie segment will look larger or smaller to you, the observer, simply based on where it lands in the circle: things closer to us look larger, even if they’re not. This couldn’t be more counter-productive when we’re simultaneously asking the viewer to estimate areas. This is an absolute rule: If you expect people to judge the size of things, don’t change the scale across the graphic.
Problem 3: Splitting the pie apart makes matters worse because the further objects are from each other, the harder it is to compare them (which is why we like to hold things side-by-side if we want to carefully compare them). Why? The extra time and effort it takes for your eyes to search for and acquire the now-separated segments uses-up your precious (visual) working memory and requires more eye trips back-and-forth to make the same estimation. Cognitive scientists call those back-and-forth trips extraneous cognitive load which cut the available brainpower (working memory) that can be devoted to the real task of comparing segments (intrinsic and germane cognitive load).
Solution? Simple: stop using oblique, exploded, jazzed-up 3D pie charts. 2D pie charts work better: they’re easier to read, faster to read, and easier to make. Importantly, they can also be drawn much smaller and remain legible; as cartographers we’re always looking for ways to do more with less ink. If your PowerPoint slides feel naked without fancy transitions and giant 3D graphics, you’d do better to work on the substance of the talk, rather than bury your good ideas in a pile of chart junk.
Update, Dec. 22: A few variations of the map technique are posted here.
We spent some of our spare time last week exploring data from the 2008 presidential election and thinking of some interesting ways to visualize it. Above is one map we put together.
One thing we sought to do was present an alternative to cartograms, which are becoming increasingly popular as post-election maps. Cartograms are typically offered as an alternative to the common red and blue maps showing which states or counties were won by each candidate, wherein one color (presently, red) dominates the map because of the more expansive—but less populated—area won by one candidate. Election cartograms such as the popular set by Mark Newman distort areas to reflect population and give a more accurate picture of the actual distribution of votes. A drawback of cartograms that we’re very aware of, however, is that in distorting sizes, shapes and positions are necessarily distorted, sometimes to the point of making the geography virtually unrecognizable.
Our map is one suggestion of a different way to weight election results on the map while maintaining correct geography. What we’ve done is start with a simple red and blue map showing which candidate (Republican and Democrat, respectively) won each county in the lower 48 states. Then, to account for the population of those counties (or, the approximate distribution of votes), we’ve adjusted opacity. High-population counties are fully opaque while those with the lowest population are nearly invisible. Against the black background, the highest concentrations of votes stand out as the brightest.
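The post doesn’t spell out the exact scaling we used, but the population-to-opacity mapping can be sketched roughly like this (a log scale and a small opacity floor are plausible assumptions here, chosen so that huge metro counties don’t swamp the range and tiny counties don’t vanish entirely):

```python
import math

def county_opacity(pop, pop_min, pop_max, floor=0.05):
    """Map a county population to an opacity in [floor, 1.0].

    Log-scaled normalization is an illustrative choice, not necessarily
    the scaling used on the actual map.
    """
    t = (math.log(pop) - math.log(pop_min)) / (math.log(pop_max) - math.log(pop_min))
    t = max(0.0, min(1.0, t))          # clamp to [0, 1]
    return floor + (1 - floor) * t

# Hypothetical counties, from tiny to LA-sized:
for pop in (1_000, 50_000, 9_800_000):
    print(round(county_opacity(pop, 1_000, 9_800_000), 2))
```

Against the black background, opacity then acts as brightness: the densest clusters of votes glow, and sparsely populated counties fade toward invisibility regardless of which candidate won them.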
We’ll let viewers be the judge of its cartographic effectiveness, but we hope you’ll at least agree that it looks pretty cool!
Click on the image at the top of the post to view a larger version, or see it in a Zoomify viewer, or download the full size (suitable for printing).
I love ColorBrewer. All of us here at Axis rely on it almost daily and it’s helped us to make nice looking maps quickly; and that’s what good tools do, they make their users look really good at their jobs.
7+ years later, ColorBrewer is due for some changes and Cindy Brewer has been kind enough to ask us to hold the scalpel. Nothing major. Same great color schemes (of course), but a new interface and some new functionality to help ColorBrewer’s 2000 visitors per week get the most out of the experience.
We’re in the early stages of planning this project, but we thought we would open it up for some discussion amongst the ColorBrewer-using, Axis Maps Blog-reading masses.
QUESTION: What would you like to see in the new version? What should remain untouched? What do you love? What do you wish was done better?
Let us know your thoughts in the comments. Thanks!
A few months ago I started on a little side project to visualize presidential campaign speeches spatially. My idea was to collect speeches by the 2008 US presidential candidates, generate a word cloud of the most common words in each, and place each word cloud on a map in the location where the speech was given. We’ve seen a number of text visualizations and analyses, sometimes in-depth, during this campaign, but so far none by geography that I can recall. (See those from Martin Krzywinski, and The New York Times with help from Many Eyes, for just a few examples.) Are the candidates speaking to different issues in different parts of the country? Are they talking about jobs in Michigan and immigration in New Mexico? Are they pandering to everyone, everywhere they go? (Can we call this project PanderViz?) Visualizing campaign words on a map might answer such questions.
We hoped to develop this idea into a sophisticated interactive map in which a user could search for words, filter speeches by date, and so on. Other work has kept us from doing that before the election next week, but it seems worth showing some screenshots from what I did manage to get done originally.
I went to the official websites of the Obama and McCain campaigns, where the text of speeches is transcribed, and ran the speeches through a simple PHP script to count words and record the location of the speech. This week I revisited the sites to catch up on speeches since the summer. These sources have their drawbacks, of course. For one, although as prepared speeches they contain perhaps the most carefully chosen words for particular audiences, they do not represent the complete vocabulary used on the campaign trail. Also, Obama’s team has been more diligent in posting speeches, it seems, providing close to 80 speeches since May, compared to about 30 for McCain, a disparity that makes comparison between the two candidates a bit difficult.
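The counting step itself is straightforward. Here is a sketch of the idea in Python (the original was a PHP script; the tokenizing regex and the abbreviated stopword list below are illustrative, not the script’s actual rules):

```python
from collections import Counter
import re

# A heavily abbreviated stopword list for illustration; a real one would be longer.
STOPWORDS = {"the", "and", "to", "of", "a", "in", "that", "we", "is", "for", "will"}

def top_words(speech_text, n=10):
    """Count word frequencies in a speech transcript, dropping stopwords
    and very short words (a sketch of the idea behind the original script)."""
    words = re.findall(r"[a-z']+", speech_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return counts.most_common(n)

print(top_words("Jobs, jobs, jobs. We will bring jobs back to Michigan.", 3))
```

Pair each speech’s top words with the latitude/longitude of the venue and you have the raw material for a word cloud anchored at the place the speech was delivered.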
The farthest I got with the capabilities of this map was generating scale-dependent word clouds (I’ve written more about those on my personal site) and searching for individual words to display proportional symbols representing the frequency of use. With less than a week until election day, we might as well get out of it what we can, so I’ve generated a series of maps of word clouds and individual word frequencies.
The whole series is long—obnoxiously long for a blog page—so it’s at a separate page, linked below. Enjoy, and please comment if there’s an interesting word to look up that I didn’t think of!