Lots of maps are coming out that document when, where, and how stimulus money is being spent through the ARRA, like these at the Foundation Center. With all of the reporting, accountability, and transparency required of ARRA grant recipients, I’m sure we’ll be seeing a lot more of these in the future. Recovery.gov directs traffic to states’ Web sites where some of this data is appearing. I’m looking forward to seeing more and more mash-ups and interactive maps and graphics as developers and designers get their hands on this stuff and on data from other sources that track stimulus money.
For now, we decided to get involved by putting together a static map that shows where our ARRA tax dollars are going for energy-related programs administered by the DOE. As underlying layers, the map shows states’ historical energy consumption trends and their projected trends required to meet consumption goals set for 2012.
I’m sure we could all talk about the politics around ARRA funding and energy consumption and how this might or might not be shaped by patterns that the map does or doesn’t show. But to me, a few of the most interesting things about this map are related to its design:
1) Encoding data in state boundaries
I’ve always been attracted to National Geographic political reference maps, with their countries each outlined in a different color. On those maps, outline color clearly helps distinguish one place from another. Plenty of other maps use enumeration unit outlines to represent data, too, like those that categorize administrative boundaries using line weight, dashes and dots, etc. So what’s to stop us from applying this idea to a thematic map? Why not take it one step further and encode numerical data, as opposed to nominal data, in unit outlines? I haven’t seen many examples of this.
The main limitations here are line weight and unit size. Line weight has to be heavy enough so that color can be seen and read. For my map, this seemed to work best above around 4 pts. Only thing is, as enumeration units get smaller, the outline can eat up more interior space and obscure the presence of a second data set, which in this case is the historical energy consumption trend, encoded using unit fill color. So, I had to cheat a little bit with some small states and states with small pieces (e.g., Delaware and Maryland) and decrease the line weights a bit under 4 pts. I don’t see this approach working very well with really small enumeration units like US counties, unless the map scale is really huge.
2) Color selection
The challenge here was to select colors for three data sets (historical energy, projected energy, and ARRA money) that not only encoded data properly but were harmonious (i.e., not competing or ugly). The historical energy data set has a natural midpoint around zero, so it needed a diverging color scheme. On the other hand, the projected energy data, having no midpoint, required a sequential scheme (thanks to ColorBrewer 2.0 for both sets of specs). Proportional rings for ARRA money just needed to be readable and look nice on top of the other colors.
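Mechanically, a diverging scheme just means classing values symmetrically around the midpoint and picking colors from a ramp that pivots on a neutral middle. A minimal Python sketch of how the historical-trend fills could be assigned; the class breaks are hypothetical, and the hex values are illustrative ColorBrewer-style blue-white-red colors, not the exact specs used on the map:

```python
# Assign each state's historical trend (e.g., % change in energy
# consumption) to a class in a 5-class diverging ramp centered on zero.
# Colors and breaks are illustrative, not the map's actual specs.
DIVERGING = ["#0571b0", "#92c5de", "#f7f7f7", "#f4a582", "#ca0020"]
BREAKS = [-10, -2, 2, 10]  # hypothetical class breaks, symmetric about 0

def classify(value, breaks=BREAKS):
    """Return the index of the class that `value` falls into."""
    for i, b in enumerate(breaks):
        if value <= b:
            return i
    return len(breaks)

def trend_color(value):
    """Look up the diverging-ramp color for a trend value."""
    return DIVERGING[classify(value)]

# A trend near zero lands on the neutral middle color; strongly
# increasing consumption lands on the alarming red end.
print(trend_color(0))    # "#f7f7f7"
print(trend_color(50))   # "#ca0020"
```

A sequential scheme for the projected data would be the same lookup with one-directional breaks and a light-to-dark ramp instead of a pivoting one.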
Here are some earlier attempts at getting color right. In my first try, I used a grayscale sequential ramp for the historical data (state fill color), matching the middle value to the map’s background for a pseudo-diverging ramp feel. But this seemed overly subtle and downplayed the importance of clearly distinguishing states with decreasing and increasing energy consumption trends.
So, my next try was to replace the grayscale ramp with a true diverging ramp. Yuck. The mix of red outlines and fill colors bothered me on a purely aesthetic level. Other diverging ramps with other hues in them produced similarly ugly results.
The final colors for historical energy consumption trends (blue-white-red) seem to best emphasize the data’s midpoint, with red doing its part to connote “alarm” in the states with a poor track record. The projected energy consumption data set is now lower down in the visual hierarchy (shown using a grayscale color ramp on state outlines), but this seems to be an acceptable compromise. Using gray prevents these two ramps from competing for attention or overlapping and confusing the map reader. From my perspective, at least, it also results in a (yes, subjective) improvement in overall color harmony.
Other thoughts about the ARRA funding map? Please add them to the comments.
Google Earth is amazing. As we’ve commented here before, it continues to blow our minds and has also done wonders for the popularity of maps. And let’s be honest, it looks super cool. There is no doubt that Google Earth is much sexier than that boring old atlas collecting dust on your shelf: It’s interactive, seamlessly integrates distributed data sources, animates the surface of the earth over time, facilitates virtual communities, can be customized by both developer and user, etc, etc. It’s hard to not be impressed.
So all of our maps should be in Google Earth, right?
In fact, despite recent efforts to create a suite of thematic mapping approaches, Google Earth is a terrible environment for presenting many kinds of thematic maps. I’d go so far as to say that the 3D prism maps and 3D graduated symbol maps we see popping up in Google Earth are pure chart junk, of the kind Tufte has warned us about repeatedly for the past 25 years.
Chart junk takes what should have been a simple-to-read graphic and makes extracting information (1) slower, (2) more difficult, and (3) more prone to reading errors, because of excessive ornamentation and unnecessary design additions—like adding a 3D effect that communicates nothing in and of itself but simply “looks cool.” This is not idle speculation: Research consistently shows chart junk and “redundant ink” hurt otherwise fine graphics.
Want to see for yourself? Download these two example KML/KMZ files from blog.thematicmapping.org and run them in Google Earth. While you’re looking at them try to extract numbers or compare places: KMZ File 1 | KML File 2
“BUT THEY LOOK COOL”: A TECHNOLOGY IN SEARCH OF A PROBLEM
As Abraham Maslow said, “If the only tool you have is a hammer, you will see every problem as a nail.” This seems to be the case with virtual globes and the developers who love them and insist that any and all kinds of thematic data belong there. Instead, I’d challenge us to take a step back and ask,
WHY DO WE MAKE THEMATIC MAPS?
For a long time folks like Robinson, Dent, and MacEachren have been arguing that thematic maps exist to support two basic tasks: (1) the ability to extract numbers/facts about specific places (e.g., 15C in Paris) and (2) the ability to judge those values in geographic relation to other places (e.g., 5C warmer than London, about the same as Milan). In other words, we want both specific details and overall patterns to be obvious on our thematic maps. And we want all of that AT A GLANCE.
The problem with digital globes (as with all globes) is you can’t see half the planet and, due to curvature, really only about a third of the planet clearly at once. Which leaves us with a conundrum: If you’re only mapping a small place (e.g., a country), why do you need to have it on a globe? And if you have a global dataset, why would you allow your readers to only ever see half the data at once? They can rotate the globe (more on this later) but they’ll never be able to see the entire dataset at once. That makes understanding overall patterns very difficult, and asking folks to “remember” half of a global dataset while they spin the globe to the other side is far, far beyond the meager limits of our working memory. If you’re not convinced, just try it.
KNOW YOUR HISTORY
What makes these recent developments even more frustrating is that in the 70s and 80s, with the advent of digital map making, cartographers flirted with, and largely rejected, faux 3D prism maps and 3D graduated symbol maps (like the two examples above) since they suffered from several limits:
visual occlusion (not all of the map can be seen at once since some places hide others)
people suck at estimating volumes, especially of complex shapes (e.g., try estimating the size of moving van you’ll need for your home)
mental rotation of complex shapes is extremely hard, so hard that it is often used as a measure of intelligence in IQ tests.
Many a thesis and dissertation was written in the past 40 years demonstrating these limits to human visual processing.
The nice thing about Virtual Earths is that you can rotate them, so the problem of visual occlusion is solved, right? Yes and no. Yes, interactivity and the ability to rotate the globe can help reveal hidden places, but no, these virtual globes introduce a significant extraneous cognitive load because the user must now think about controlling the globe (not always easy with a mouse) while also trying to focus on the thematic content. In fact, inserting a complex task (like visually acquiring the Google Earth controls and figuring out how to move/scale/reposition the globe) between two other tasks effectively “flushes” short-term working memory. It’s a kind of mental sorbet, which is why giving folks something distracting to do is a common trick in memory tests (they lose their train of thought). Why would we deliberately do this to our map-readers?
BIG PROBLEM: INCONSISTENT SCALE
In the examples above it is really hard to judge relative sizes. Why? Because the scale of the symbols is constantly changing, and the ones closer to the viewer are much larger (and at a different scale) than the ones far away. Given that it has been long established in cartography that people are terrible at estimating sizes, and even worse at estimating volumes, it is utterly inane to compound this failure by drawing the symbols at different scales. Of course it is worse than this: Rotating the globe slides each symbol through its own scale transformation path, changing in size with every pixel the map is moved.
This is an absolute rule: If you want to give people the best chance to judge the relative sizes of objects, they should all be drawn at the same scale.
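On a flat map that rule is easy to honor: every symbol comes out of the same data-to-size function. A sketch of the classic proportional symbol approach (the maximum radius is an arbitrary design choice, not a rule from the post):

```python
import math

def symbol_radius(value, max_value, max_radius=30.0):
    """Proportional symbol sizing: symbol AREA is proportional to the
    data value, so radius scales with the square root of the value.
    Because every symbol uses the same function, all symbols share
    one scale and can be compared directly."""
    return max_radius * math.sqrt(value / max_value)

# Doubling the value doubles the symbol's area (radius grows by sqrt(2)):
r1 = symbol_radius(100, 400)   # 15.0
r2 = symbol_radius(200, 400)   # ~21.2
print(r1, r2)
```

A perspective view breaks exactly this guarantee: the same function produces symbols that render at different on-screen scales depending on their distance from the viewer.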
STILL NOT CONVINCED? LET’S DO SOME USER TESTING
TASK #1: As quickly as you can, how does Nepal compare to Uzbekistan?
TASK #2: As quickly as you can, find all of the other places on the map similar to Nepal. Which place is most similar? Which one least?
Hard, isn’t it? To be honest, it shouldn’t be: A regular 2D classed choropleth map or proportional symbol map would make short work of those questions. So what did we gain by extruding the countries up into space? Not much that I can see.
The lack of a zero-line referent makes it hard to judge absolute magnitudes.
The “fish eye lens” effect means each prism is viewed from a different angle than its neighbors, making comparison just a little bit harder as we have to mentally account for these differences in our estimates.
It is hard to judge the height of something when you are staring directly down at it. This matters because height is the visual variable that does the “work” in this graphic—it’s how the data are encoded visually. Why obscure the very thing map-readers need to make sense of the graphic (e.g., the side-view height of each polygon)?
I need to be convinced of two things: (1) something is fundamentally wrong with our proven and highly efficient planimetric thematic maps, and (2) that reprojecting this data onto a virtual globe somehow solves those problems. Otherwise, we truly have a cool new technology in search of an application, and that’s just putting the cart before the horse.
Some suggestions: First, unless the 3rd dimension communicates something and isn’t merely redundant data already encoded in the colors, sizes, etc., do not include it (for all the reasons outlined above). Second, if you want folks to perform “analytical map reading tasks” such as estimating relative sizes, distances, or densities, keep scale constant. Third, do not obscure parts of the map behind other parts if that isn’t inherently relevant to the data (e.g., this is fine for terrain visualization). Fourth, and most importantly, do some user testing before presenting a new technique as the best thing ever: It’s how research works and why it is important.
So what things are Google Earth (and other Virtual Globes) good for? The consensus around here is (1) to engender, quite powerfully at times, a qualitative “sense of place” or “immersion”; (2) for virtual tourism (e.g., sit on top of Mt Everest) or virtual architecture/planning; and (3) to perform a kind of viewshed analysis and see what can and cannot be seen from locations (line-of-sight). All of those are inherently 3D-map reading tasks in which the immersive, 3D nature of the map is important. By comparison, population data (one number per country) is NOT inherently 3-dimensional and is only made to suffer when dressed-up in prism maps and 3D figurines.
Cartography, like all good design, is about communicating the maximum amount of information with the least amount of ink (or pixels). The world is just too complex and interesting to be wasting our ink/pixels on non-functioning ornamentation.
Do you need a cartography degree to make maps? As the only trained cartographer on the panel, they just couldn’t wait to ask me this question (could I really say that Stamen’s “non-cartographers” shouldn’t be making maps?). I gave the popular answer, “No,” but with a caveat: “You just need to care about cartographic design.” Elegant design and clear communication are universal to all aspects of design. Cartographers have a slight leg up in the map game because we’ve been using our design chops to get good at applying these universal concepts to maps, but concepts like subtle use of color, visual hierarchy and map / UI composition can be applied directly from graphic design to map design. Incidentally, this is the hardest stuff to teach to cartography students. However, there is a lot of cartographic design that is uniquely geographic. Issues like projections, thematic symbolization and generalization don’t exist outside of maps and largely exist because of the challenges of representing a complex world on a small, flat piece of paper. These same issues remain even moving from paper to the computer screen, but unfortunately they are largely ignored. On a preachy note, I think it is our responsibility as cartographers to CONSTRUCTIVELY engage ourselves with the new mapping discourse.
What’s with neocartography? Neocartography is tricky to define (and the definition, I think, is changing every day), so I’ll take the coward’s way out and define it as:
But “Where2.0” covers it pretty well. Location (that’s the where) is EVERYTHING. It’s an on-demand (that’s the 2.0) reference-map world where apps need to know WHAT you’re looking for so they can tell you WHERE it is. A lot of cartographers (especially those educated in Geography) probably feel disengaged from the new movement because they are looking for “Why3.0.” We want to make thematic maps that explain the world instead of just locating a tiny part of it. And unbelievably, with two people on the panel who helped build it, we never showed off Geocommons Maker and its thematic mapping to the audience. We could have started the Why3.0 movement then and there!
What about the 9,000 lb Google-shaped elephant in the room? Instead of listening to me prattle on about projections and choropleth classification schemes, it seemed like the audience would rather hear what Google, represented by Elizabeth, their Maps UX Designer, had to say about mapping. Me too. Even though we are both making maps on the Internet, our issues couldn’t be more different. Where we can agonize over cartographic and UI issues, Google constantly needs to consider issues of scalability. With their maps viewed by millions of people (horrible problem to have, right?), design decisions take on massive significance. The UI and interactivity set worldwide expectations on what an interactive map should be (look at panning / zooming controls on all the major map providers to see their influence). They’ve become masters of the universal elements of cartographic design but have not addressed (or have been constrained by) the uniquely cartographic issues. Because Google sets the tone for mapping on the web, the web-mapping community has believed that these issues cannot or should not be dealt with.
Anything else? Just a couple things:
Cloudmade and OpenStreetMap are going to be huge. They are going to improve the state of cartography on the web and engage both experts and the public with mapping in entirely new ways.
GPS is coming to social networks. This is going to be MASSIVELY HUGE. In 3 years, “location-aware” won’t be a buzzword anymore, it will be an assumed feature. There are going to be insane amounts of spatial data and I, for one, cannot wait to face all of the display challenges it’s going to pose.
Stamen kicks ass and they’ve set the bar high for top-shelf online mapping. It’s hard to share a stage with Mike Migurski when he has such awesome maps and visualizations at his disposal. What a show-off.
It was great to meet Elizabeth and some of the Google Maps team. I wish I could have pried more Google secrets from them but they’re too tight-lipped.
Andrew Turner at FortiusOne is one of the most plugged-in, active people working in the neocartography field. Thanks to him for putting together a great panel and keeping us in line.
Everyone at SXSW had an iPhone.
Everyone communicated via Twitter.
Favorite quote: “The difference between unemployed and self-employed is only in your head.”
Favorite panel: How to Give Better Presentations – To unfairly summarize, be gimmicky to get people’s attention, play to their emotions, and don’t split their attention between what they see and what they hear.
I honestly cannot recommend this conference enough. Getting to be around the leaders in the technology field was an unbelievably energizing experience. I met some wonderful and inspiring people and I could feel the world changing over those five days.
Recently, we took on a nice little print mapping project for a few hotels located in downtown Madison, Wisconsin. The project involved making a one-sided, page-sized map showing hotel locations and the locations of a few points of interest in the area. The idea was that hotel guests could use the map to find their way around downtown as well as get a sense for where they were staying in relation to the university, interstates, airport, etc. The map was to be printed in grayscale, plus 3 spot colors (red, yellow, and blue).
Before starting out, we discussed the possibility of sharing the project with those interested in seeing all the stuff that goes into designing a map like this. The map design process is notoriously difficult to articulate and we’re keen on the idea of making pieces of it more transparent, where possible. One option was to screen capture the hotel map as it appeared in the production software at regular time intervals from blank page to finished product. So, here is a sequence of 116 images, originally captured at 10-minute intervals, compiled to show the evolution of the hotel map in just under 2 minutes. Clearly, not all maps are made in the same way, but this should expose some of the kinds of design decisions made in a relatively simple project like this.
Watch the larger version of Map Evolution (990 x 766px) — best for seeing change in map details.
After posting our election map last month, we received a number of excellent comments and suggestions. It’s late, but I thought I’d finally post the couple of variations of the map that I’ve managed to find time to put together. The maps below do two things differently from the original:
Vary the brightness of counties by population density rather than total population. This was a frequent suggestion. I think it has a few of its own drawbacks too, but it looks pretty good.
Different color schemes. Just for fun, I’ve used the purple color scheme that has become common in recent elections. I also liked the suggestion in one comment to saturate colors by margin of victory, so I’ve done that too. In these, full blue would be total Obama domination (Obamanation? Obamadom?), full red would be the same for McCain, and gray is an even split.
No snazzy posters this time. Just a few map snapshots.
First, the original colors mapped by population density, as posted in the comments on the original post.
The purple color scheme. First by total population:
And by population density:
Margin of victory by total population:
Margin of victory by population density:
Apologies for any trouble seeing the images. It’s tricky to find a brightness that will look right on every screen.
When we were building GeoCommons Maker!, one of the key map design challenges we faced involved producing semi-transparent choropleth maps. Choropleth maps are perhaps the most common type of thematic map and are regularly used to show data that is attached to enumeration unit boundaries, like states or counties. Ever seen a red state / blue state election map? This is a basic choropleth. There are a lot of more sophisticated ways that choropleths can be made to best represent a given data set, for example, by playing around with classification, categorization, choice of color scheme, etc., but we won’t get into those here.
I want to talk about color. Traditionally, choropleth maps are read by looking at the general pattern of unit colors and/or by matching the colors of specific map units to a legend. Other reference data is often removed from the map because it either 1) is not necessary to communicate the map’s primary message or 2) makes communicating this message more difficult. It could be argued, for example, that other reference map information, like green parks, gray airports, brown building footprints, and blue water, distracts readers from seeing the general pattern of choropleth colors on the map, which is where the map’s most important message can be found.
For GeoCommons Maker!, we wanted to allow people to make a kind of hybrid, semi-transparent choropleth map that would show both thematic data (colored choropleth map units) AND the rich reference information on popular map tiles (e.g., Google, Microsoft Virtual Earth) without sacrificing map reading and interpretation ability and confidence. We believe that there are lots of times when reference and thematic data can work extremely well together to really benefit a map’s message (e.g., a soils map that shows terrain or a vegetation map that shows elevation). So, we wanted to build this functionality into Maker!, and allow people to make maps that show the best of both worlds.
The Problem with Transparency
The fundamental problem with transparency is that the color of semi-transparent map units can shift due to the visibility of color that lies beneath them. This is not at all surprising, but can make the basic legend matching task difficult, obscure the pattern of color on the map, or just as bad, make patterns appear out of nowhere. Here’s a look at what happens to colors using the same semi-transparent choropleth map units on different backgrounds. These are screen captures from early design mock-ups for Maker!.
The first image shows (hypothetical) opaque choropleth map units with a 7-class color ramp. The next three images show the same units at 50% opacity on top of Google terrain, streets, and satellite imagery. Notice how colors shift when compared to the opaque map at top? See how lightly colored units nearly disappear on the streets map, and darkly colored units nearly disappear on the satellite map? Yikes!
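The color shift is simply standard alpha compositing at work: what the reader sees is a weighted average of the fill color and whatever tile sits beneath it. A quick Python sketch (the RGB values here are illustrative, not sampled from the actual mock-ups):

```python
def composite(fill, background, opacity):
    """Standard 'over' compositing: blend a semi-transparent fill
    onto an opaque background, channel by channel (0-255 RGB)."""
    return tuple(round(opacity * f + (1 - opacity) * b)
                 for f, b in zip(fill, background))

light_fill = (247, 247, 247)   # a light class color
dark_fill = (5, 48, 97)        # a dark class color
streets = (255, 255, 255)      # near-white streets tile
satellite = (40, 50, 35)       # dark satellite imagery

# At 50% opacity the light fill all but vanishes on a white streets tile...
print(composite(light_fill, streets, 0.5))   # (251, 251, 251)
# ...and the dark fill sinks into the dark satellite imagery:
print(composite(dark_fill, satellite, 0.5))
```

Since every class color drifts toward the background by the same formula, legend matching against the original opaque swatches is exactly what breaks down.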
The Solution to Transparency
We employed three design solutions to ensure that semi-transparent choropleth maps in Maker! would work, despite potential map reading problems: 1) unit boundaries, 2) data probing, and 3) transparency control.
1) Unit boundaries. In Maker’s choropleth maps, unit boundaries are color-coded but remain opaque, even when unit fill color is semi-transparent. This gives map users some true color information to work with, and should improve their ability and confidence to spot map patterns or match colors to a legend. In other words, while unit fill colors can get you close, unit boundaries can get you the rest of the way there.
2) Data probing. We also took advantage of a relatively common and very helpful interactive map feature known as data probing. Exact values for any choropleth map unit can be obtained by clicking on it. In Maker!, we designed the data probing feature to go one step further and give values for all of the possible attributes associated with each map unit, not just the mapped attribute alone (see the scrolly list, shown in the probing pop-up below).
3) Transparency control. Finally, we gave mapmakers a transparency control, as well as a chance to take some responsibility for how well their maps communicate. The transparency control lets mapmakers decide what works and what doesn’t. Given the huge range of possible maps that can be made with Maker!, some user controls like this are necessary (as well as being kinda fun!). Here, transparency can be adjusted for a custom fit with any chosen tile set, color scheme, or other mapped data. Settings on the control (shown below) range from 50-100% opaque.
The Best of Both Worlds
Our decision to include semi-transparent choropleth maps in Maker! should give mapmakers and map users the best of both worlds. A semi-transparent choropleth is truly a hybrid map in that it can potentially offer all the advantages of combining rich reference data (i.e., underlying tile sets) with great thematic data (i.e., overlying choropleth units). Hopefully the choropleth maps coming out of Maker! will be easy to read and good looking, too!
Update, Dec. 22: A few variations of the map technique are posted here.
We spent some of our spare time last week exploring data from the 2008 presidential election and thinking of some interesting ways to visualize it. Above is one map we put together.
One thing we sought to do was present an alternative to cartograms, which are becoming increasingly popular as post-election maps. Cartograms are typically offered as an alternative to the common red and blue maps showing which states or counties were won by each candidate, wherein one color (presently, red) dominates the map because of the more expansive—but less populated—area won by one candidate. Election cartograms such as the popular set by Mark Newman distort areas to reflect population and give a more accurate picture of the actual distribution of votes. A drawback of cartograms that we’re very aware of, however, is that in distorting areas, shapes and positions are necessarily distorted as well, sometimes to the point of making the geography virtually unrecognizable.
Our map is one suggestion of a different way to weight election results on the map while maintaining correct geography. What we’ve done is start with a simple red and blue map showing which candidate (Republican and Democrat, respectively) won each county in the lower 48 states. Then, to account for the population of those counties (or, the approximate distribution of votes), we’ve adjusted opacity. High-population counties are fully opaque while those with the lowest population are nearly invisible. Against the black background, the highest concentrations of votes stand out as the brightest.
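The opacity adjustment can be sketched in a few lines. The scaling exponent and the small floor below are assumptions for illustration; the post doesn’t give the exact formula used for the actual map:

```python
def county_opacity(population, max_population, floor=0.03, exponent=0.5):
    """Map a county's population to a fill opacity in [floor, 1.0].
    The square-root exponent and the tiny floor are illustrative
    choices, not the values used on the published map."""
    scaled = (population / max_population) ** exponent
    return max(floor, min(1.0, scaled))

# The most populous county renders fully opaque; sparsely populated
# counties are nearly invisible against the black background.
print(county_opacity(9_800_000, 9_800_000))  # 1.0
print(county_opacity(1_000, 9_800_000))      # 0.03 (clamped at the floor)
```

The same function works for the population-density variant: pass density instead of raw population and normalize against the maximum density.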
We’ll let viewers be the judge of its cartographic effectiveness, but we hope you’ll at least agree that it looks pretty cool!
Click on the image at the top of the post to view a larger version, or see it in a Zoomify viewer, or download the full size (suitable for printing).
I love ColorBrewer. All of us here at Axis rely on it almost daily, and it’s helped us make nice looking maps quickly. That’s what good tools do: they make their users look really good at their jobs.
7+ years later, ColorBrewer is due for some changes and Cindy Brewer has been kind enough to ask us to hold the scalpel. Nothing major. Same great color schemes (of course), but a new interface and some new functionality to help ColorBrewer’s 2000 visitors per week get the most out of the experience.
We’re in the early stages of planning this project but we thought we would open this up for some discussion amongst the ColorBrewer-using, Axis Maps Blog-reading masses.
QUESTION: What would you like to see in the new version? What should remain untouched? What do you love? What do you wish was done better?
Let us know your thoughts in the comments. Thanks!
This is an exciting time to be a cartographer. Cartography has changed more in the past 5 years than in the previous 50, and the field is in the midst of an unprecedented revolution that has forever altered what maps can do, and how and why we use maps. How far have we come? I now see teenagers using on-demand, customizable maps rendered in real time from multiple, distributed data sources on their cell phones that automatically geotag and upload photos to their blogs while they sit on the bus. Five years ago, heck, one year ago, this would have been science fiction; now it’s just a collection of geoservices on a $200 phone. As a result, mapping technology has quickly outpaced mapping theory and practice.
While much attention has (rightly) been focused on the technology that is enabling these amazing advances (Google Earth, mash-ups), I think the equally significant change is why people are making maps and the role maps now play in our everyday lives.
Take “pocketcasting” for example, the next step in social networking, where folks geo-broadcast their locations so they can see where their friends are at any given moment, allowing unplanned meetings (“I’m at this cafe!” as a kind of mass, voluntary geo-voyeurism). This adds a degree of instantaneous spatial awareness to our social lives that would have been impossible without the serendipitous convergence of technologies like GPS, wireless networks, and customizable on-demand maps. Other new ways the public is using maps include monitoring traffic conditions in real-time or using Google’s wonderful streetview to check out a potential new home virtually. One thing is clear: Maps have become fully integrated into the fabric of our lives in ways we couldn’t have imagined a few years ago.
Beyond the popularity of these maps, however, has been the complete blurring of the distinctions between map maker and map reader, data provider and data user. It is precisely this tectonic shift in the world of cartography that underlies the philosophy of GeoCommons Maker!, the product we’ve been jointly developing with the powerhouse team at FortiusOne, described by the O’Reilly Radar as “a Flickr/Swivel/YouTube/Scribd of geodata.” Maker is at the vanguard of the democratization of cartography and the promise of Web 2.0 services that eliminate the need for expensive software/data for most casual ‘citizen cartographers’ and allow people to make great looking maps quickly while guiding them through the process. We here at Axis Maps feel strongly that powerful tools (e.g., desktop GIS) aren’t much good if they don’t provide guidance – it’s like giving the keys of an F-16 to someone who doesn’t know how to fly. Furthermore, while an F-16 is amazing, few folks actually need one. Same with $30,000 mapping software.
One of the reasons we like Maker! is that it empowers people – who otherwise would never be able to participate – to make their own maps and start publishing, sharing, and commenting on geographic data and the things we learn from those data. High-end, professional cartography is not going to disappear, and the world will always need premium map products (such as National Geographic Atlases or legally-binding land surveys). The same is true of professional authors and photographers; neither blogging nor Flickr have eliminated the need for these professionals, rather they have opened up these activities to a much larger group and drawn people into the process, rather than relegate them merely to being spectators to the process.
One thing is clear: As the GeoWeb/Web2.0 revolution continues, we need to move beyond paper map thinking and start seeing maps much more broadly as services that can be integrated with other services. As a professional cartographer this means to me that the “rules” of cartography established through a century of study and practice are now up for grabs at the very moment mapping finds itself in a multi-billion dollar spotlight from both the private and public sectors. Some of the biggest companies in the world (Google, Microsoft, Yahoo!) are betting a big chunk of their digital future on maps and the central role they want cartography to play in their digital empires. With the backing of these companies, digital, on-demand maps have gone from technological curiosities to everyday tools worth billions of dollars. This begs the question: Where is mapping headed and what might our maps do for us in 10 years?
Further questions we need to think carefully about (these are the sorts of questions that keep us up at night!!)…
How much of what we have learned about static maps—both in practice and theory—holds true when these maps become animated, interactive, and customizable?
What are the relative merits of 2D versus 3D?
How do we keep users from becoming disoriented and lost in 3D immersive maps?
What are the perceptual limits of animation and for what kinds of map reading tasks (e.g., rate estimation, change detection) are animated maps especially well-suited (and how could those tasks be better supported)?
How can we reduce the problem of “split attention” in immersive and visually-rich environments like Google Earth?
How can we create intelligent Web-based software that is both easy to use and powerful? To what degree can the map-design process be automated to further the democratization of map-making? How can we help novices to think like experts?
What should our map interfaces look like and why? How does the map interface structure the user’s experience? How do we know if our map interfaces work?
Who benefits from these billions being invested in mapping?
How does this technology change the way we do business and the way we interact with each other?
What are the limitations and liabilities of decentralized data structures and technologies that run on volunteered geographic data?