Quick tip about JavaScript architecture

Most of the work we do here at Axis involves coding JavaScript interactive maps. Frequently, the map will have elements controlled by UI components, charts, timelines, or some combination of these. Calling each of the map's update functions directly whenever a UI component (or chart, timeline, etc.) is interacted with can get convoluted fast if you aren't careful.

If you are fairly new to coding, you may not be aware that code can have design patterns. Just like there are cartographic principles that help with everything from layout to typography to color, there are coding principles that help with code clarity, maintainability, and efficiency.

A good book to refer to on JavaScript design patterns is Learning JavaScript Design Patterns by Addy Osmani. The particular pattern I recently found useful is Publish/Subscribe (covered under the Observer pattern in that book). There are a couple of ways to implement it, but since we use jQuery in most everything anyway, I'll talk about jQuery's implementation.

jQuery has the ability to register and listen for custom events. How this works in practice is fairly easy. First you need to sort out what events are actually going to happen. These aren’t events like ‘click’, but custom events like ‘yearchange’ or ‘dataupdated’ or ‘mapmove’. Once you have that set, whenever you create a component that is going to be affected by one of these custom events, you register it on the document.

function createMap() {
  // Create map code

  // Subscribe to the custom event; jQuery passes the event object first,
  // then any data sent along with .trigger()
  $(document).on('yearchange:', function(event, newyear) {
    // update map with new year information
  });
}

Then on any and all UI elements that affect the year (menus, animation controls, etc…), all you need to do is call $(document).trigger('yearchange:', newyear). The great thing about this is that you can attach listeners on multiple different components (e.g. maybe the timeline and the chart and the map all get updated with a yearchange) and the UI element doesn’t need to know a thing about any of them.
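
As a minimal sketch of how this plays out (the element IDs and update functions here are hypothetical, not from any particular project), a year menu only fires the event, and each component subscribes independently:

// publisher: a UI control that changes the year
$('#year-menu').on('change', function() {
  $(document).trigger('yearchange:', +this.value);
});

// subscribers: any component that cares about the year
$(document).on('yearchange:', function(event, newyear) {
  updateTimeline(newyear);   // hypothetical update functions
});
$(document).on('yearchange:', function(event, newyear) {
  updateChart(newyear);
});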

Couple of notes:

  • There is a colon after the word yearchange. That is completely optional, but it is a way to future-proof your custom event on the off chance that jQuery (or a jQuery plugin) decides to use the same event name. You could also achieve this by using slashes to create a nesting effect (e.g. /yearchange, /yearchange/increase, /yearchange/decrease, etc…), which can be helpful for grouping events if you have a lot of them.
  • Why register on the document element? Because of the way JavaScript events bubble up through the DOM, you can always be sure that if the listener is on document, it's going to be heard; plus it keeps a separation of concerns (another code design principle). If you put the listener on the map object, for example, then the UI element needs to know where the listener is, defeating the entire point of this system: that the UI doesn't need to know anything about the other components.

That’s it. If you have several different components being updated or changed by several other components, consider the Pub/Sub pattern, and if you are already using jQuery in your project, consider using custom events.

D3 web maps for static cartography production

We’re a mapping company that’s most at home on the web. So when a more traditional static cartography job came along, we even took that to the web.

The ongoing project involves producing a set of four maps for each of thirty-three different countries, showing GDP along with flood and earthquake risk. To make 132 maps, clearly we need to set up a good, easily-repeatable workflow. Because the map themes and symbologies are the same for each country, essentially we need a set of templates into which we can throw some data and get an appropriately styled map that we can export for refinement in Illustrator, and finally layout in InDesign.

Ordinarily that’s where GIS software like QGIS or ArcGIS comes in. But using GIS can be a slow, repetitive task. In the interest of creating a faster process suited to our purpose specifically, we thought it would be easier to develop our own web-based tool to handle the parts of the workflow normally handled by GIS. Fortunately, it’s easy to do such a thing with D3.

Here’s a shot of the tool. It looks rough, certainly, but it does its job: quickly get a country map as close as possible to final form, ready to be dropped into our Illustrator and InDesign templates.

D3 tool for static maps

In contrast to GIS, there are only a couple of moving parts here. All we have to do is select a country, adjust the map position and scale, and choose the map theme. Everything else is automated. For me, someone who usually accomplishes cartography via web development, it was a lot easier and faster to build this simple customized tool than to figure out how to get GIS to do exactly what we want. It’s also easy to keep up with any change requests, which usually only mean a few lines of code.

Below is a bit more about how this thing works, which leads to some tips we’ve learned about using a D3 web map in a static cartography workflow.


Data here are in two forms: TopoJSON for the map layers (boundaries, cities, lakes, etc.) and CSV for the numeric data. All are loaded at runtime, and the CSV is joined to the Admin 1 (province-level) geodata. We avoided pre-joining the CSV to the shape data because that would make updates more complicated, i.e., saving a new TopoJSON file for any little data change.
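
As a rough sketch of that load-and-join step (the file paths, the admin1 object name, and fields like adm1_code and gdp are placeholders, not the project's actual schema), it might look something like this with D3:

d3.json('data/country.topojson', function(error, topology) {
  d3.csv('data/country-data.csv', function(error, rows) {
    // index the CSV rows by their Admin 1 code
    var valuesById = {};
    rows.forEach(function(row) {
      valuesById[row.adm1_code] = +row.gdp;
    });

    // attach the CSV value to each Admin 1 feature at runtime
    var admin1 = topojson.feature(topology, topology.objects.admin1).features;
    admin1.forEach(function(feature) {
      feature.properties.gdp = valuesById[feature.properties.adm1_code];
    });

    drawMap(admin1); // hypothetical drawing function
  });
});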

Page setup

It’s important to get sizes and positioning exactly right so that the exported map drops perfectly into our templates without the need for any transformations. Our maps are A4 size and we’re working in millimeters, but of course in the browser we need pixel dimensions. It works out to be 841.89 x 595.28 pixels. Illustrator converts measurements at 72 pixels per inch, which is a lower resolution than modern screens, and thus the map appears suspiciously small in the browser—but don’t worry, it all turns out right in the end!
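
The conversion itself is just millimeters to inches to pixels at Illustrator's 72 ppi; a quick sanity check in the console:

// A4 landscape is 297 x 210 mm; Illustrator assumes 72 pixels per inch
var pxPerMm = 72 / 25.4;
var width  = 297 * pxPerMm;  // ≈ 841.89 px
var height = 210 * pxPerMm;  // ≈ 595.28 px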

It’s also important to note the lat/lon bounds of the map in case outside geodata needs to match. In this case we generate a hillshade separately using GDAL. The extent values listed above the map—easily derived from D3’s projection.invert() method—plug right into the GDAL commands, producing a hillshade image that will fit perfectly on this map.
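
Deriving those bounds is just a matter of inverting the projection at the page corners (a sketch; projection, width, and height are assumed to be the projection and pixel dimensions set up above):

// projection.invert() maps pixel coordinates back to [longitude, latitude]
var topLeft     = projection.invert([0, 0]);
var bottomRight = projection.invert([width, height]);

var west  = topLeft[0],     north = topLeft[1];
var east  = bottomRight[0], south = bottomRight[1];
// these extent values plug into the GDAL commands mentioned above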

Map layout

D3 really shines at projecting and drawing geographic data, and this tool takes full advantage of that. It goes through the standard routine of drawing GeoJSON data to SVG paths, putting each data layer into a named <g> element that will later show up as a layer in Illustrator. We use the Mercator projection (sorry, cartographic hard-liners, it was by request) and use D3’s d3.geo.centroid() and d3.geo.bounds() functions to center the map on the country of interest and scale it approximately to fit. After that we can manually adjust position or scale to get the layout just right, fitting the country into some guides we added on top of the map.
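
One common way to do that initial center-and-scale step (a sketch, not necessarily the exact code used here; country is the country's GeoJSON feature and width/height are the page dimensions) is to project with a unit projection first and then fit the feature's pixel bounds to the page:

// start with an unscaled Mercator projection
var projection = d3.geo.mercator().scale(1).translate([0, 0]);
var path = d3.geo.path().projection(projection);

// pixel bounds of the country feature at scale 1
var b = path.bounds(country),
    s = 0.9 / Math.max((b[1][0] - b[0][0]) / width, (b[1][1] - b[0][1]) / height),
    t = [(width - s * (b[1][0] + b[0][0])) / 2, (height - s * (b[1][1] + b[0][1])) / 2];

// roughly fit the country, leaving a 10% margin; tweak manually from there
projection.scale(s).translate(t);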

Importantly, in addition to the standard map-drawing code, we use the projection’s clipExtent() method to clip everything to the page extent. Otherwise, when imported to Illustrator, we’d have a whole world’s worth of extraneous paths outside the artboard. While setting the clipExtent is enough to chop off polygons and anything drawn directly with the path generator, hiding labels takes a little extra effort. We have to check manually whether they’re outside the clip extent and hide them if so.

  // chained onto the selection of label <text> elements; with clipExtent() set,
  // path.centroid() returns NaN for features that fall outside the page
  .each( function(d){
    var c = path.centroid(d);
    if ( !isNaN(c[0]) && !isNaN(c[1]) ){
      d3.select(this).attr({
        x: c[0],
        y: c[1],
        opacity: 1
      });
    } else {
      d3.select(this).attr('opacity', 0); // hide labels outside the clip extent
    }
  })
Thematic symbols

D3’s scale functions really come in handy for these maps’ thematic layers. For the most part, we can create scales that have a constant range (such as the purple color scheme or the proportional symbol size) and set the domain according to a particular country’s data as needed. GDP data (the choropleth map) is classified using the Jenks algorithm in Simple Statistics.
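
A sketch of how those pieces fit together (the admin1 features, field names, class count, and color values are placeholders):

var values = admin1.map(function(d){ return d.properties.gdp; });

// Jenks breaks from Simple Statistics, then a threshold color scale
var breaks = ss.jenks(values, 5);               // returns min, 4 interior breaks, max
var color  = d3.scale.threshold()
    .domain(breaks.slice(1, -1))                // interior breaks only
    .range(['#f2f0f7', '#cbc9e2', '#9e9ac8', '#756bb1', '#54278f']);

// constant symbol-size range, per-country domain
var radius = d3.scale.sqrt()
    .domain([0, d3.max(values)])
    .range([0, 20]);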

For the most part we leave legends to the Illustrator end of things. Values of the choropleth class breaks are simply listed in the containing HTML page, and then entered into the template in Illustrator. We do draw a simple proportional symbol legend because while the maximum size is constant, the size of good sample values (such as 1 or 5) changes from map to map. Flood and earthquake maps use bar chart symbols that, like the choropleth map, simply get some explanation in text.

Flood symbols

Mapping via code provides a big advantage if you’re looking for complex symbols. D3 in particular makes it easy to draw symbols tied to data—with almost total freedom of form.


Luckily, there’s not a lot to explain here. SVG Crowbar, created by New York Times Graphics people specifically with D3 in mind, does a great job of exporting the SVG map to a layered file ready to use with Illustrator. We just have to make sure our various map layers are properly grouped and have id attributes, and we end up with a nice and neat file.

Map layers

A command line

Finally, a nice thing about doing the basic map work in the browser is that it’s easy for a web developer to do advanced things by entering code right in the console, without having to be familiar with more specialized code that might be used in GIS or other tools. We didn’t do much of that here, other than as a shortcut to set zoom levels, but theoretically we could invoke, change, or extend any other functions of the page to do things that might be needed in unique cases.

Adjusting the map in the console

Coder-friendly mapping

In closing, it merits noting that this kind of workflow might be good for us (and people like us) because JavaScript is much of what we do on a day-to-day basis. Everything that I’ve said is “easy” with D3 is, well, very difficult if you’re unfamiliar with this kind of coding. A good philosophy and habit is to use the right tool for the job, but it’s also wise, for expediency’s sake, not to dismiss just using the tool that you know. Sometimes the right tool is the easy tool.

Cartography with an actual blindfold

Last month, Andy gave a talk at the OpenVis Conference entitled Blindfolded Cartography: essentially, things to look out for when designing maps (especially interactive ones) so that when real data comes in, the map/design/page doesn’t get wonky. Things like overly long text, missing data, skewed data, etc… Over the last week or so, we’ve been working on adding accessibility features to one of our projects and wanted to share a few thoughts and lessons on cartography with an actual blindfold.

First off, what are accessibility features and why should you care? Accessibility features are features and general design patterns that allow people with disabilities to view and interact with your content. This can range from simply allowing keyboard navigation all the way up to full screen reader support. Now, why should you care? We can get fairly pedantic about whether a font size should be 16pt or 18pt, whether this light brown text should be #f2e1cb or #f8f1e7, whether this div should have a margin of 20px or 22px. If we as developers take that much care over things that a lot of users aren’t going to consciously notice, shouldn’t we take at least some care with things that some of our users are really, really going to notice?

So what categories of accessibility should you be aware of? There are four areas I’m going to talk about; I’m sure there are more, but these cover the majority of cases that I’m currently aware of: color, keyboard navigation, ARIA, and screen readers. Also, I should note, I am not an expert in this. What follows are the notes of a developer brand new to the area of accessibility.


Color can cover a whole host of issues, but the main areas to watch out for are color blindness and low contrast. About 10% of all males have some form of color blindness, of which the most prevalent forms are red/green (protan and deutan) color blindness. That’s a big percentage of your users. Simple fix: don’t put red and green right next to each other. As a mapping company trained in classical cartography techniques, this one is pretty much embedded in our blood and bones. Other forms of color blindness do exist, though. It’s up to you to decide how many you want to design for (since the more you cover, the fewer options you have for design). A good site for figuring out which colors you can use is Colorlab.

Color lab example image - magenta for red green color blind people
Magenta vs Gray for Red/Green color blind people

A color tool designed specifically for cartography, and which has colorblind-safe options, is ColorBrewer (which Axis Maps hosts).

Color brewer example image
Color Brewer

The other major category related to vision is contrast. Many users have a harder time distinguishing low contrast color pairings. For example, #ccc color text on a #999 background. You can check contrast between colors using the Color Contrast Check tool.

Contrast checker image - #999 vs #ccc
Contrast Checker – #999 vs #ccc
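
If you want to check programmatically, the WCAG contrast ratio is simple to compute yourself (a minimal sketch of the standard formula, not tied to any particular tool):

// relative luminance of an sRGB color given as [r, g, b] with 0-255 channels
function luminance(rgb) {
  var channels = rgb.map(function(c) {
    c = c / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * channels[0] + 0.7152 * channels[1] + 0.0722 * channels[2];
}

// WCAG contrast ratio between two colors
function contrastRatio(rgb1, rgb2) {
  var l1 = luminance(rgb1), l2 = luminance(rgb2);
  return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
}

contrastRatio([204, 204, 204], [153, 153, 153]); // ≈ 1.8, well below the 4.5:1 AA minimum for normal text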

Testing: The best way to test for color blindness issues is to have someone who is actually color blind look at the site (and since 1 out of 10 men are, there is a good chance you know someone who is). Barring that, run the various colors through Colorlab and the Color Contrast Check tool.


There are two types of people who use the keyboard only: the visually impaired, and those with a disability that makes using a mouse impossible (for those of you wondering, as I did, how you can use a keyboard but not a mouse, many alternate input devices emulate a keyboard but not a mouse, e.g. sip-and-puff devices).

There are a couple of key points to keep in mind when designing for keyboard users. Most browsers are already pretty good at keyboard navigation, IF the HTML is organized in a logical order. This usually isn’t a problem when writing HTML directly; where you can run into issues is with adding pieces via JavaScript. It’s really easy to add a bunch of divs with JS and, as long as they appear in the right place, ignore the fact that they are not keyboard navigable. One major gotcha: a div is not keyboard navigable by default. You have to add the attribute tabindex="0" to make it navigable. a links and button elements are keyboard navigable by default, so they don’t need an extra tabindex attribute. Best practice is to make all clickable areas that go somewhere a links and all clickable areas that trigger functionality button elements. Unless absolutely necessary, don’t use div for buttons/links.

tabindex="0" simply adds the element to the keyboard navigation index. Sometimes, especially when adding elements via JavaScript, the default tab order ends up not being a logical one. In these cases, you can override the tab order by changing the tabindex to any positive numerical value. A tabindex="1" will come before tabindex="2" and so on. This is not considered good practice, but in production it sometimes ends up being the only way short of a complete rewrite of the code.
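
As a small sketch of the difference in practice (the selectors and the openLegend handler are made up for illustration):

// a div added with JS: it looks right, but the keyboard can never reach it
var $legendToggle = $( '<div class="open-legend">Legend</div>' )
  .appendTo( '#controls' )
  .on( 'click', openLegend );

// quick fix: put it in the tab order and handle the keyboard too
$legendToggle.attr( 'tabindex', 0 ).on( 'keydown', function( e ) {
  if ( e.which === 13 || e.which === 32 ) openLegend(); // Enter or Space
});

// better: use a real button, which is focusable and keyboard-activated for free
$( '<button class="open-legend">Legend</button>' )
  .appendTo( '#controls' )
  .on( 'click', openLegend );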

Testing: Keyboard navigation is really easy to test. Just use the site with only the keyboard. Can you access all the functionality? Can you get to all the links and do they work by pressing space or enter? Are all the buttons/divs/links/etc… in a clear and logical tab order?


ARIA stands for “Accessible Rich Internet Applications” and is designed to let screen readers and other assistive software make interactive pages usable by all users. ARIA tags are usually element attributes such as aria-hidden and role. Like other element attributes such as id and src, they exist for pretty much every element and apply differently depending on context. I’ll mention a few that are good to know about:

  • aria-hidden – makes a particular element hidden from screen readers. This is useful if, for example, you have a description that is truncated for display reasons, but is expandable through mouse interaction.

Two divs. One with a class of hidden making it all but invisible to the naked eye, yet still there. The other one is the normal div that everybody sees, but with an aria-hidden="true" attribute to hide it from screen readers so they don’t repeat themselves.

<div class="full-description hidden">Blah blah, this is the full description and is not truncated with ellipsis</div>
<div class="description">Blah blah...</div>

Hidden class that reduces the element to 1px by 1px. The clip and overflow: hidden hides everything that goes out of the 1px by 1px box.

.hidden {
  position: absolute !important;
  margin: 0 !important;
  padding: 0 !important;
  height: 1px;
  width: 1px;
  clip: rect(1px 1px 1px 1px); /* IE6, IE7 */
  clip: rect(1px, 1px, 1px, 1px);
  overflow: hidden;
}
  • As seen above, the use of the .hidden class is very important. This allows screen readers to “see” something, even though people can’t see it.
  • Using a hidden class like this is different from using display: none or visibility: hidden. Changing the display to none or the visibility to hidden takes the element out of the screen reader’s view.
  • role (see the ARIA roles page at the W3C) – this one can get complicated as there are four different categories that can be used, but the idea is that you can tell the screen reader what the element is for. The four categories are:
    1. Abstract Roles – Do not use.
    2. Widget Roles – Define what the element is to be used for. This is helpful when, for example, you have a div operating as a button. Most of the time you should just mark it up as a button, but if you can’t, then you can assign role="button" and it acts as a button to screen readers.
    3. Document Structure – Defines what structure the element has. This is so the screen reader can take special care with the pronunciation and navigation of that element. For example, the math role tells the screen reader that mathematical formulas are coming and to take appropriate action (such as saying “eh-ex-squared” instead of “ax-squared” when ax² is encountered).
    4. Landmark Roles – Define the different areas of the document (such as navigation, main, search, etc…) so the screen reader can jump between them easily.
  • alt tags – This is pretty well known, but I’ll reiterate. Put descriptive alt text on all images so that the screen reader knows what the picture is about. Make sure it’s short, though; no need to wax eloquent on all the little details of the logo.
  • links – This is probably the one I’m most guilty of. Don’t have links where the only linked text is “Click here.” In context, that’s fine, but out of context it doesn’t make any sense. And many screen reader users navigate by pulling up a list of links in which only the text of the actual link is available, so links very well may be spoken out of context. Don’t say “click here”; say “Link to Annual Report PDF”. Oh, and don’t leave the full URL as the visible link text or the screen reader will read “double-u, double-u, double-u, dot…”.

Testing: Testing ARIA tags is mostly about testing things with a screen reader. The testing part of the Screen Reader section below has more information.

Screen Readers

And now to the most well-known accessibility genre: the screen reader. Screen readers work by reading off the text from a web page. How they do this differs for each one and can devolve into a situation similar to the early 2000s with web browsers – each one its own special flower. There is some standardization and it is getting better, but you will still run into situations where different screen readers read the same markup differently (especially when JavaScript gets involved).

The four main ones relevant to interactive web sites are JAWS, NVDA, VoiceOver, and ChromeVox. JAWS, the most widely used screen reader, is a commercially licensed piece of software for Windows, especially popular in academia and government. NVDA is an open-source program for Windows. VoiceOver belongs to Apple and is bundled with OS X. ChromeVox is a Google-created plugin for Chrome that is really helpful for developers (though because of its integration with Chrome, it does create some headaches, as seen below).

Each screen reader sees the world slightly differently. This can create some head-bashing moments, as the notes below make clear. They were filed as part of a GitHub bug report for the accessibility upgrade I was working on this week. The page in question had a long list of images where each one, on click, would open up a carousel widget to view more images from that spot. The carousel widget had left and right HTML buttons that were supposed to be navigable by keyboard arrows, mouse clicks on display arrows, and keyboard “enter” when the focus was on a display arrow.

Tabbing through list

  • Chrome – reads each section when the section is focused, reads header
  • FF – reads all the sections when the entire list is focused (i.e. on normal tabbing through), reads each section when focused, does not read header when tabbed to, but does read header when shift-tabbed to
  • IE – reads each section when the section is focused, reads header


  • Chrome – does not read the arrow aria-label, reads the title and description of each carousel image after interacting with the arrow
  • FF – reads the arrow aria-label, when the focus is shifted to a new arrow (i.e. at the beginning or end when the current arrow is hidden) reads the new arrow, otherwise, doesn’t read anything on interaction
  • IE – same as FF

Screenshot of the carousel with the problem arrows

For the life of me, I couldn’t get Chrome to read the aria-label on the arrows, until I took the ChromeVox tutorial and realized that screen readers don’t just use tab to navigate. They use the arrow keys, the plus/minus keys, and a whole host of other shortcut keys. Tab is just for interactive elements (and apparently Chrome/ChromeVox was not reading the aria-label for interactive elements). This is by design. Keyboard-only users just need tab to move through the interactive elements; they can already read the non-interactive ones. Screen readers use other keys to move through non-interactive elements such as paragraph blocks.

Which brings me to the final point: screen readers are complicated pieces of software that work in ways that are conceptually very different from how the majority of users see and interact with a web site. The best way to test your site is to get someone who knows how to use a screen reader properly to test it. Failing that, make sure that you are at least somewhat versed in how your chosen screen reader actually works. It’s more than just tabs.

Testing: ChromeVox is a free plugin for Chrome that is really easy to set up and use. Make sure that you go through the tutorial first, though, as there are facets of screen reader navigation that are not necessarily intuitive for people used to mouse navigation. VoiceOver is included with OS X, so that should be easy enough to include in testing if you are using a Mac. NVDA is free software as well, though for Windows. By using one or all of these, you should catch most of the bugs that will affect screen readers. Things to look for: can you hear all the content? Is anything behind an “interactive wall” that is not accessible with keyboard/screen reader? Do all the images have alt text so you know what is going on without seeing the actual image?


Accessibility concerns affect a small but significant portion of users. If things are designed with them in mind, that can only be a good thing. Many of the above suggestions, with the possible exception of catering to color blindness, shouldn’t change a single thing about how your page looks, feels, and interacts. But by following them, more people will be able to use and enjoy whatever content is there.

And if you really want to get a handle on how this stuff works in practice, do as Mercator did and use an actual blindfold:

Blindfolded Mercator

Blindfolded Cartography

This is an adaptation of a talk I gave at the 2015 OpenVis Conference in Boston, expanded in a few spots and abbreviated in others. You can see slides and, ugh, video from the talk.

At Axis Maps, a rough napkin sketch of our projects often looks like this:

Map viewer sketch

We’re tasked with designing and building interactive maps of data that, for one reason or another, we can’t yet see. Sometimes the client doesn’t have data ready soon enough. Sometimes we expect data to change or be added after our work is done. Sometimes it’s just too vast for us to lay eyes on all of it. Whatever the reason, this reality of web mapping is a significant departure from what we knew back in cartography school.

Traditional cartography allows—encourages, perhaps—perfectionism. We were taught attention to detail, and to find and tell the story in our data. Crafting beautiful, effective maps and telling stories with them is one thing when you can obsess over every detail; it’s quite another when you have to do it as though blindfolded. (And I don’t mean sea monsters and mapping unknowns; I mean mapping real things that are known but somehow inaccessible to the cartographer.)


As always in cartography, this is all about making compromises. We make design choices that we know are not ideal for most cases, but are at least acceptable for all cases. What follows is a list of some areas where we’ve found the “blindfold” particularly challenging, and some compromises that we or others have found helpful. And an occasional shrug where we haven’t found good answers yet. Much of this is about design, some of it is about code (JavaScript), some of it isn’t even about maps, and none of it is perfect.

Data classification

John Nelson has a succinctly handy explanation and demonstration of why data classification decisions are important on choropleth maps. One way of dividing data into bins can produce a starkly different map from another way, potentially in a functionally and aesthetically bad way.

Data classification example by John Nelson
Data classification example by John Nelson

We want to devise a classification scheme around several questions: Will the map be useful and look good? Are the class breaks meaningful? Are they understandable? Are they pretty numbers? As usual, this isn’t too difficult when dealing with a single data distribution. But we need satisfactory answers to those questions for any data set, without our manual intervention. And real data… well, real data are always messy.

Data in designs vs data in reality

Two common solutions are Jenks optimal breaks, where class breaks are calculated to optimize both grouping and spacing, and quantiles, in which each bin contains the same number of values. Both offer some assurances of visible spatial patterns, but both have drawbacks. For example, optimal breaks can be hard to understand, and quantiles can inappropriately group or split values.

Optimal breaks and quartiles

One compromise we’ve used can be described in two parts:

1. Unevenly distributed percentile breaks. Quantiles are friendly, readable, and somewhat reliable, so we often err toward them. Instead of dividing data into evenly sized classes, though, we might break down some of the classes further to separate or highlight extremes or important breakpoints, for example the top 5% in the histogram below.

A compromise classification

2. Use percentiles of unique values, not all values. Datasets often contain many duplicate values; when quantiles are calculated properly, a single value could span several classes. (A common example is having a whole bunch of zero values.) To avoid weird map legends and to better bring out geographic patterns, we might base our classification on the distribution of only unique values, avoiding duplicates.

// basic example
function getValueAtPercentile( data, percentile ){
  return data[ parseInt( (data.length - 1) * percentile ) ];
}

// using underscore.js
var dataArray = _.sortBy( [ 0, 0, 12, 23, 2, 5, 0, 5, 19, 0, 0, 0, 33, 9, 25, 0 ], Number );
var uniques = _.uniq( dataArray, true );
getValueAtPercentile( dataArray, .25 ); // = 0
getValueAtPercentile( uniques, .25 );   // = 5


Expect holes in the data. Assume that sometimes data will be bad or missing. Catch these missing values in code, and, importantly, design for them. “No data” is data too.

Andy Kirk gave a talk called “The Design of Nothing: Null, Zero, Blank” at the 2014 OpenVis Conference, covering (among other things) some no-data scenarios. It’s worth a look:

There are two sides to handling missing data. On one side is code that won’t break if a null value is thrown at it. Basically this means a lot of error handling and bailing out of functions if they’re passed null values. But it also requires some understanding of the data format. A missing value could come through in a variety of ways.

// no data might be...
// null, undefined, NaN, "", "N/A", -9999
// &c. &c.

Be careful to avoid the JavaScript gotcha of equating zero with “no data”—usually not the same thing in reality.

// a tempting way to catch "no data" in javascript
if ( data ){
  // yay, we have data!
}

// but watch out!
var data = 0; // zero is real data
if ( data ){
  // zero won't get us in here :(
}

The other side is design. One common scenario is missing data in a choropleth map. We like using texture to indicate missing data. It’s distinct, not easily confused with colored values; and it’s explicit, keeping “no data” as part of the story instead of hiding it. If you’re working with D3, check out Textures.js for easy texturing.

choropleth textures
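
With Textures.js the no-data fill is only a few lines (a sketch; the pattern settings, the counties selection, and the color scale are placeholders):

// define a diagonal-line texture and register it with the SVG
var noData = textures.lines()
    .size(6)
    .strokeWidth(1)
    .stroke('#bbb');
svg.call(noData);

// use it as the fill for features with missing values
counties.style('fill', function(d) {
  return d.value == null ? noData.url() : color(d.value);
});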

Another common case is gaps in time series data. Again, it’s good to be explicit. Interpolation can be misleading. For example, I like the dashed line in the iOS Health app. It explicitly indicates a gap in data without losing continuity.

iOS health - gaps in data

On maps, one thing we’ve tried is using a special symbol to indicate a gap in data. Below is a frame from an interactive animated map where proportional symbols remain on the map even when there’s no data for the current year, showing the most recent available data but with a different, hollow symbol.

Gaps in time series data on a map


Text in an interactive map setting is more than just labels. Marty Elmer has written nicely about prose on maps, and how it’s ubiquitous yet overlooked. Text, too, can be an unknown part of map data—one that can ruin otherwise beautiful layouts.

The big lesson from experience here is to restrict the space allotted to text. If you’ve designed for a short sentence, assume you’ll be fed a novel, so make sure text doesn’t overflow all over everything. CSS properties like max-height and overflow:auto can help. For shorter labels, truncation and abbreviation (such as via text-overflow:ellipsis) may be useful, but also take advantage of what you do know about the data to be clever about this. For example, in the chart below we knew that some labels would be for congressional districts. If we simply chopped off those labels after a few characters, they’d all be the same, so instead we put the ellipsis in the middle and retained the district number at the end.

Label abbreviations
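
A rough sketch of that middle-ellipsis idea (the helper function and the lengths are made up for illustration):

// keep the start and the end of the label, drop the middle
function truncateMiddle( label, maxLength, keepAtEnd ){
  if ( label.length <= maxLength ) return label;
  var tail = label.slice( -keepAtEnd );
  var head = label.slice( 0, maxLength - keepAtEnd - 1 );
  return head + '…' + tail;
}

truncateMiddle( 'Congressional District 7', 16, 4 );  // "Congression…ct 7"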

One last thing: mind your number formatting! It’s easy to forget to design label formats for things like singular versus plural or different orders of magnitude, in which case you end up with some funny-looking labels.

Bad number formatting
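
A couple of small guards go a long way here (a sketch using D3's formatter; the unit labels are hypothetical):

var bigNumber = d3.format( ',.0f' );   // 1234567 -> "1,234,567"

function formatCount( value ){
  // handle singular vs. plural so we never print "1 people"
  var unit = value === 1 ? 'person' : 'people';
  return bigNumber( value ) + ' ' + unit;
}

formatCount( 1 );       // "1 person"
formatCount( 12500 );   // "12,500 people"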

The Lorem Ipsum Map

All of this, broadly, is about the challenge of designing around missing or fake data. The “lorem ipsum map” is a phrase used by Rob Roth and Mark Harrower (2008), as a caution against designing user interfaces around a placeholder map. In studying the usability of a map we made for the UW-Madison Lakeshore Nature Preserve, they found some negative reactions to the cold, metallic UI (which we designed first), in contrast to the green and fuzzy map (which we didn’t add until later). It’s hard to offer solid advice about this, other than to suggest trying to understand at least the basic nature of the unknown data and what it might look like. ¯\_(ツ)_/¯

Instead of designing around a blank spot, however, we often generate fake data. That of course can be a challenge itself, because how do we design somewhat realistic fake data when we don’t know what the real data look like? Carolyn Fish has a good term for this goal—“smart dummy data”—and an interesting anecdote about challenges she faced trying to make OpenStreetMap data stand in for totally inaccessible (to her) classified data.

All that said, fake data can be an excellent test of code. In some ways, the more unrealistic the better. If the code can handle ridiculous data scenarios, it can probably handle the real data.

The Human Touch

Big picture time. The ultimate goal, really, is to approximate good, human design when we’re forced to do it through rules and algorithms applied to unknown data. I’m going to keep pointing to Daniel Huffman to explain why this matters, at least until he makes good on his promises to follow up on four-year-old articles.

To illustrate how enormous a task handmade design can be in the web cartography age, consider The Essential Geography of the United States of America by David Imus. You probably saw this map a few years ago when a Slate writer declared it “The Greatest Paper Map of the United States You’ll Ever See.” Indeed, it’s a lovely map with tons of manual details.

The Essential Geography of the United States of America

The map is 1:4,000,000 scale, and according to the Slate article, it took nearly 6,000 hours to complete. 1:4,000,000 is approximately zoom level 7 in the standard slippy map scheme. Just for kicks, let’s extrapolate and say David Imus wanted to design a fully multiscale web map (zoom levels 0 to 18) with the same attention to detail. How long would it take him? I’ll spare you the details of my calculation: it’s 2,803 years and 222 days. And that’s only the United States. Im-freaking-possible, in other words. That’s why we need algorithms.

Big Design

This gets into what I want to call “big design.” So-called “big data” isn’t just a computational challenge; it’s a design problem too. How do we accomplish good human design when there’s simply too much data for a human to see?

The problem is perhaps unique to cartography, where at some level we want to see almost every piece of data we have, not aggregates and distillations. For most of us that means a map based on OpenStreetMap data down to the highest detail levels. With 2 million users mapping all over the world, quality and consistency can’t be 100% guaranteed. Our challenge is to come up with design rules that ensure the map looks pretty good everywhere, at all scales. Yikes.

Nicki Dlugash of Mapbox spoke to this very topic at NACIS 2014. To summarize her advice:

  1. Prioritize areas you care about for your specific use case.
  2. Prioritize the most typical or common examples of a feature.
  3. Look for the most atypical examples of a feature to develop a good compromise.

To help you along the way, there are things like taginfo, where you can explore OSM tags and get a sense of things like geographic variation, or views like this one in Mapbox Studio where you can inspect many places at once.

Mapbox Studio "places" view

And of course the nice thing about OpenStreetMap is that you can always fix the data if you find problems!

Good Luck

Well, I hope that assortment of tips and problems has been useful. Good luck as you stumble through the dark!

Blindfolded Mercator

Three mobile map design workarounds

We recently built a map for Napa Valley Vintners, a nonprofit trade association with 500+ members located in one of the most well known wine growing regions in the world. The map is meant to help people locate wineries, plan a visit to the region, and then send directions and a trip itinerary to a phone for easy navigation while driving. Two versions of the web map were developed, one for the desktop and another for mobile phones. We ran into a couple of issues, particularly while working on the mobile version, that were solved with a design workaround or three. They’re relatively small things, but should have a positive impact on how users experience the map.

Browser chrome – minimize before entry

The browser chrome is all the stuff that appears around a window, taking away screen space from what you’re trying to look at. On a phone, when maximized, it usually appears as an address bar at the top and tool bar at the bottom for various things like page navigation, sharing and bookmarking. The bottom bar can obscure content, which isn’t great if you want to use that space for other things, like parts of a map interface, or if you simply want more map area showing on the screen. Typically, the browser chrome minimizes automatically when scrolling down a normal web page via tap+drag down. However, because our map is fixed to the viewport, scrolling down pans to the south instead, and the chrome doesn’t minimize.

What’s the workaround? How do we minimize the chrome so that when users get to the map they see as much of it as possible? By forcing people to scroll down a separate introduction page. The page doubles as a loading screen and also includes some branding, a background image, and an attribution line. When the map finishes loading, a big up arrow with the words “Pull up to view map” appears. Pulling up, which scrolls down, sends people to the map with a minimized chrome.


The HTML is structured with a few special elements:

   <div id="loading">
      <!-- 100% width / height -->
      <!-- loading screen content -->
   </div>
   <div id="treadmill">
      <!-- absolutely positioned div with a height of 100000% -->
      <!-- no matter how much you scroll, you'll never reach the end -->
   </div>
   <div id="wrapper">
      <!-- div containing all the map content -->
      <div class="pane">
         <!-- each UI component is given its own pane div -->
      </div>
   </div>

There are only two bits of JavaScript you need to get it to work:

$( window ).scroll( function( e ) {
   // wait until the map is loaded to allow scroll
   if( load === false ) return false;

   // check for scroll distance
   if( $( window ).scrollTop() > 100 ) {
      // transition the wrapper into view with CSS
      $( '#wrapper' ).addClass( "visible" );
   }
});

$( ".pane" ).scroll( function() {
   // prevent scrolling on the UI frames and scroll their contents instead
   return false;
});

Of course, the chrome can come back later! For example, on our map, entering a search term will maximize it again. Complicating matters, Safari on iOS doesn’t report its window height differently when the state of the chrome changes, so it’s not something we can detect in code. For this reason, we decided to position key map UI elements, like the winery “list” button, higher in the window so as not to be obscured. Sure, a little scroll from the header would shrink it again, but it’s not obvious.

Geolocation – ask if the browser can ask

Geolocation is great for plotting your location along a route, fixing your location to the center of the map while driving, or plotting a route from your location to a winery. But none of this works unless you first grant the browser permission to track the phone’s physical location. The problem is that the browser only asks once. Furthermore, that request doesn’t tell people the advantages of accepting. If the one browser request is denied, you get none of the good stuff… ever. Or, at least not until you reset your security preferences, which isn’t obvious and is too complicated to explain in a short help message.

What’s the workaround here? How can we 1) encourage people to allow their location to be tracked, and 2) if it’s denied, still ask for it again later to see if they change their minds? Essentially, we did it by asking if the browser can ask for permission in the first place. In other words, we used a custom notification that both explains the advantages of allowing your location to be tracked and asks if you would like to enable GPS. If the custom notification is denied, we can ask again next time… the browser hasn’t blocked anything because it hasn’t asked anything and the user is sent to the map. If the custom notification is accepted, the user is asked again by the browser. Yes, it is an additional click, but we see this as better than the alternative of a user potentially not having geolocation enabled at all.
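
A minimal sketch of the ask-before-asking pattern (the notification helper and callbacks here are hypothetical, not the production code):

function offerGeolocation() {
  // our own notification: explains the benefit and can be shown again next visit
  showNotification(
    'Enable GPS to see your location on the map and get directions to wineries?',
    function onAccept() {
      // only now does the browser show its one-time permission prompt
      navigator.geolocation.getCurrentPosition( showUserLocation, showMapWithoutGPS );
    },
    function onDecline() {
      // nothing was blocked by the browser; we can offer again another time
      showMapWithoutGPS();
    }
  );
}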


Data Probe – animation indicates more

Every time a winery point is tapped on the mobile map we display a set of attributes, such as name, address, phone, hours, and appointment details. These show up in a small data probe panel fixed to the bottom of the screen. In addition, the panel includes a button that generates a mapped route and step-by-step directions to the selected winery. Together, the details and button take up all the space available in the data probe panel. A lengthier text description and logo also exist, but flow off the bottom of the screen. This information is accessible by swiping up from the small data probe. The problem was that in user testing no one knew to swipe up and this extra information was rarely even discovered.

We needed a visual cue that would make it clear that more information existed without using up any space in the data probe itself. There was no room for an obvious up arrow here. However, we did find a nice bounce animation at animate.css that served as a good workaround. With each tap of a point, the probe panel bounces up and down a couple times from the bottom of the screen, then stops. It’s a subtle way of showing that more information can be accessed by swiping up.
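
Re-triggering a CSS animation on every tap takes one small trick: remove the classes, force a reflow, then add them back. A sketch assuming animate.css’s animated/bounce classes and a hypothetical #data-probe panel:

function bounceDataProbe() {
  var $probe = $( '#data-probe' );

  // remove the animation classes in case they're already applied
  $probe.removeClass( 'animated bounce' );

  // force a reflow so the browser will replay the animation
  void $probe[ 0 ].offsetWidth;

  // add them back: the panel bounces a couple of times, then stops
  $probe.addClass( 'animated bounce' );
}

// run it each time a winery point is tapped
$( '.winery-point' ).on( 'click', bounceDataProbe );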


Swiping up yields the full details for the winery:


Lessons Learned

With a huge push towards responsive web design, this is something we’re going to be doing more and more as users expect to have content tailored for whatever screen they’re using. Mobile maps present a lot more challenges than a standard webpage of long text content. Elements can’t simply reflow and reposition with a grid scaffolding and some basic CSS width queries, and you often feel like you’re fighting against the tendencies of the browser. However, by making mobile design a key part of the total map design process, you can force yourself to think about how mobile users use the map differently: which features are important to them and which should be left for the desktop only. It may feel like building two separate products, but the result is that everyone gets what they need wherever they are.

Device testing

PS – Get ready to buy a bunch of extra phones off eBay because each one is a unique little snowflake which makes testing a nightmare.

Mass Observation Basemap

We recently worked on a project with Adam Matthew Digital mapping British diarists in the UK who wrote during the first half of the 20th century. Most of the 500+ entries deal with observations about the impact of WWII on everyday life. The map allows users to search and filter diarists by gender and date of birth, or view them in smaller groupings by theme (e.g., “Political Opinions”, “Women in Wartime”, “London During the Blitz”). Brief bios and excerpts from the diaries themselves are also viewable, as shown in the screenshot of the interface below:


One of the fun challenges of the project was designing a tiled basemap that resembles other maps from the time period in terms of look and feel. We needed something that would help set the tone and mood for the project without using an actual historical map made in the 1940s. Although we’ve used actual historical maps for other projects, in this case doing so would have caused too many accuracy issues around the locations of the diarists. Instead we turned to TileMill, where we brought together modern-day data from OpenStreetMap, the Ordnance Survey, and Natural Earth and applied our own design.


After spending some time online browsing maps made in the UK in the 1930s-40s, such as those from the Ordnance Survey, we narrowed down a few of the salient design elements we found. Of course, we saw a lot of variability, but typography seemed to have one of the greatest impacts in terms of evoking the look and feel we wanted. Serif fonts with good contrast between thick and thin strokes were common, as was the liberal use of italics for map labels. Map symbol colors were similarly impactful. They ranged across the spectrum but were generally desaturated and flat on light-colored backgrounds (very few gradients, glows, drop-shadows or other effects). We took note of a few other elements too, like line weights, which tended to be thin (not overly chunky), and line styles, which varied greatly: we found all kinds of dashes, dots and dash-dot combinations.

One other element having a big impact on look and feel was the paper the maps were printed on. Textures in the maps gave the impression of something old and used. They also added a tangible and personal quality, which seemed a perfect fit for our map of diary entries. Of course, tiled basemaps with textures that look worn or folded are not new (e.g., the Geography Class map), but we couldn’t resist making a version of our own since it seemed so applicable for this project. Here is a closer look at the paper texture at zoom levels 9-11:

We tried a bunch of different fonts for the map (e.g., Cochin, Magneta, Playfair, Essay) and settled on Latienne Pro in the end. It’s not quite as high contrast as some of the others, but its thicker strokes help it hold up well at smaller sizes on screen. And it’s available on Typekit!


The color palette we arrived at is made up of mostly desaturated colors that become even more so when made semi-transparent on our light-colored background, as is the case with the national parks (green) and urban areas (yellow) on the map.


Finally, our background texture was made using scanned images of folded paper and cardboard. After a good bit of trial and error, we worked up a seamless version of the image in Photoshop that could be repeated throughout the map.

FatFonts World Population Map

We’re excited to announce that we’re bringing the work of some of our friends to the Axis Maps Store. We’ve always been interested in new ways to encode and represent data. Also, typography. We ♡ typography big time. It was both of these interests that led us to the work of Miguel Nacenta, Uta Hinrichs, and Sheelagh Carpendale and the FatFonts project. From their site:

Fatfonts are designed so that the amount of dark pixels in a numeral character is proportional to the number it represents. For example, “2” has twice the ink than “1”, “8” has two times the amount of dark ink than “4”, etc. You can see this easily in the set of characters below:


This proportionality of ink is the main property of FatFonts. It allows us to create images of data where you can read the numbers, and represent tables that can be read as images (like the example below).


It’s a fantastic technique for cartographers (especially when making static maps) because the ink density allows you to see geographic patterns while the digits give you the exact values for each grid cell. This bi-visual technique essentially replicates an interactive data probe on a static map.

If you’re interested in using FatFonts in your own maps, you can download the font faces from the project home page.

The FatFonts World Map

If you want to get your hands on your very own FatFonts map, the World Population map is now available at the Axis Maps store. It’s a natural fit and a quantitative cousin to our typographic maps. Like the typographic maps, the FatFonts maps show the big picture from afar, but reveal their extensive detail as you move closer. From Miguel’s blog:

The poster is made using an equal area projection of the world, and it represents data collected by CIESIN and others. Each grid in the main map, which represents an area equivalent to 200 by 200 km, has a 2-level FatFont digit in it. That way we can know, with a precision of 100,000 people, how many humans live there.

Since the number of dark pixels of a FatFont digit is proportional to the number that we are representing, we can calculate how many people each black pixel represents. For an A1 poster in the main area of the map at 600 pixels per inch, each pixel represents approximately 1880 people! FatFonts with the orange background are up by an order of magnitude, so there the ink of a pixel represents approx 18,000 people.

It’s a fascinating map that clearly illustrates how unevenly population is distributed across the globe. Particular areas of interest are highlighted and given higher-resolution treatment:


The FatFonts World Population map is now available from the Axis Maps store. All profits will be re-invested to fund more FatFonts research.

2nd edition of the Boston typographic map

It’s been five years since we first introduced our map of Boston. The popularity of that map started us down a course of typographic mapping that has since expanded to 11 cities. Given the special place that Boston has in our hearts, we felt it was well deserving of a redesign. And to prove just how much we love this city, we started from scratch. In other words, we started at the beginning with a blank canvas, imported a fresh copy of OpenStreetMap geography and place names, determined a new map extent and layout, and chose new fonts, colors, and character styles. The new Boston, 2nd edition poster is still 24 inches wide by 36 inches tall, but looks and feels dramatically different.

Boston 2nd ed. typographic map

Here are some of the most notable changes:

Extent and Map Scale

The 2nd edition covers a much larger area than the original version. We now include the entire city of Boston, plus a number of surrounding towns. Cambridge and Somerville are still present, of course, but we now extend farther north into Medford and Malden. The extension southward is even greater, covering the area all the way down to Dedham, Milton, and the Blue Hills Reservation.

Fitting more territory on the same-sized poster meant a smaller scale map, which in turn called for some design modifications. For example, because streets were crunched closer together at the new scale we reduced font sizes. The smallest text on the map is that of residential streets — now just 6pt in size. It’s still readable yet small enough to prevent excessive overlap in the denser parts of the city. We also removed the underlying neighborhoods layer that existed in the original version. Again, due to the smaller scale, it cluttered the map and was mostly buried by streets anyway.

Area Features

One of the most visually striking design changes was to the area features on the map, including water, parks and “institutions” (a catch-all for universities, airports, and other important not-green areas). Solid color fills behind reversed-out white text give them a bolder look compared to the original map. Also, the high contrast between these areas and the white map background gives the map more definition overall and makes some features, like small parks, narrow rivers, and jetties, easier to see. Achieving strong figure-ground contrast between land and other area features has always been a challenge with typographic mapping because there is only so much ink that a letter can hold. This method of treating areas with solid background fills seems to help the map visually and functionally while staying true to our aim of covering everything with text.

There are a couple of smaller changes worth noting, too:

Wavy water text

We’ve liked the wavy water text style ever since we first applied it to the Chicago map. Variation in text size, placement along curving paths that parallel one another, and the use of opacity masking to give the appearance of overlapping waves all come together to represent the flow and movement of water better than any other technique we know. We hope the redesigned water text in the 2nd edition map does the same.


Boston is famous for its major underground and underwater tunnels. The 2nd edition Boston map is the first to employ a text style specifically for tunnels. Where highway text goes subsurface, we simply reverse its fill and stroke color. For example, Interstate 90 is represented by purple text with a white stroke but where it becomes Ted Williams Tunnel it is reversed and becomes white text with a purple stroke. I can’t help but compare the bright white tunnel text to the bright lights of cars being turned on as they go beneath the surface.

Below are images of the two maps side by side, each covering an area that is approximately 13 by 13 inches on the printed poster. The 2nd ed. covers a lot more ground in the same amount of space, has a bolder look due in part to its color-filled areas, has wavy water text, and a couple of new headlight-style tunnels. Let’s say goodbye to the original Boston map and hello to the 2nd edition!

Boston 2nd ed. typographic map detail
Original Boston typographic map detail

AskCHIS NE map: All D3 Everything

Last week saw the launch of AskCHIS Neighborhood Edition, a product of The California Health Interview Survey and the UCLA Center for Health Policy Research, with whom we worked to develop map and chart components for this new interactive tool. The short story of AskCHIS NE is that it is a tool for searching, displaying, and comparing various California health estimates at local levels such as zip codes and legislative districts. Take a look at it if you feel like setting up an account, or you can watch a demo at its launch event (demo begins at 14:00).

AskCHIS Neighborhood Edition

The long story is interesting too, as this is fairly sophisticated for a public-facing tool, so we’d like to share a few details of how the map and charts were made.

We built that big component with the map, bar chart, and histogram. It lives in an iframe, and as a user makes selections in the table above, various URL hash parameters are sent to the iframe telling it what data to load. The map and bar chart then make requests to a data API and draw themselves upon receiving data. Almost everything here uses D3, either for its common data-driven graphic purposes or simply for DOM manipulation. As usual, this much D3 work made it a fun learning experience. And it once again expanded my awe for Mike Bostock and Jason Davies and their giant brains: more than once during this project, I asked about apparent TopoJSON bugs far beyond my comprehension, and each time they fixed the bug almost immediately.

Davies/Bostock mega-brain

Tiled maps in D3

The map shows one of six vector layers on top of a tiled basemap that is a variation on Stamen’s Toner map, derived from Aaron Lidman’s Toner for TileMill project. Normally we use Leaflet for such tiled maps, but our needs for the vector overlays were complex enough that it made more sense to adapt the basemap to the vector layers’ framework (D3) rather than the other way around. Tiled maps in D3 are pretty painless if you follow the examples of Mike Bostock and Tom MacWright. The six vector layers are loaded from TopoJSON files and drawn in the usual fashion in the same Mercator projection as the basemap.

Dynamic simplification

The most interesting technical detail of the map (to me, anyway) is that it uses dynamic scale-dependent simplification. This is a nice design touch, but more importantly it ensures that the map performs well. Zip codes need to have reasonable detail at city scale, but keeping that detail at the state level would slow panning and other interactions tremendously. We took (once again) one of Mike Bostock’s examples and, after some trial and error, got it to work with the tiled map. Here’s a stripped-down example. Simplification is based on the standard discrete web map zoom levels. As the user zooms, the vector layers redraw with appropriate simplification when those thresholds are crossed, with basic SVG transformations for scaling in between those levels. Keep your eye on the black-outlined county in the gif below, and you should be able to see detail increasing with zoom.

Dynamic simplification

Pooled entities

One of the more sophisticated capabilities of this tool is combining geographies for pooled estimates. For example, maybe you want to look at Riverside and San Bernardino Counties together as a single unit:

Pooled entities

The API does the heavy lifting and delivers pooled data; our challenge was to display pooled geographies as a single entity on the map. Although TopoJSON seems to support combining entities (after all, it does know topology), I had no success, apparently because of a problem with the order of points. Instead we use some trickery of masking, essentially duplicating the features and using the duplicates to block out interior borders, as outlined below. If you wonder why we couldn’t just skip step 2, it’s because our polygons are somewhat transparent, so the interior strokes would still be visible if we simply overlaid solid non-stroked polygons.

Combining SVG shapes while preserving transparency

Image export

The biggest source of headaches and discovery was the image export function. Users can export their current map or bar chart view to a PNG image, for example the map below. (Click it.)

Map export

I learned that converting an HTML canvas element to PNG data is built-in functionality, but getting this to work in a user-friendly way was not so simple. For one thing, our map is rendered as SVG, not canvas. Luckily there is the excellent canvg library to draw SVG to canvas, so we build the entire layout above in SVG, then convert the whole thing to canvas (except the basemap tiles, which are drawn directly to the canvas). Pretty cool. Thanks, canvg!
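
The core of that conversion is only a few lines (a sketch assuming canvg's classic canvg(canvas, svgString) call and a hypothetical #export-canvas element):

// serialize the live SVG map to a string
var svgString = new XMLSerializer().serializeToString( d3.select('svg').node() );

// draw it onto a canvas with canvg, then grab PNG data from the canvas
var canvas = document.getElementById( 'export-canvas' );
canvg( canvas, svgString );
var pngDataUrl = canvas.toDataURL( 'image/png' );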

The nightmares begin when trying to trigger a download, not just display an image on screen. Firefox and Chrome support a magic “download” attribute on <a> elements, which does exactly what we want, downloading the linked content (image data, in this case) instead of opening it in the browser. If only everyone used those browsers! Cutting to the chase, we couldn’t find any way to ensure the correct download behavior across browsers without sending the image data to a server-side script that returns it with headers telling the browser to download. The final task was showing a notification while the image is being processed both client-side and server-side, which can take long enough to confuse the user if there is no indication that something is happening. Failing to detect the download start through things like D3’s XHR methods, we ended up using a trick involving browser cookies.
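
For what it's worth, the Chrome/Firefox-only shortcut mentioned above looks roughly like this (a sketch; browsers without the download attribute fall through to the server-side route, and the helper is hypothetical):

var a = document.createElement( 'a' );

if ( 'download' in a ) {
  // Chrome and Firefox: download the image data directly
  a.href = pngDataUrl;
  a.download = 'map.png';
  document.body.appendChild( a );
  a.click();
  document.body.removeChild( a );
} else {
  // other browsers: post the image data to a server-side script that
  // returns it with headers telling the browser to download
  postImageForDownload( pngDataUrl ); // hypothetical helper
}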


Phew! Turns out making a static graphic from an interactive is complicated! But it was a valuable experience—subsequently I’ve even used JavaScript and D3 deliberately to produce a static map. It works, so why not? (Most of this dot map was exported from the browser after drawing it using this method.)

ALL the D3

For all of the above, it’s hard to post code snippets that would make sense out of context, but we hope that the links and explanations are helpful, and are happy to talk in more detail if you are working on similar tasks. There’s a lot more going on than what I’ve described here, too. Suffice it to say, D3 is a pretty amazing library for web mapping!

Offline Web-Maps

Recently I said something silly to a client:

…and because we’re not using a database, you’ll be able to run this map offline on your local computer.

Great! They give lots of presentations where they want to show off the map and this means they wouldn’t have to rely on conference wifi, the most overstretched resource on the planet. We delivered the completed map to be deployed on their site along with the instructions:

Just open index.html in your web-browser and you’ll be up and running.


Did you try Chrome instead of IE?

That ol’ chestnut? Still nope.

Why was it working for me but not for them? We developed this map running the files locally so what was the difference between our machines and theirs?

The Development Webserver

Doesn’t everyone use one of these? If you develop stuff for the web, you most likely have some kind of webserver running on your computer (MAMP, WAMP, LAMP, et al.) so when you view a project in the browser, the URL looks like:

http://localhost/my-map/index.html

The page is loaded over the same protocol used when it is on the web, it’s just not looking outside your actual computer to do so. However, if you just open any old HTML file in the browser, you’d get something like:

file:///Users/yourname/my-map/index.html

This local file protocol means your browser is loading your file without it being initially processed by a webserver, which opens it up to some potential security issues that make the browser say “nah… not loading that guy”. If it’s a really simple page (static HTML, CSS), no problem. It is a problem, however, if you’re running any XMLHttpRequests to asynchronously load data. In D3, these requests look something like:

d3.json( "data/file.json", callback )

or any of these. In jQuery, this is the guy you’re looking for:

$.ajax()

The Fix

Once I’ve located all the XMLHttpRequests on the page, it’s a relatively simple job to replace them. In this map, I have this block of code that loads a topojson file:

d3.json("json/states.json", function(error, us) {
      .data(topojson.feature(us, us.objects.states).features)
});

This loads the states.json file and as soon as it is received, runs the function that takes an error and a variable called us. The second variable contains the contents of the loaded file and that’s the important one for what we’re doing.

There are two changes to make. The first is to open states.json and at the beginning of the file, add:

var us =

This takes the JSON object that follows and declares it as the variable us, the same variable used in the success function for the d3.json request. Now, delete the first line of the request (and the closure at the end) so your code becomes:

   .data(topojson.feature(us, us.objects.states).features)

The new code is still looking for the us variable, but now it isn’t wrapped in the d3.json function that waits for the XMLHttpRequest to finish before executing. It will just run as soon as it gets to that line in the flow of the code. Because of that, you need to make sure that data is available when it gets there.

In your index.html file, add the modified states.json file as if it were a regular JavaScript file:

<script src="json/states.json"></script>

before the javascript file that executes the code. Just put it at the top of the <head>, below your CSS files. You don’t need to worry about it slowing down the load time of the page because none of these files are being transmitted over the web.

You now have a set of files that will work wherever you send them. Anyone can open your index.html file in a browser and see your map as you intended.

Some Caveats

  1. Watch out for convenience functions like d3.csv() that have conversions built into them. This method only works for files that are already JavaScript objects and can be declared as variables. If you must have your data stored as CSV, you’ll need to alter the file to become one long string and parse it yourself (see the sketch after this list).
  2. The variable you use in the external file will be global. Be careful with that. You can’t reuse any of those variables in your code so it’s worth a quick search to make sure the names don’t come up again.
  3. There’s a reason we don’t load our data like this all the time. It will be slow and gross.
  4. The latest version of Firefox appears to be cool with XMLHttpRequests over file:/// so if you’re lazy, just tell your clients to use that instead.
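
For that CSV case, a minimal sketch of what “one long string, parsed yourself” might look like (the file name and fields are hypothetical):

// in data.js, loaded with a regular <script> tag:
var csvText = "id,population\nA,120\nB,450\nC,80";

// in your map code, parse the string with D3's built-in parser:
var rows = d3.csv.parse( csvText );
// [ {id: "A", population: "120"}, {id: "B", population: "450"}, {id: "C", population: "80"} ]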