Continuing our recent theme of terrain-related side projects, a few days ago I finished (or, decided to quit working on) a shaded relief map of New Hampshire’s White Mountains that I’d been pecking at from time to time for a few months. Most of our work is with interactive, web-based maps, and although we occasionally get to do more traditional static cartography (with hillshades, even), sometimes the kind of slow, singularly designed cartography we remember from our pre-web days has to be done just for fun.
It’s satisfying to see a map come together piece by piece, as in the above animation showing the main steps and layers in producing this map. Cartography is rarely a matter of throwing data into software and getting a map in return; rather, a single map usually involves multiple tools and data sources, and a lot of attention to small details. (The same is true of web maps, by the way: we write a lot of code for small design details that push beyond defaults.)
There’s no single way to make a shaded relief map, but here’s how this one came together:
Download a good digital elevation model from the National Map.
Set up a QGIS project with land cover data. Reduce it to only a few colors (mainly, evergreen forest and “everything else”) and export it with dimensions matching the relief image.
In Photoshop, add land cover, then the relief layer with a “multiply” blending mode.
Heavily blur the land cover so that it’s not harsh and pixelated. It becomes a subtle base layer, not an essential piece of data.
Add water lines and polygons (via Census TIGER/Line) to QGIS, style, export, and add to Photoshop above land cover.
Use some Photoshop tricks to make relief highlights a bit brighter and warmer-colored, and shadows a cooler color.
Generate and label contour lines from the DEM using QGIS, then export and add them as a Photoshop layer.
Add roads (from OpenStreetMap via Geofabrik’s extracts) to the QGIS project. Export and style them with Illustrator, and place the .ai file as a layer in Photoshop underneath the relief. (Shadows thus fall on roads as they would in real life.)
Label all the peaks, physical features, and towns one by one in Illustrator (no GIS data involved), and place them into Photoshop.
Then just a bit of cropping and cleanup, and it’s done! That list, of course, vastly oversimplifies things, but it gives a good idea of everything that goes into a map. Labeling, for example, is hugely important and takes a lot of time to do right.
Perhaps my favorite touch, briefly visible in the animated sequence, is the step above about adding extra punch to the relief map’s highlights and shadows. Daniel Huffman also covers something like this (along with much more!) in his walkthrough of terrain mapping in Photoshop. A brightened, warmer tone is applied to the light side of mountains at high elevations, while shadows are given a blue tint. Not only does this seem to boost the illusion of depth, it also better evokes the temperature and appearance of a warm sun and cool shadows in reality. The effects are applied lightly, but they make a difference.
It’s been fun to practice this kind of cartography and learn new things along the way (Blender is great!), while more deeply studying a region that is somewhat dear to me. Here’s the full final product.
A short while ago we received an inquiry about making a tool to draw a simple topographic contour map of any given place in the world and export an SVG file with the lines. There are good global terrain maps with contour lines—Google Maps has them, for example, as do many Mapbox styles—but the interest here was in extracting only the contour lines, for external use. Although the request turned into something else, we were still intrigued by the idea.
“Sounds too hard,” I first thought. The question marks were:
How can we load good elevation data for anywhere in the world? I know how to find some good DEMs, but not on-demand in a web app, and I only know good data in the US.
How the heck do you draw contour lines? That has always been a desktop or command-line GIS operation for me.
Both questions turned out to have good answers, and the result is a small in-browser contour map tool. With it you can do a handful of things:
Find the place you want to map
Choose the contour line interval (in meters or feet), and the thicker index line interval (if any)
Specify line colors and weights
Use a solid color or hypsometric tints as a background fill
Color elevations below sea level with different bathymetric colors
Draw maps as basic contour lines or with a stylized raised, illuminated look
Export to GeoJSON, PNG, or SVG
Give it a try and let us know if you find it useful for anything! Have a look at the source code too if you’re interested in how it works, which is broadly described below.
Global elevation data
The first big task is finding global elevation data and loading it in the browser without a huge hassle. We have a good archive of SRTM data and briefly thought about writing server functions to deliver it, but my mind had been glossing over a much easier route despite having used it in the past: Mapzen (RIP) terrain tiles.
Terrain tiles are raster map tiles, with the same size and numbering scheme as any ordinary web map tiles, that contain elevation data encoded as RGB color values. The type we use looks something like this:
They look insane because they’re not meant to be viewed directly. Instead, a short formula decodes the red, green, and blue values of a pixel to an elevation value, which we can then use as we please. I plopped an invisible canvas tile layer into Leaflet to load the necessary terrain tiles as the map is moved around. After they load, they’re drawn to a canvas from which we can read those RGB values, and thus store a big table of elevation values for the visible map area.
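Assuming the tiles use Mapzen’s Terrarium encoding (one of the formats those terrain tiles come in), the decode step is a one-liner, and reading the whole canvas into a table of elevations isn’t much more. A sketch, with function names of my own invention:

```javascript
// Decode a Terrarium-encoded pixel (r, g, b each 0–255) to meters.
// Other RGB elevation schemes (e.g. Mapbox Terrain-RGB) use a different
// formula, so this assumes Terrarium tiles specifically.
function decodeTerrarium(r, g, b) {
  return r * 256 + g + b / 256 - 32768;
}

// Read every pixel of the invisible canvas into a flat elevation table.
function elevationGrid(ctx, width, height) {
  const { data } = ctx.getImageData(0, 0, width, height);
  const elevations = new Float64Array(width * height);
  for (let i = 0; i < width * height; i++) {
    elevations[i] = decodeTerrarium(data[i * 4], data[i * 4 + 1], data[i * 4 + 2]);
  }
  return elevations;
}
```

Sea level comes out as zero: `decodeTerrarium(128, 0, 0) === 0`.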
Fortunately, despite Mapzen’s demise, their work on terrain tiles lives on, as the whole set is available via Nextzen or Amazon S3. Mapbox (still alive) also offers terrain tiles. Although the quality of data varies from place to place, these datasets represent work by some dedicated people to piece together the best data they can for most of the world—much better than trying to do that ourselves!
Drawing contour lines
Great, we have elevation data. Now we just need to draw contours.
I do not know how to do this. I do not pretend to know how to do this. I understand a basic hand-drawn method, but my real-world method is to ask GDAL to do it.
Luckily, d3-contour handles it: given a grid of values, it returns the contours as GeoJSON, which is quite handy because D3 is also good at consuming GeoJSON and spitting out drawable shapes for canvas. The contours and visible map are based on screen coordinates, not geographic coordinates, but D3 doesn’t care. To export as a usable GeoJSON file, we can use Leaflet’s conversion methods to get back to geographic coordinates.
To recap, then, whenever the map is moved and redrawn, it does the following:
Load terrain tiles
Draw tiles to an invisible canvas and decode to elevation values
Get contour line thresholds based on user options and the current range of elevation values
Get contour polygons with d3-contour
Draw contours to canvas with the specified style options
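The threshold step above is simple arithmetic: collect every multiple of the chosen interval that falls within the visible elevation range. A sketch (the function name is mine, not the tool’s):

```javascript
// Contour thresholds: every multiple of `interval` within [min, max].
// The result feeds directly into d3.contours().thresholds(...).
function contourThresholds(min, max, interval) {
  const thresholds = [];
  for (let t = Math.ceil(min / interval) * interval; t <= max; t += interval) {
    thresholds.push(t);
  }
  return thresholds;
}
```

An index-line interval then just means tagging every nth threshold for a heavier stroke.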
When style options change, it only needs to redraw the canvas. If the line interval changes, it needs to re-calculate contours but doesn’t need to reload elevation data. If the map moves, it needs to do everything.
Stylized maps from contours
This little tool contains one slightly fancy style, the illuminated contours. These are essentially Tanaka-style contours, where each contour line appears to be raised above the previous one, and illuminated from one direction. They look kind of three-dimensional, like layers of wood cut and stacked up. (Talented people have made plenty of real-life physical maps of that sort.) You can produce these with things like ArcGIS or QGIS, where the methods may be smarter and aware of the aspect of each line segment, but here it’s just a trick with drop shadows. Until now I didn’t know that standard canvas rendering methods include drop shadows! There’s a light stroke around the whole polygon, but it’s obscured on one side by a drop shadow on the fill.
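In sketch form, the trick works like this: each contour polygon is filled with a drop shadow offset toward one corner, then given a light stroke all the way around; drawn from the lowest contour up, each fill’s shadow covers part of the lit stroke of the polygon beneath it. (Names and values here are illustrative, not the tool’s exact code.)

```javascript
// Draw one contour polygon with an "illuminated" look. `ctx` is a
// CanvasRenderingContext2D and `tracePath` a function that builds the
// polygon path on the context (e.g. via d3.geoPath(null, ctx)).
// Call in ascending threshold order so each shadow falls on the ring below.
function drawIlluminatedContour(ctx, tracePath, fillColor) {
  ctx.save();
  // Fill with a drop shadow cast toward the lower right ("sun" upper left).
  ctx.shadowColor = "rgba(0, 0, 0, 0.4)";
  ctx.shadowOffsetX = 2;
  ctx.shadowOffsetY = 2;
  ctx.shadowBlur = 3;
  ctx.beginPath();
  tracePath(ctx);
  ctx.fillStyle = fillColor;
  ctx.fill();
  ctx.restore();
  // Light stroke around the whole polygon; the next contour's shadowed
  // fill will obscure it on the dark side, leaving a lit edge opposite.
  ctx.beginPath();
  tracePath(ctx);
  ctx.strokeStyle = "rgba(255, 255, 255, 0.8)";
  ctx.lineWidth = 1;
  ctx.stroke();
}
```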
But the stylistic possibilities with contour lines don’t need to stop at contours themselves. I’ve been playing around with some maps that use contour lines as an intermediate step in deriving the final style, while not necessarily appearing on the map themselves.
One example is an attempt at hachures. Contour lines serve as starting points for shorter strokes, which travel downhill perpendicular to the contour, stopping at the next contour line. Contours are somewhat visible as gaps in the map, but are not drawn. I haven’t exactly perfected this, but perhaps it’s an improvement on earlier derailed work with faux-hachures that were based on a grid.
Or we can get carried away with hachures just for aesthetic purposes, starting at contours but letting the strokes flow farther downhill, coalescing and being colored by the general direction in which they flow.
Finally, there are always trippy animations. This one does show actual contour lines, but it’s not exactly an ordinary map. Making useful things is great, sure. But making wacky pretty things is more fun!
We recently got to work with a professional testing company on one of our larger projects and I was blown away by how they handled front-end testing. My approach to front-end testing was always “bang on it enough with enough different devices and you’ll find most of the bugs.” That approach works (mostly), but it’s stupid (totally).
A little while later, I was asked by a client to create a testing plan for a medium-sized project. Using the experience of working with the pros (and safe in the knowledge I wouldn’t be the one actually doing the testing), I put together a quick plan. I wanted to share my experiences here because while traditional cartography prepared me for meticulously editing a map, front-end testing is something that was (and honestly still is) a bit of a mystery to me.
What does it do?
The testing plan starts with a thorough rundown of everything the map does.
We use the word map pretty liberally ‘round these parts. Here, it will mean the entire application, from the map itself to non-spatial graphics to controls.
I made a quick flow chart that shows the hierarchy of functionality in the project, listing major controls plus their options (and sub-options).
It’s a pretty thorough list, but I’ve left off some of the foundational stuff like panning, zooming, and basemap tile loading. If there’s any confusion about setting up the hierarchy, it’s probably already laid out for you right in the UI. We have a tab for all the big stuff.
Modes and Tasks
Listing out the functionality should be really straightforward, especially this late in the development process. What’s a bit trickier is pinning down the difference between modes and tasks.
Modes are mutually exclusive: only one can be active at a time. They usually change the underlying structure of the map or display a different set of graphics. Not every map has multiple modes. Depending on the setup of your system, the order in which modes are visited could be significant to testing.
Tasks are what a user can do while in each mode. These tasks can be the same across modes or differ between them. The order tasks are performed will almost always be significant to testing.
The Perfect Plan
With the functionality all scoped out, you could theoretically create a testing plan that tests every task in every order across every mode and execute that plan on every browser across every desktop operating system and mobile device. However, that would be a bad idea.
Instead, use your functionality list to create a handful of testing scenarios. These scenarios should:
Cover all of the modes at least once and all of the tasks multiple times
Allow for some randomization where the tester can select a dataset or geography at random
Also allow for randomization on the order certain tasks are performed
Most importantly, be grounded in the reality of your actual users as you understand their workflows and potential uses of the map.
When making a plan for testing your datasets, consider how they were created. Were they created by hand? Conversely, are there any weaknesses in your data generation script that may require manual intervention? If so, instead of randomizing your datasets, create a plan that mandates all of them are tested. If you are randomizing, it’s worth making a note of datasets and geographies that are outliers. This particular plan, for a public health project in California, requires extra attention to Los Angeles County, since it has the highest density of census tracts, which could (and did) lead to bugs not present in other areas.
The following testing scenario is based on a hypothetical user who has a specific idea of the indicators they’re interested in but isn’t sure which geography to use for their analysis.
Load the map and select any geography
Create a custom score using more than 5 indicators
Perform the next three steps in any order
Change the color scheme
Click any map unit to view details
Rank the custom score using any geography larger than what is currently selected
Switch to a different geography
Select a geographic unit and export
The other scenarios are based around other (assumed) types of users including:
Total novice users exploring the functionality
Users targeting a specific geographic area with no specific indicators in mind
Expert users working extensively with the data uploading and exporting features
Doing the Testing
Now that the formal(ish) testing is ready to begin, we’re probably not going to be the ones doing it. This is because:
It’s generally not a good idea to test your own work. You definitely know how to operate it properly and you’ll be more forgiving of an issue that’s not quite a bug but definitely not performing as expected.
Off-loading front-end testing to the client is a good way to keep costs down. They usually have motivated staff at varying levels of involvement in the project (and a diverse set of desktop and mobile hardware setups) who are happy to help.
The problem with external testing, especially with first-time testers, is getting effective bug reports and getting them into a system where we can take action on them. To help with this, we created a bug form on Airtable based around Tom MacWright’s excellent guide to bug reporting. It lets us collect bug reports from a large number of testers and guide them through giving us the information we need.
We then use Zapier to connect to GitHub so each new submission creates an issue that we can fix using our regular development workflow. The client PM is given access to the bug database on Airtable, where they can track our progress through the fixes without needing full access to our issues (and occasional salty language) on GitHub.
imagineRio, produced with the Humanities Research Center at Rice University, is one of our most ambitious digital humanities projects. It tracks changes in the development of Rio de Janeiro over the past 500 years. It renders historically and spatially accurate maps combined with iconography, historical maps, and urban plans to give a sense of what Rio looked like and how it imagined itself from its first settlement to the modern day.
Powering imagineRio is iRio, a Node application that facilitates:
Data management and conversion between the shapefiles used by the Rice team to collect data and the PostGIS database where the data is eventually stored.
Tile rendering and caching with options to select the year and layer visibility.
An API to request metadata and vector data across the entire database.
When our friends at Rice wanted to launch two additional maps, it made sense to reuse the iRio platform, not only because the system was already built and tested, but also because their team was already familiar with the intricacies of the data required to power the map. With this knowledge and their existing best practices, they were able to quickly produce new datasets for new maps of the Rice University campus in instituteRice and Beirut, Lebanon in diverseLevant.
Having now built three iterations of the same concept, it’s tempting to look back and examine how the design evolved. What changed, and why? Generally speaking, the changes fall into two main areas:
Interface look and feel
Layout
Look and Feel
The look and feel of a map interface is often connected to trends and styles found on other apps and sites across the Web. These naturally shift over time, which is why it’s not uncommon to look back at an old project, cringe, and wonder why in the world you used all those gross color gradients, drop shadows, and chunky icons. Fashion aside, look and feel can imbue a project with its own unique identity, which can factor into people connecting with it and returning to it over time.
With imagineRio, we were going for a clean, contemporary look and feel. At the time, this meant doing things like:
Maximizing map space. We talked about wanting to make the map as immersive as possible on a flat computer screen and allow it to fill the entire browser window. Even some interface elements that might otherwise have obscured the map, such as the header and timeline areas, were made semi-transparent, allowing the map to show through.
Interface and basemap colors. We chose grayscale colors for interface components so they would not draw attention away from the map. White and gray values were assigned to every panel, button, text, highlight, and hover state. The basemap was intended to be more colorful than the UI, but not too colorful. It consists mostly of a flat, desaturated palette of earth tones. We wanted colors to look natural and not feel out of place or time, whether mapping the year 1500 or 2015.
Font selection. To stay in line with our design goals, we chose modern sans-serif fonts, like Helvetica and Arial, for their legibility as well as their clean, smooth, and organized look and feel. On one hand they tend to lack much of a unique personality, but on the other they avoid drawing undue attention to themselves. Much like our color selections, we wanted to avoid fonts that made too strong of a statement.
When we began work on instituteRice our goal again was a fresh, clean and modern look and feel. However, it being the second application of the project, and knowing that a third was on the way, the idea of instilling some unique identity into the map was a bigger factor than before. InstituteRice, and diverseLevant soon after, needed their own look and feel, something contemporary and modern but at the same time noticeably different from each other and from imagineRio. This resulted in some design changes, such as:
Updated splash screen. In imagineRio, the splash screen mainly consisted of a large project title and “View the map” button. The redesign for instituteRice and diverseLevant now include an introduction to the maps, a quick way to jump to defined points in time, and a preview of the basemaps themselves. First impressions are important. As successful as imagineRio is, we knew that a more meaningful, positive first impression when arriving at instituteRice and diverseLevant could reflect their quality and sophistication, as well as set expectations that might encourage map use.
Interface accent colors. Like imagineRio, grayscale colors were used in the instituteRice and diverseLevant interfaces, with one notable exception. A unique accent color was assigned to each, serving to brand the maps and signify a difference between them. For instituteRice, blue was chosen from a set of client branding materials and then incorporated into the timeline, buttons, text and map highlighting. DiverseLevant followed suit with its own accent color, pink, selected from a significant historical map in the collection.
Interface shapes. The instituteRice and diverseLevant interfaces differ in some more subtle ways as well. For example, in instituteRice circular buttons and circular thumbnail images are used throughout, whereas in diverseLevant, these same elements are styled as rounded rectangles. It’s a small difference, but small changes add up and help reinforce the unique identity of each map.
Custom basemaps. Not strictly an interface component, but a basemap can be a strong identifying feature itself. Who doesn’t recognize the Google basemap when they see it? While a single accent color can be associated with a brand, so can many small stylistic elements used in combination. In instituteRice and diverseLevant, rather than using the same basemap styles as in imagineRio, we created entirely new cartography for each. The instituteRice basemap borders on realism, thanks in part to an amazing tree dataset provided by the campus arborist, and in part to the use of more saturated natural colors, textures, and shadows. The diverseLevant basemap on the other hand was inspired by the historical map from which we drew an accent color, borrowing its otherwise pale color palette, yellow roads, and pink buildings, and even mimicking its hand-lettered labels with a strong font choice. A significant departure from imagineRio!
Layout
Layout is a balancing act. The various elements of a map all need to be worked into a browser window to create something that’s usable and engaging. There are, of course, many factors to consider, relating to users, devices, and a huge range of possible project-specific constraints. In almost every project, as pieces begin to take shape and start competing for space and attention, compromises are necessary to balance it all together.
While designing imagineRio, we tried to prioritize important tasks in the layout. Here are a few of the layout decisions we dealt with:
Navigation bar. Navigating through time was perhaps the number one user task, which led us to position the timeline and date stamp directly above the map near the top of the page where it was easy to see and access. The timeline was also flanked by a dropdown selection menu and a search input box. Although these cut down on space for the timeline, we felt that since they were all related, it was more important to group them together in a single area. (Search, for example, acts on map features in the currently selected year only—not all years).
Map legend controls. It was decided early on that layer control and map customization were very important for understanding and using the map. GIS-ish layer controls, though not our favorite design pattern, gave users the ability to toggle the visibility of geographic features and highlight groups of features via the legend. Historical map images, shown as thumbnails in the legend, can similarly be toggled on and off.
The imagineRio layout underwent a number of changes when adapted to instituteRice and diverseLevant. The biggest changes were driven by new thoughts about what was important for users to see and do with the map, as well as what aspects of the content were seen as most engaging. Some smaller changes were also made that attempted to make better use of map and interface spaces.
Image browser. Probably the most substantial layout change stemmed from the decision to prioritize viewing historical iconography over customizing the main map via the legend controls. In imagineRio, iconography was somewhat hidden behind points on the map, visible only when hovering the mouse over them. This made access somewhat difficult and browsing could be slow and tedious. In instituteRice and diverseLevant a new, separate panel for browsing iconography was added along the bottom as a filmstrip of image thumbnails. Clicking on a thumbnail focuses the map on the location where it originated and opens the image at a larger size. With the new emphasis on iconography, it made sense that the panel would be expanded when users first arrive at the map, rather than the legend.
Data probing upgrade. In imagineRio, two-stage data probing provided users with a thumbnail image when hovering the mouse on map points and then a much larger, light-boxed image when clicking on a point. In instituteRice and diverseLevant, we sought to improve on this by adding a third, intermediate stage which included a medium-sized image. When clicking on a map point in these maps, instead of entering a lightbox, a new data probe appears anchored to the upper right corner, with the associated point on the map (and image view cone) positioned next to it. This was intended to limit the amount of work it takes to enter and exit a lightbox and encourage more rapid exploration while displaying images at a modest size. In addition to this, diverseLevant includes step buttons with the intermediate data probe, giving users the ability to skip directly from one point-image pair to the next and easily tour all of the content on the map. Like the image browser, this upgrade was driven by the prioritization of viewing historical iconography as a primary map use task.
The focus here has been on the desktop, but similar design changes occurred across the mobile versions of the maps as well, to both look and feel and layout. The new image browser on mobile, for example, was added as a modal behind a thumbnail button that floats above the map. The image data probe, which once went straight to a lightbox, now includes a middle step that displays images in the bottom portion of the screen.
We’re in the business of custom cartography, where each project is completely new and different. This was a somewhat unique experience in that three related maps were built using the same map framework. That left design as the main variable. Layout and look and feel changed significantly with instituteRice as some aspects of the design were prioritized differently and a fresh look and feel was needed after five years. We got more efficient between instituteRice and diverseLevant, which shared interface designs much more closely, making for quicker and cheaper development. Even so, our custom cartography instincts kicked in with custom basemaps and other small changes to the user interfaces that gave each of the later maps a distinctive look and feel.
We faced a challenge along the above lines earlier this year when we set out to visualize usage of rotavirus vaccines produced by Merck. Simple-sounding on the surface, it involved some tricky design and back-end work, notably because weekly data by zip code over ten years means more than 17 million data points: a ton of data for a web map to be loading.
First, a brief overview of this map, which is at https://merck.axismaps.io/ or in a video demo below. It shows the percent of eligible children receiving a vaccine each week over approximately ten years at state, county, or zip code level. More detailed numbers are found in a chart at the bottom and by poking around the map. And that’s about it. Simple, right?
This project involved several prototypes to work through design decisions. Although in the end it became a fairly straightforward choropleth and point map, the client and we wanted to explore some map types that we thought might best show the spatial and temporal patterns. Early on we had a request for the map to appear such that the entire United States, even unpopulated areas, are covered, to avoid suggesting that there are areas that the vaccines hadn’t even reached. To this end we tried binning into grid cells, but that comes with a couple of problems.
There are places that zip code tabulation areas don’t touch—because nobody lives there—so ensuring no blank space means a certain minimum grid cell size, which may or may not be a good resolution for the more populated parts of the country.
At one point we experimented with variable cell sizes, where each cell contained approximately the same number of zip codes. Big cells mean sparse population, and small cells mean dense population, where there are a lot of zip codes in a small area. I’m still a little intrigued by this idea, but cartographically the effect is kind of the opposite of the intended representation: all cells are meant to be “equal” in a sense, but the larger, sparser cells carry a lot more visual weight.
A second problem with binning is that it requires aggregations that depend on actually having the necessary data. In this case, we had vaccination rates already aggregated to geographies like zip codes, but we did not have the actual number of vaccinations and the total number of eligible children. Without those, we weren’t able to display actual vaccination rates in a grid. Instead it was something like “percent of zip codes with rates above 50%,” so for example if a cell had 100 zip codes and 40 of them had vaccination rates above 50%, the map would show the cell as 40%. This is a bit too convoluted and may not do a great job at showing real spatial patterns anyway.
Data overload: time
As previously mentioned, weekly data for thousands of geographies over ten years is a boatload of data, way too much for a simple web map to load up front. The default county map would be well over a million values, and that’s one chunky CSV. A more efficient way to handle animated data is to deal with change in values, not values themselves. If a value for a county doesn’t change from one frame to the next, there’s no need to store data for that frame for that county. By pre-processing all the data with some fancy SQL to pull out changes, we can cut down significantly on the amount of data being sent to the map and improve rendering performance.
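That pre-processing lived in SQL, but the change-extraction idea can be sketched in JavaScript (field names are illustrative, not the project’s actual schema):

```javascript
// Given one geography's weekly class values, keep only the weeks where
// the class changes (including the first week). Everything in between is
// implied by the most recent change, so it never needs to be stored or sent.
function toChanges(fips, weeklyClasses) {
  const rows = [];
  let previous;
  weeklyClasses.forEach((cls, week) => {
    if (cls !== previous) {
      rows.push({ week, fips, cls });
      previous = cls;
    }
  });
  return rows;
}
```

Eight weeks of data with two class changes collapses to three rows.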
For states and counties, we use a 10-class equal interval classification, and for zip codes only two classes. Whenever a unit’s vaccination rate moves it from one class to another, we store the date (week), FIPS code, and new class number. If it changes to, say, class 8 and stays that way for six weeks, we don’t end up with six rows of data, but rather just one with the week when the class changed. A snippet of data looks something like this:
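(The rows below are a hypothetical illustration with made-up dates and values, not the project’s actual data.)

```csv
week,fips,class
2014-03-10,06037,8
2014-04-21,06037,6
2014-03-10,06059,4
```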
Detailed data with actual vaccination rates is loaded on demand through a simple API to get values for a specific geographic entity.
To further reduce file size and smooth out the animation, we mapped 12-week rolling averages instead of single-week snapshots. The data tends to be unstable and noisy when and where there were lower populations of eligible children, so our hope was that averaging values over time would present a better picture of trends, while also resulting in fewer rows of change in our final CSV.
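A trailing 12-week average is one way to do that smoothing; a sketch (not the project’s actual code):

```javascript
// Trailing rolling average: each output value averages the current week
// and up to `window - 1` preceding weeks (fewer near the start of the series).
function rollingAverage(values, window = 12) {
  return values.map((_, i) => {
    const slice = values.slice(Math.max(0, i - window + 1), i + 1);
    return slice.reduce((sum, v) => sum + v, 0) / slice.length;
  });
}
```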
Data overload: space
Besides the attribute data load, a national map at the level of something like zip codes means too much geographic data. For one, it’s another file size problem; for another, it’s a legibility problem.
Legibility concerns led us to the zip code point map. At a national scale, even county polygons are pushing it in terms of crowding, and most zip code polygons are definitely too small to be discernible. Thus we make you zoom in pretty far before zip codes resolve to polygon representations; before that they’re shown as centroid points, which are still crowded on a national map but are a bit easier to pick out.
Most of the map is drawn as SVG using standard D3 methods, but the zip code point layer is an exception. This many points, some 33,000, do not perform well as vector graphics and instead are drawn to a canvas element. It means some extra work to account for things like interactivity (we can’t just attach mouse handlers and have to search for nearby points on mouse move), but it’s worth it to avoid completely choking on rendering.
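The mouse-move search can be as simple as a radius-limited nearest-point scan. Brute force is shown here for clarity; with some 33,000 points a spatial index such as d3-quadtree is the usual speedup. (This is a sketch, not the map’s exact code.)

```javascript
// Find the point nearest the mouse, within `radius` pixels, for hover
// interaction on a canvas layer. points: [{x, y, ...}] in screen coordinates.
// Returns null when nothing is close enough.
function nearestPoint(points, mouseX, mouseY, radius = 8) {
  let best = null;
  let bestDist = radius * radius; // compare squared distances; no sqrt needed
  for (const p of points) {
    const dx = p.x - mouseX;
    const dy = p.y - mouseY;
    const d = dx * dx + dy * dy;
    if (d <= bestDist) {
      best = p;
      bestDist = d;
    }
  }
  return best;
}
```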
At the scale where we do show zip code polygons, the problem remains that this is a ton of geographic data. For this we built a simple Node vector tile server that sends the polygons in tile-sized chunks as topojson (and caches them to S3). We calculated and stored zip code centroids in a PostGIS database ahead of time, and can then build each tile by querying for centroids that fall within the tile’s bounds. We use centroids instead of polygon intersections so that each polygon is only drawn on one tile—it’s fine if it spills out into other tiles on the map as in the highlighted example below, but we don’t want it drawn multiple times.
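The tile-to-bounds step is standard slippy-map math; the resulting box can parameterize a PostGIS envelope query against the stored centroids. A sketch (our server’s actual code may differ):

```javascript
// Geographic bounds of web Mercator tile (z, x, y). The result can back a
// query like ST_Contains(ST_MakeEnvelope(west, south, east, north, 4326), centroid).
function tileBounds(z, x, y) {
  const n = 2 ** z; // number of tiles across at this zoom level
  const lon = (tx) => (tx / n) * 360 - 180;
  const lat = (ty) =>
    (Math.atan(Math.sinh(Math.PI * (1 - (2 * ty) / n))) * 180) / Math.PI;
  return { west: lon(x), east: lon(x + 1), north: lat(y), south: lat(y + 1) };
}
```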
On the front end, when the user zooms past a scale threshold, the map switches to a standard web Mercator map (using d3-tile) onto which we can load zip code tiles as the map is panned. (As a bonus we can also easily load reference basemap tiles underneath to help with orientation.)
A few things we learned about animating a ton of data over time:
Animation can be hard to follow, especially with so many data points. Explore ways to aggregate data (both spatially and temporally) that might be better than exact data values at showing trends. They may not work out, but it’s worth investigating.
Instead of loading all values, try loading only changes in values to cut down on file sizes; let exact values be retrieved in smaller doses on demand.
Generalize! Different scales call for different complexities of geometry, and this can go beyond polygon simplification to things like collapsing polygons to points.
Don’t be mystified by vector tiles! It’s not too difficult to make your own vector tiles for excessively detailed geodata.
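The second lesson above, loading only changes in values, amounts to simple delta encoding of each time series. A hypothetical sketch (the `toChanges` / `fromChanges` names and row shapes are invented for illustration): the server ships only the weeks where a value changed, and the client carries values forward to rebuild the full series.

```javascript
// Encode: keep a row only for weeks where the value differs from the
// previous week. Long flat stretches collapse to a single row.
function toChanges(series) {
  const changes = [];
  let prev;
  series.forEach((value, week) => {
    if (value !== prev) {
      changes.push({ week, value });
      prev = value;
    }
  });
  return changes;
}

// Decode: walk week by week, carrying the last seen value forward.
function fromChanges(changes, numWeeks) {
  const series = new Array(numWeeks);
  let value;
  let j = 0;
  for (let week = 0; week < numWeeks; week++) {
    if (j < changes.length && changes[j].week === week) {
      value = changes[j].value;
      j++;
    }
    series[week] = value;
  }
  return series;
}
```

The win depends on how often values actually change; with smoothed 12-week averages, small jitters still count as changes, which is another argument for rounding values before encoding.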
About 4 years ago we wrote a post about setting up a map server with Mapnik and PostGIS. It’s still one of the most popular posts on the site but it’s VERY OLD. I wanted to update it with a slightly easier install method and some newer software. What’s in the stack? I’m glad you asked!
Unlike the previous guide, this one won’t cover basics of Linux and the command line. It’s also written for a Red Hat Enterprise Linux (RHEL) 7.2 server instead of Ubuntu. Let’s do it.
Ever since the San Francisco map sold out over the holidays we’ve been eager to get it reprinted and back up for sale. Of course, before doing so, we couldn’t resist making a few changes to refresh and update the design. The new version, pictured above, is the third in six years. Read down the page for a quick rundown of what’s new, or skip it and go straight to the typographic maps store where you can check out the map of San Francisco and our collection of other typographic cities.
For the past few weeks, we’ve been working through the soft launch of imagineRio, a project we’ve been working on for a couple of years with Rice University. Fun fact: the Portuguese rendering of imagineRio is imagináRio, which translates directly to “imaginary.” There’s more background information about the project on the Rice Humanities Research Center website, but in short, the goal of the project was to create a platform displaying a spatially and temporally accurate reference map of Rio de Janeiro from 1500 to the present day. The current front-end for the project uses these maps to display a range of iconography, including maps, plans, urban projects, and images of the city (with viewsheds).
The project has numerous technical challenges (which of course pale in comparison to the challenge of digitizing all that historical data), but I just wanted to focus on one of them for this post: data probing and feature identification on a raster map. I’ve always considered data probing in the browser to be something exclusive to vector maps. Raster maps are just collections of pixels; we don’t know what features are there, so we can’t interact with them. Usually that’s OK. Interactive maps are vector thematic data on top of raster base tiles, right? Not always (and yes, we’ll talk about vector tiles another time; this project started two years ago):
What if the thing your map is about is the type of thing usually reserved for basemaps (roads, buildings, natural features, etc.)?
What if you need more map rendering oomph (compositing, labels, etc.) than the browser can provide?
What if your dataset is just too big for the browser to handle as vectors?
I wanted to title this post: You Won’t Believe This Cartographer’s 4 Weird Tricks for a Nicer Map. That seemed like a bit much (plus the length of this post got away from me so it’s now more Longreads than Upworthy), but the sentiment isn’t entirely untrue. Design (big-D Design—I would’ve capitalized it even if it didn’t start the sentence) is an intimidating and amorphous topic. Academic cartography provides good guidelines for thematic cartography, but interactivity and user-interface design are often “I know it when I see it” type of things. What follows are 4 quick design concepts and techniques that can be applied in many situations to improve the look and feel of an interactive map.
These concepts were taken from a map we made for the Eshhad project tracking sectarian violence in Egypt. It’s a relatively straightforward map with:
A point dataset with a handful of attributes of various types (date, categories, short / long text, URLs)
A Leaflet implementation with basemap tiles
A responsive design for mobile
These are 3 very common circumstances for an interactive map, which should make these tips transferable to a wide variety of projects.
The overview map uses value-by-alpha to display the results. Each district is colored according to the party that won the most seats, and transparency is controlled by the number of seats won in that district (not the number of seats available). Because Egypt uses a proportional representation system for each district, a party wins seats in proportion to its share of the vote. This leads to lots of ties, especially in the individual results list, where districts are very small, with only 2–4 seats up for grabs and many candidates running unaffiliated with any political party.
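The value-by-alpha styling boils down to two encodings per district: hue for the winning party, alpha for how many seats it won. A hypothetical sketch (`districtStyle`, the placeholder party colors, and the normalization by a maximum seat count are all invented for illustration, not the map’s actual palette or scale):

```javascript
// Value-by-alpha fill: hue identifies the winning party, alpha scales
// with the number of seats that party won, normalized against the
// largest seat count won in any district so alpha stays in [0, 1].
const partyColors = { A: '0,85,164', B: '206,17,38' }; // placeholder rgb values

function districtStyle(winner, seatsWon, maxSeatsWon) {
  const alpha = Math.min(1, seatsWon / maxSeatsWon);
  return `rgba(${partyColors[winner]},${alpha.toFixed(2)})`;
}
```

The effect is that districts where a party won decisively read as saturated color, while narrow or fragmented wins fade toward the background.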