21.11.2017; Talk
Analyzing processes of transformation and representation in modern web maps using a reverse engineering approach
01069 Dresden
Critically analyzing maps produced by others has long been an important part of cartographers' work. In the age of online publishing and sharing through social media, with its daily influx of countless maps and geovisualizations of unknown origin, verifying the reliability, trustworthiness and adherence to cartographic best practices of such "neocartographic" products is arguably as important as ever, but comes with new challenges. Both the sheer quantity of maps coming online every day and the new qualities of these cartographic products (interaction, dynamically generated content, maps being updated or going offline, etc.) make traditional cartographic analysis on a case-by-case basis appear insufficient to keep track of the ever-evolving practice of online mapmaking.
With my work, I am trying to exploit the underlying structures of digital online maps to develop new tools for analyzing the processes of transformation and representation at work in them. Modern digital maps are often delivered not as static images but as data accompanied by program code that instructs the recipient's computer how to render the map and, potentially, how to dynamically alter its appearance over time (through animation, interaction, sensor readings, etc.). By applying automated reverse engineering techniques to such cartographic programs, it is possible to trace each cartographic mark a program generates (geometry, color, text, etc.) back through the code, which contains the rules for how the marks are created, to its origins in either data or code.
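To make the idea of tracing marks back to their origins concrete, here is a minimal, hypothetical sketch (not the actual framework, and with invented names such as `TracingCanvas` and `render_map`): the drawing primitive is instrumented so that every mark records both the data record it represents and the code location that produced it.

```python
import inspect

class TracingCanvas:
    """Records, for every mark drawn, the source data record and the
    code location (caller frame) that produced it -- a toy stand-in
    for instrumenting a real map-rendering library."""
    def __init__(self):
        self.marks = []

    def draw_circle(self, x, y, color, source_record=None):
        caller = inspect.stack()[1]  # the frame that called this primitive
        self.marks.append({
            "type": "circle", "x": x, "y": y, "color": color,
            "source_record": source_record,
            "origin": f"{caller.function}:{caller.lineno}",
        })

# A hypothetical "cartographic program" under analysis:
# it both filters the data and applies a symbolization rule.
def render_map(canvas, records):
    for rec in records:
        if rec["population"] < 1000:                             # filtering rule
            continue
        color = "red" if rec["population"] > 50000 else "blue"   # symbolization rule
        canvas.draw_circle(rec["lon"], rec["lat"], color,
                           source_record=rec["name"])

data = [
    {"name": "A", "lon": 13.7, "lat": 51.05, "population": 550000},
    {"name": "B", "lon": 13.9, "lat": 51.10, "population": 400},
    {"name": "C", "lon": 13.6, "lat": 51.00, "population": 12000},
]
canvas = TracingCanvas()
render_map(canvas, data)
for m in canvas.marks:
    print(m["source_record"], m["color"], m["origin"])
```

Each recorded mark now points back both to the data entity it represents and to the line of code whose rule created it, which is the kind of provenance the proposed analysis relies on.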
Supported by such a technical framework, the cartographer can be assisted in answering a number of questions that could not be answered fully, or as rapidly, using traditional cartographic analysis methods. Among the questions to investigate with the proposed methods are: Which entities from the data are represented on the map, and which are potentially filtered out? Which rules govern the symbolization of data and entities on the map? Where in the code are these rules defined, and who wrote that code? Which cartographic transformations (projection, generalization, classification, symbolization, etc.) are applied, and are they in line with established cartographic best practices? With a framework in place that can analyze the cartographic code of a map with regard to these questions, new practical tools can be envisioned to aid cartographers, as well as non-experts, in improving their understanding of digital maps. The analysis method and tools developed have the potential to advance our understanding of the cartographic quality and integrity of online maps, and to contribute to the current wider debate on the ethics and accountability of algorithms.
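As a minimal, hypothetical illustration of the first question (which entities are represented and which are filtered out), assume a tracing step has already tagged each rendered mark with the id of the data record it came from; the invented names and data here are for illustration only. Comparing the set of source ids against the set of represented ids then exposes the filtered-out entities directly:

```python
# Hypothetical traced output: each rendered mark carries the id of the
# data record it was generated from, as a provenance-tracing step might emit.
data_ids = {"A", "B", "C", "D"}
traced_marks = [
    {"mark": "circle", "source_id": "A"},
    {"mark": "circle", "source_id": "C"},
]

# Entities present in the data but absent from the rendered map.
represented = {m["source_id"] for m in traced_marks}
filtered_out = data_ids - represented
print(sorted(filtered_out))  # → ['B', 'D']
```

A real analysis would of course operate on far larger datasets and on marks extracted from live web maps, but the set difference above captures the core of the comparison.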