
Visualizing Data Part 3: A Conversation with Parsons School of Design’s Daniel Sauter

Nov 6, 2020


By: Nancy K. Herther, writer, consultant and former Sociology/Anthropology Librarian, University of Minnesota Libraries

Daniel Sauter is an Associate Professor of Art, Media and Technology and Director of Data Visualization at Parsons School of Design of The New School. Sauter teaches “the next generation of creative thinkers” while pursuing his own research into installations and visualizations dealing with the social implications of emerging technologies. His research focuses on how the “computational regime transforms digital identity, geopolitics, and urban spaces.”

NKH: The very concept of “mapping” and the role of “geography” across the disciplines seem to have changed dramatically in recent years. How has this impacted your teaching and the value that information has for your students?


Sauter: Data-driven processes have dramatically increased in recent years, shaping opinion, policy, and decision-making across industries and sectors. To meet the increasing demand for experts, universities are adding degree programs and curricula focused on data visualization and mapping; at Parsons, this is embedded in a leading design school with a close link to social research. We are working with students to integrate design, statistics, and computer science, with the goal of turning data into insight. And we believe that effective and critical data communicators require a holistic understanding of qualitative and quantitative research methods, along with technical skills and ethics training, to create meaningful representations of data.

We visualize quantities, time, relationships, and geographies right from the start, and work with partners such as the UN, the Metropolitan Museum, and the Smithsonian Institution through praxis-based research. This approach makes it instantly clear that symbolic and geographic forms of data representation are equally relevant when considering various ways to visualize a particular corpus or database creatively. In this mapping process, each decision to categorize, classify, and generalize is a distinct act by a subjective author with a particular background and experience. To make those decisions explicit, our approach is to build and share comprehensive mapping tools in response to a particular research question.

This allows for multiple vantage points and further investigation. In a sense, the tools themselves become a comprehensive form of citation that can be validated. Citation has always been key to gauging bias and the validity of an argument. The same holds true for data visualizations. Our students use statistical analysis and machine learning techniques for this work, and the results cannot be validated from a single visual output. Validation requires access to the underlying methods, models, and data. Teaching in this field inherently requires a great deal of lateral thinking, which makes it an incredibly exciting and continuous learning experience, especially working with students from design, statistics, and computer science who are keen on adding at least one new subject area to their career trajectory.
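
As a loose, hypothetical sketch of what a tool serving as a verifiable citation can mean in practice (the file name, checksum placeholder, and chart below are invented for illustration and are not drawn from the Parsons curriculum), a published script can pin its input data by checksum and regenerate the figure so that readers can re-run and validate it:

```python
# Hypothetical sketch: a chart published together with its data and a checksum,
# so readers can re-run the "tool" and validate the visual claim themselves.
import csv
import hashlib

import matplotlib.pyplot as plt

DATA_FILE = "observations.csv"            # published alongside the figure (hypothetical)
EXPECTED_SHA256 = "<published checksum>"  # stated in the figure caption or repository notes

def verify_data(path: str, expected: str) -> None:
    """Fail loudly if the data differs from what the authors cited."""
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    if digest != expected:
        raise ValueError(f"Data mismatch: got {digest}, expected {expected}")

def rebuild_figure(path: str) -> None:
    """Regenerate the chart from the cited data -- the reproducible 'citation'."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    categories = [r["category"] for r in rows]
    values = [float(r["value"]) for r in rows]
    plt.bar(categories, values)
    plt.title("Reproduced from cited data")
    plt.savefig("figure.png", dpi=150)

if __name__ == "__main__":
    verify_data(DATA_FILE, EXPECTED_SHA256)
    rebuild_figure(DATA_FILE)
```

In this reading, the script plus the checked data, not the image alone, carries the citation, which is the sense in which a shared mapping tool can be validated.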

NKH: Research is certainly making clear visualization’s value in helping us all better understand data and information. Mapping has always been an important way to represent information for easy understanding and clear presentation, from cave art to early global maps to the growing use of visual representation promoted by Edward Tufte and so many others over the last 50 years. How do you see this progression?

Sauter: Visualizations are embedded and disseminated within a global media landscape that is increasingly tailored to our individual needs, wants, and biases. To amplify a given talking point, the authority ascribed to maps and data visualizations is too often deployed as a prop and framed as evidence, rather than a means of understanding. As a consequence, the Tufte textbook canon promoting the visual display of quantitative information finds itself in stark contrast to assets designed for the 24-hour news cycle and social media memes. 

Yet, when we hear “flattening the curve”–and imagine a governor standing next to a mountain model that illustrates the progression of a pandemic, or picture a president altering a NOAA hurricane forecast map in “Sharpiegate”–we are under the influence of a data visualization vernacular now entering the collective consciousness. This can be considered progress of sorts, but it also points to the more sobering realization that we have our work cut out for us to achieve clarity and understanding. It takes time and cognitive effort to unpack complex issues and the information pertaining to them, and there is little incentive to slow down. In the information economy, profit from cognitive labor accrues as we scroll, infinitely. This dynamic is likely to remain as data coverage, granularity, and update frequency increase.

The next generation of maps is not directly intended for human consumption but for things, in the IoT and smart city context. The emerging concept of the “digital twin” produces a digital replica of people and things at urban scale, turning passive analysis of captured data into active sensor networks with activity and risk projections calculated in near real time. We are individual actors and data points in this network, who benefit from and participate in this mapping totality, more or less knowingly. While each use case is specific, Vehicle-to-Everything (V2X) communication provides one example of a set of relationships within the smart city paradigm. Autonomous vehicles (AVs) use three-dimensional maps of the built environment and real-time laser imaging, detection, and ranging (Lidar) to navigate, and are designed to communicate with pedestrians, lighting poles, buildings, smart homes, mobile networks, the cloud, and each other, wirelessly and simultaneously.

The resulting maps are computed into sub-second decisions and instructions for the vehicle and for other vehicles around it. As UAVs (unmanned aerial vehicles) start to roam the sky, this type of communication extends into the third dimension. Active human participation in real-time mapping is virtually impossible and is reduced to an abstraction of a computed outcome. There is a great deal of exciting and difficult work to be done by future cartographers and spatial informatics experts in this sector, given the high stakes of the safety benefits and ethical dilemmas built into autonomous driving. Mapping the underlying processes for legal and oversight purposes is just one example of a new frontier in the discipline’s progression.
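
As a purely illustrative sketch of how such maps collapse into sub-second decisions (the sensor model, thresholds, and message fields below are invented for this example and are not drawn from any real V2X standard or vendor stack), a toy per-frame loop might compare live ranging returns against a prior static map and broadcast the outcome to nearby actors:

```python
# Illustrative only: a toy per-frame loop showing how a static map plus live
# ranging returns could be fused into a sub-second driving decision and a
# V2X-style broadcast. Field names and thresholds are hypothetical.
import math
import time
from dataclasses import dataclass

@dataclass
class RangeReturn:
    angle_deg: float   # bearing of the detected point, relative to heading
    distance_m: float  # measured distance to the point

SAFE_CLEARANCE_M = 12.0  # invented threshold for this sketch

def closest_obstacle(returns: list[RangeReturn], static_map: set[tuple[int, int]],
                     position: tuple[float, float]) -> float:
    """Return the distance to the nearest point that is NOT in the prior map."""
    nearest = math.inf
    for r in returns:
        x = position[0] + r.distance_m * math.cos(math.radians(r.angle_deg))
        y = position[1] + r.distance_m * math.sin(math.radians(r.angle_deg))
        if (round(x), round(y)) not in static_map:  # unexpected object, e.g. a pedestrian
            nearest = min(nearest, r.distance_m)
    return nearest

def decide_and_broadcast(returns, static_map, position):
    """One sub-second cycle: sense, compare against the map, decide, tell the network."""
    gap = closest_obstacle(returns, static_map, position)
    decision = "BRAKE" if gap < SAFE_CLEARANCE_M else "PROCEED"
    message = {  # hypothetical V2X-style payload for nearby vehicles and infrastructure
        "timestamp": time.time(),
        "position": position,
        "decision": decision,
        "nearest_obstacle_m": None if math.isinf(gap) else gap,
    }
    return decision, message

# Example frame: one return matches the prior map (a building), one does not (an obstacle).
static_map = {(10, 0)}
frame = [RangeReturn(0.0, 10.0), RangeReturn(45.0, 8.0)]
print(decide_and_broadcast(frame, static_map, (0.0, 0.0)))
```

The point of the sketch is the abstraction Sauter describes: the human never sees the map, only the computed outcome, which is why mapping the underlying processes matters for oversight.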

NKH: Clearly technology is one of the key driving forces for these changes, and open source, open data, and other collaborative options have been essential. Not that the private sector isn’t also using these new tools and technologies as well. As with so many technological advances, once the private sector gets involved, we see monopolies and other ‘negative’ impacts. What about the role of the private sector? Are there any control issues (e.g., ownership of technology, patents, whatever), or is this an area that might be able to just grow and develop organically and openly? Can we control the change, or at least control the availability of open tools?


Sauter: It took a global pandemic for Apple and Google to develop a joint protocol for contact tracing, designed on peer-to-peer principles and built to preserve privacy. The underlying Bluetooth technology is 30 years old and available in virtually all smartphones. This joint initiative is therefore built not on a new technical innovation, but on the requirement to design for interoperability and privacy in order “to help combat the virus and save lives.”
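
A drastically simplified sketch of the peer-to-peer, privacy-preserving idea behind that protocol (the key size, rotation interval, and function names here are illustrative and do not reproduce the actual Apple/Google specification): each phone broadcasts short-lived identifiers derived from a secret daily key, and matching happens entirely on the device once diagnosed users choose to publish their keys.

```python
# Simplified illustration of the peer-to-peer, privacy-preserving idea:
# rolling identifiers derived from a daily secret, broadcast locally, and
# matched on-device. NOT the actual Apple/Google specification.
import hmac
import hashlib
import os

def daily_key() -> bytes:
    """Each phone generates a fresh secret per day; it never leaves the device
    unless the user tests positive and chooses to share it."""
    return os.urandom(16)

def rolling_id(key: bytes, interval: int) -> bytes:
    """Derive a short-lived identifier for a ~10-minute interval from the daily key."""
    return hmac.new(key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

def match(observed_ids: set[bytes], published_keys: list[bytes],
          intervals_per_day: int = 144) -> bool:
    """On-device check: did I ever hear an identifier derived from a published key?"""
    for key in published_keys:
        for interval in range(intervals_per_day):
            if rolling_id(key, interval) in observed_ids:
                return True
    return False

# Phone A broadcasts rolling IDs over Bluetooth; phone B records what it hears.
key_a = daily_key()
heard_by_b = {rolling_id(key_a, i) for i in (42, 43)}  # two nearby intervals

# Later, A tests positive and publishes its daily key; B matches locally.
print(match(heard_by_b, [key_a]))  # True -> possible exposure
```

No central broker ever sees who met whom, which is what makes the design peer-to-peer and privacy-preserving in the sense Sauter describes.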

Almost any digital exchange can be reimagined in P2P networks, sometimes with added efficiency. But this approach does not necessitate centralized information brokers and therefore potentially cuts out profit from cognitive labor and advertising. Having followed peer-to-peer smartphone technology and developed an open-source library for students [ketai.org], I don’t see a chance for this approach to scale, given the prevailing notion of free access to monopolized information infrastructure and social media.

As long as 47 U.S. Code § 230 (“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”) is in place, US-based social media monopolies are held harmless as they claim to be neutral internet companies rather than liable content providers and publishers. They are therefore not held accountable for peddling hate speech, falsehoods, and election propaganda, as other content and news providers in the US are. How would those monopolies ever cease to dominate without regulation comparable to the GDPR?

By “restoring internet freedom” in 2018, the FCC made matters worse, removing “unnecessary regulations” and leaving your internet access to corporate interests. Regulating the persuasive power of social media monopolies by prohibiting “practices that exploit human psychology or brain physiology to substantially impede freedom of choice,” such as “infinite scroll” and “auto-play,” is one attempt to regain control, but it only scratches the surface of the problem at the level of the user interface. It requires a larger recognition that lawmakers are the framers of the digital future. We have realized that we have to get involved in our diets to improve health and environmental impacts for a more sustainable future; my hope is that we get equally involved in our information diets.

Nancy K. Herther is a writer, consultant and former librarian with the University of Minnesota Libraries.
