By Paal Fredrik Skjørten Kvarberg
“The great promise of the Internet was that more information would automatically yield better decisions. The great disappointment is that more information actually yields more possibilities to confirm what you already believed anyway.” — Brian Eno (2017).
Polarization and the public sphere
One of the greatest challenges facing modern democracies is polarization. A group is polarized if it is divided by differences in belief and fails to understand and respect contrary points of view. When a society is polarized, citizens struggle to agree on solutions to the problems they have in common. They lose faith that open discussion and rational debate will lead to a common understanding and an outcome everyone can be satisfied with. Instead, they use the public sphere as a place to throw insults and denigrate the opposition, and work against fellow citizens who hold contrary beliefs in order to force through the solution they believe is best.
The causes of polarization are many, and undoubtedly the most consequential ones stem from deep cultural and political problems to which there are no easy solutions. However, one set of causes concerns the way information flows through society and how we communicate: our skills with language, the way current media platforms and technologies structure communication, and our expectations of content. The recent disruptions in media and digital communication have shown that these contingent factors are malleable, and that technological innovation has the potential to drastically change the public conversation, for better and for worse.
Disputas is a tech startup I co-founded to build technologies that might have a strong impact on the state of the public conversation. We aim to do this through a three-step strategy. First, we develop Ponder, an interactive educational platform for text analysis and critical thinking. Then we develop Hylas, an AI-powered assistant for critical literacy and persuasive writing. Finally, we create the Web of Belief, a logical knowledge graph for organizing thoughts in terms of their logical interconnections. These technologies build on each other, and there are deep affinities between them. In what follows, I’ll do my best to explain the thinking behind these ideas, and how they might improve the public conversation.
In recent years, there has been a massive effort to address the problem of fake news through fact-checking. For polarization, however, the way we communicate is more consequential than whether the things we say are true. Deliberately vague or misleading language, personal attacks, and rhetorical maneuvers are now common in debates, and very few seem to possess the skills needed to express nuanced ideas in an understandable and convincing manner. See https://www.logikksjekk.no/ for a selection and explanation of uses and abuses of language in debate from the Norwegian public sphere.
Efforts are being made to spread these skills in humanities courses in K-12 and at universities, but the effectiveness of such courses is limited. According to theorists, the most important limiting factor is the lack of active practice on the part of students. In language-driven courses, material is typically delivered as text, and assignments as essays. Reading and evaluating such essays is complicated and time-consuming, and university funds are limited. As a result, there is virtually no practice and feedback in humanities courses, even though pedagogical science clearly shows that practice with feedback is the most effective way to learn new skills.

Some creative professors tried to address this problem in the early 2000s through technological innovation. They developed pedagogical software for visualizing the contents of a text analysis. With these programs, students could finish assignments, and educators could evaluate them, much faster than before, which opened space for practice-oriented courses. Some of these courses showed extremely good results, reliably yielding three to four times better learning outcomes on standardized tests than courses using traditional methods (Twardy 2004; Grant 2020). The creators of these programs soon hid them behind expensive paywalls, and despite the great potential inherent in such educational technologies, especially for feedback automation, they were never developed to their full potential. Most available services are expensive and rely on the same technologies and ideas that were developed in the early 2000s.
Inspired by the results of these initiatives, we started developing Ponder, a free platform for active text analysis (an early beta is available at our website https://disputas.no/). We aim to bring the technology needed for practice-oriented courses to the humanities subjects that are essential for effective communication and critical thought. The Ponder interface has three fields: a text field for reading and interacting with words and sentences, a mid-section listing the units of the analysis, and a diagram visualizing the contents of the analysis graphically. Students mark parts of the text and assign categories to the fragments, to demonstrate their understanding of the roles that distinct words and sentences play in relation to each other and to the text as a whole. Teachers can conduct an analysis beforehand, and Ponder can use this analysis, together with prior assignments, to automatically assess and give feedback to students who attempt to analyze the same text. There are many finer-grained features to enhance the user experience, but the most important takeaway is that we use visualizations and interactive game-like features to accommodate the expectations of the modern generation. If all goes well, we will make thoughtful interaction with abstract scientific and political ideas in complex texts as engaging and addictive as computer games.
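Under the hood, this kind of automatic assessment can be done by comparing a student's labelled text spans to the teacher's reference analysis. Here is a minimal sketch in Python, assuming analyses are stored as labelled character spans; the `Span` format, the category names, and the overlap-based matching rule are our own illustration, not Ponder's actual implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Span:
    start: int      # character offset where the marked fragment begins
    end: int        # character offset where it ends (exclusive)
    category: str   # e.g. "premise", "conclusion", "rebuttal"

def overlap(a: Span, b: Span) -> float:
    """Fraction of the shorter span covered by the intersection."""
    inter = max(0, min(a.end, b.end) - max(a.start, b.start))
    return inter / min(a.end - a.start, b.end - b.start)

def feedback(student: list[Span], teacher: list[Span], threshold: float = 0.5) -> list[str]:
    """Compare a student's analysis against the teacher's reference analysis."""
    notes = []
    for t in teacher:
        match = next((s for s in student if overlap(s, t) >= threshold), None)
        if match is None:
            notes.append(f"Missed a {t.category} at {t.start}-{t.end}.")
        elif match.category != t.category:
            notes.append(f"Span {match.start}-{match.end}: labelled "
                         f"{match.category!r}, expected {t.category!r}.")
    return notes

teacher = [Span(0, 40, "premise"), Span(41, 80, "conclusion")]
student = [Span(2, 38, "premise"), Span(41, 80, "premise")]
print(feedback(student, teacher))
```

On the example, the comparison accepts the slightly offset premise span but flags the mislabelled conclusion. A production system would also flag spurious student spans and grade partial matches; the threshold rule above is only meant to show the shape of the comparison.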
We also have a non-pedagogical motive with Ponder, which is part of our threefold strategy. We designed Ponder so that student and teacher interaction results in user-generated linguistic training data: the type of data that can be used to develop systems for natural language processing (NLP). Our plan for the Ponder dataset is to develop Hylas, an intelligent assistant for critical literacy and persuasive writing. If this scheme works out, we can use the data from Ponder to teach Hylas the same linguistic skills that are taught in the courses where Ponder is applicable.
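To make the idea of user-generated training data concrete, one standard way such span annotations could be converted into training examples for sequence-labelling NLP models is token-level BIO tagging. The span format and labels below are hypothetical; this only illustrates the kind of dataset Ponder's interactions could yield:

```python
def bio_tags(text: str, spans: list[tuple[int, int, str]]) -> list[tuple[str, str]]:
    """Convert labelled character spans into (token, BIO-tag) pairs,
    a standard format for training sequence-labelling NLP models."""
    tokens, pos = [], 0
    for word in text.split():
        start = text.index(word, pos)
        end = start + len(word)
        tag = "O"  # default: outside any annotated span
        for s, e, label in spans:
            if start >= s and end <= e:
                tag = ("B-" if start == s else "I-") + label
                break
        tokens.append((word, tag))
        pos = end
    return tokens

text = "Crime rose therefore we need more police"
spans = [(0, 10, "PREMISE"), (21, 40, "CONCLUSION")]
print(bio_tags(text, spans))
```

Each marked fragment becomes a run of B-/I- tags, so every completed Ponder assignment would contribute one more labelled sequence to the dataset.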
The current plan is to deploy Hylas as a plugin, similar to the implementations of grammar-checking software like Grammarly. When installed, Hylas can be activated to identify the main lines of reasoning in a text, reveal rhetorical ploys, and highlight vague or misleading language. Hylas can also be used to produce highly relevant summaries of a text, and to visualize its logical structure for a quick overview.
Hylas facilitates a better public debate by making it easier for everyone to distinguish between the uses and abuses of language in argumentation. With Hylas, writers are empowered to write clearer and more sensible texts, and readers have an easier time understanding them. Perhaps the greatest path to impact on the quality of public debate, however, is the apprehension Hylas would instill in debaters who knowingly mislead or use inflammatory language in a subversive manner. The mere existence of Hylas means that attempts to favor devious rhetorical ploys over straightforward argumentation could always be exposed by an objective machine, which gives everyone an additional incentive to think carefully about the way they argue. Hylas could also be used to extract large amounts of structured information from a database or the internet. It could, for instance, read everything written by an author and note how often that author employs fallacies of a particular kind, ad hominem arguments, or rhetorical tropes.
We could also use Hylas to digest all the content in a database, or on the internet, and find every written argument that bears on a particular issue. Using this technique, we can recover and systematize content so that people have an easier time finding the considerations they should take into account when deciding on a difficult issue. For decision-making, political, scientific, and otherwise, we think this could be a game-changer.
The Web of Belief
One of the target subjects for Ponder is logic and critical thinking. A central activity in these courses is argument analysis: the practice of identifying, interpreting, reconstructing, and evaluating the reasoning in a text. Ponder offers a unique set of functions for the analysis of reasoning, including cutting-edge, research-backed Bayesian approaches to evaluation and updating. Highlights include two simple step-by-step procedures for estimating and quantifying the likely truth of claims and the strength of inductive inferences. Here we rely on research into effective methods of estimation and judgment under uncertainty (Tetlock & Gardner 2016), and into argument schemes and the conditions for good inductive inferences (Walton et al. 2008). These ideas have complex underpinnings, but for all practical purposes they are easy to grasp and use: Ponder asks you how likely you think it is that a particular statement is true, and how strong the arguments supporting it are, through an intuitive click-based interface, and it offers guidance in the form of checklists and suggested methods of analysis.
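To illustrate the kind of quantification involved, one natural formalization treats the user's estimate of a claim's likely truth as a prior probability and the strength of an argument as a likelihood ratio, updating in odds form. The numbers and the odds-form rule below are our illustration of the general Bayesian approach, not necessarily Ponder's exact procedure:

```python
def update(prior: float, likelihood_ratio: float) -> float:
    """Bayesian update in odds form:
    posterior_odds = prior_odds * likelihood_ratio."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# A reader thinks a claim is 40% likely to be true, then encounters an
# argument for it that she judges moderately strong (likelihood ratio 3:1).
posterior = update(0.40, 3.0)
print(round(posterior, 2))  # → 0.67
```

A weak argument (ratio near 1) barely moves the estimate, while a strong one (ratio of 10 or more) moves it a lot, which matches the intuition the click-based interface is meant to capture.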
When a user evaluates the reasoning in a text, the evaluated claims and arguments are automatically organized in a graphical visualization that reveals the logical structure of that text. This graph expands when the user evaluates the reasoning in another text in which at least one claim is relevant to the truth of a claim in the first: the two graphs merge to form a larger interconnected graph. By reading more about a subject, the user gradually expands the graph into what we may call a logical knowledge graph. The weights representing the beliefs of the user are automatically and continually updated as the user reads and evaluates content that is relevant to her prior beliefs. As tweets, comments, news entries, academic articles, books, and reports concerning all that the user cares and thinks about are integrated into the graph, it gradually turns into a visual representation of her network of beliefs. A map of her mind.
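The merging behaviour can be sketched with a toy graph representation: two analyses join wherever they share a claim. The identifiers, claims, and structure below are our own illustration:

```python
# A tiny argument-graph representation: claims are nodes, and each edge
# (source, target, weight) records how strongly one claim supports another.
def merge(graph_a: dict, graph_b: dict) -> dict:
    """Merge two argument graphs; shared claim ids become the join points."""
    return {
        "claims": {**graph_a["claims"], **graph_b["claims"]},
        "edges": graph_a["edges"] + graph_b["edges"],
    }

text1 = {
    "claims": {"c1": "CO2 emissions are rising", "c2": "Warming will continue"},
    "edges": [("c1", "c2", 0.8)],
}
text2 = {
    "claims": {"c2": "Warming will continue", "c3": "Sea levels will rise"},
    "edges": [("c2", "c3", 0.7)],
}
web = merge(text1, text2)
print(sorted(web["claims"]))  # c2 appears once: the two analyses join there
```

Because claim "c2" occurs in both analyses, the merged graph now connects "c1" to "c3" through it, which is exactly how separate texts knit together into one web.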
We call our variant of this structure the Web of Belief, a term coined by W. V. O. Quine and J. S. Ullian (1970). It is a knowledge management tool that can be used to organize one’s thoughts, and to see the logically interrelated nature of ideas. We realize that analyzing a text is laborious, and that most individuals wouldn’t register their considered view of the reasoning in a text if they had to analyze it fully themselves. However, if we can develop Hylas to a high level of performance, the process would be much simpler: the user would only have to give, through a click-based interface, a quantitative measure of how likely they think the main claims of the text are to be true, and how strong the key arguments are. This shouldn’t take more than a few seconds. Given the upsides of having a logical knowledge graph for keeping track of one’s thoughts, we are quite confident that modern users, at least the nerdier percentiles, would be willing to spend some time registering quantitative measures of their beliefs when reading and researching issues they care about.

The greatest upside of organizing beliefs in a logical knowledge graph is that all beliefs are automatically updated once a new belief is registered in the system. The weights on the inferential relationships between beliefs can carry information from one end of the graph to the other, which makes updating effortless. Moreover, the Web can exploit the data registered by other people to nudge the user towards improved thinking. We envisage a series of functions to nudge users towards epistemic virtues like open-mindedness, modesty, consistency, integrity, and fairness. For instance, one could make a function that highlights conflicting beliefs in the web, so that the user could arbitrate between them and gain a more consistent view as a result.
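The automatic updating can be sketched with a deliberately simplified propagation rule: when a claim's credence changes, each downstream claim moves toward its supporter's credence in proportion to the edge weight. A real system would use proper Bayesian-network inference; this toy version also assumes an acyclic graph, and all names and numbers are our own illustration:

```python
from collections import deque

def propagate(credence: dict, edges: list, updated: str) -> dict:
    """Push a changed credence through weighted support links, breadth-first.
    Simplified rule: each downstream claim moves toward its supporter's
    credence in proportion to the edge weight. Assumes an acyclic graph."""
    queue = deque([updated])
    while queue:
        src = queue.popleft()
        for s, t, w in edges:
            if s == src:
                old = credence[t]
                credence[t] = old + w * (credence[src] - old)
                if abs(credence[t] - old) > 1e-6:
                    queue.append(t)  # this claim changed; update its dependents
    return credence

# The user raises her credence in c1; support flows on to c2 and then c3.
credence = {"c1": 0.9, "c2": 0.5, "c3": 0.5}
edges = [("c1", "c2", 0.8), ("c2", "c3", 0.7)]
propagate(credence, edges, "c1")
print(credence)  # c2 ≈ 0.82, c3 ≈ 0.72
```

The point of the sketch is the ripple effect: registering one new evaluation adjusts every belief that is inferentially downstream of it, with no further effort from the user.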
One could also imagine a function that lets users know if they tend to read (and register) only the books that like-minded people read. It could show how one’s own thinking resembles or differs from that of friends, scientists, and political parties. Most importantly, it could show exactly where a disagreement lies, and which claims are worth examining in order to arbitrate the difference. On the basis of this type of data, one could write functions that ask the user whether she is open to examining an argument that people with an opposing point of view tend to find important. The system could also notify the user when a researcher has recently given an argument in an article that persuaded most of those who examined it, even if few have seen it yet. We won’t list all the possibilities here, but we hope this small selection shows that a system like this has great potential to counteract tendencies towards polarization.
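The "exactly where the disagreement lies" function could be as simple as ranking shared claims by how far apart two users' registered credences are. The claims and numbers below are, of course, made up for illustration:

```python
def disagreement(user_a: dict, user_b: dict, top: int = 3) -> list:
    """Rank the claims both users have registered by how far apart
    their credences are, pointing at where a disagreement lies."""
    shared = set(user_a) & set(user_b)
    ranked = sorted(shared, key=lambda c: abs(user_a[c] - user_b[c]), reverse=True)
    return [(c, round(abs(user_a[c] - user_b[c]), 2)) for c in ranked[:top]]

alice = {"tax cuts help growth": 0.8, "CO2 causes warming": 0.95, "AI risk is real": 0.6}
bob   = {"tax cuts help growth": 0.2, "CO2 causes warming": 0.9,  "AI risk is real": 0.5}
print(disagreement(alice, bob))
```

Here the two users largely agree on warming but are far apart on tax policy, so that is the claim the system would suggest they examine first.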
The main purpose of this text is to show, by way of example, that there is a huge potential to do something about polarization through technological innovation. At Disputas we are currently giving this a shot. Most projects like this fail, so the odds are stacked against us. However, we are confident that a project like ours will succeed in the near future, and the world needs this to happen sooner rather than later.
If you have ideas or want to contribute in this space, we would be happy to have a chat. Reach me on email@example.com.
Eno, B. (2017). The quote is from Brian Eno’s explanation of his answer to the 2017 question on Edge.org: “What scientific term or concept should be more widely known?”, to which he answered ‘confirmation bias’. Link: https://www.edge.org/response-detail/27147.
Grant, D. L. (2020). “The Impact of Guided Practice in Argument Analysis and Composition via Computer Assisted Argument Mapping Software on Students’ Ability to Analyze and Compose Evidence-Based Arguments” (doctoral dissertation). Link: https://scholarcommons.sc.edu/etd/6079.
Quine, W. V. O. & Ullian, J. S. (1970). The Web of Belief. New York: Random House.
Tetlock, P. & Gardner, D. (2016). Superforecasting. New York: Random House.
Twardy, C. (2004). “Argument maps improve critical thinking.” Teaching Philosophy, 27(2): 95–116.
Walton, D., Reed, C., & Macagno, F. (2008). Argumentation Schemes. Cambridge: Cambridge University Press.