
Reasoning with Imperfect Information in Social Settings 

26th-28th October 2023, Scuola Normale Superiore, Pisa, Italy

CFA deadline: 14th July, 23:59 AoE.

Location: 

26th-27th: Palazzo del Castelletto, Aula Dini (Piazzale Luciano Lischi 11)

28th: Palazzo della Carovana, Aula Stemmi (Piazza dei Cavalieri 7)

 

The assessment and understanding of fragmented and vague information has become an increasingly pressing issue in social deliberations, as dramatically demonstrated during the COVID-19 pandemic.
Rational agents and decision-makers must navigate inconsistent and partial information while striving to organise it coherently, and shared information in communication requires agents to continually update their beliefs. Logical methods provide a natural toolkit for formally analysing these complex phenomena.
The workshop "Reasoning with Imperfect Information in Social Settings", organised by the Scuola Normale Superiore, aims to bring together researchers in logic, formal and social epistemology, and computer science who are exploring the intricacies of information dynamics in social scenarios.
Relevant topics include:

  • Belief revision and merging;
  • Judgement and preference aggregation;
  • Multi-agent non-monotonic reasoning;
  • Multi-agent epistemic and deontic logic;
  • Formal methods for representing social epistemic attitudes.

 

Invited Speakers 

  • Marcello D’Agostino (Università di Milano)
  • Gabriella Pigozzi (Université Paris-Dauphine)
  • Christian Straßer (Ruhr University Bochum)
  • Réka Markovich (Université du Luxembourg)

 

Abstract submission

To submit, please send an anonymised abstract of 800-1000 words (excluding references) to pietro.vigiani@sns.it no later than July 14th, 23:59 AoE. Notification of acceptance will be sent by August 15th. Each session will last 45 min., comprising a 30 min. presentation followed by 10 min. of discussion. Individuals belonging to underrepresented groups in the field of logic are strongly encouraged to submit their work.

 

Programme

Thursday 26, 15.00-19.00.  (Palazzo del Castelletto, Aula Dini)

  • 15.00-16.15. Invited speaker: Marcello D'Agostino.
  • 16.15-17.00. Paolo Baldi.
  • Coffee Break.
  • 17.30-18.15. Francesca Doneda.
  • 18.15-19.00. Andrea Sabatini.

 

Friday 27, 9.00-13.00. (Palazzo del Castelletto, Aula Dini)

  • 9.00-10.15. Invited speaker: Gabriella Pigozzi.
  • 10.15-11.00. Carlo Proietti.
  • Coffee Break.
  • 11.30-12.15. Felix Kopecky.
  • 12.15-13.00. Wojtek Jamroga.

 

Friday 27, 15.00-19.00. (Palazzo del Castelletto, Aula Dini)

  • 15.00-16.15. Invited speaker: Christian Straßer.
  • 16.15-17.00. Lorenzo Rossi and Caterina Sisti.
  • Coffee Break.
  • 17.30-18.15. Jan Sprenger.
  • Social dinner. (Palazzo della Carovana, Chiostro)

 

Saturday 28, 9.00-13.00. (Palazzo della Carovana, Aula Stemmi)

  • 9.00-10.15. Invited speaker: Réka Markovich.
  • 10.15-11.00. Matteo Michelini.
  • Coffee Break.
  • 11.30-12.15. Lorenzo Casini.
  • 12.15-13.00. Chris Fermüller.
  • Social lunch. (Palazzo della Carovana, Chiostro)

 

Invited Talks' Abstracts

Marcello D'Agostino (Università di Milano)

Tractable Depth-Bounded Approximations to FDE and its Satellites.

Abstract: In the late 1970s Dunn and Belnap gave an interesting interpretation of first-degree entailment (FDE), a fragment of relevance logic, in terms of intelligent database management or question-answering systems. Databases have a great propensity to be incomplete and to become inconsistent: what is stored in a database is usually obtained from different sources, which may provide only partial information and may well conflict with each other. For a matrix to characterize a logic adequate for making deductions with information that might be both inconsistent and partial, at least 4 different values are needed [see 2]. An elegant 4-valued matrix is precisely Belnap-Dunn’s. Similar informational interpretations can be given for the closely related Logic of Paradox (LP) and Kleene's (strong) 3-valued logic (K3). However, FDE, LP and K3 are all co-NP complete, and are therefore idealized models of how an agent can actually think. We address this issue by shifting to signed formulae, where the signs express "imprecise" values associated with two bipartitions of the corresponding set of standard values. We present proof systems whose operational rules are all linear and which have only two structural branching rules, expressing a generalized Principle of Bivalence. Each of these systems leads to defining an infinite hierarchy of tractable approximations to the respective logic, in terms of the maximum number of allowed nested applications of the two branching rules. Further, each resulting hierarchy admits of an intuitive 5-valued non-deterministic semantics.
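As background for readers less familiar with this setting (the following is the standard textbook presentation of the Belnap-Dunn matrix, not material from the talk), the four values are read as "told true only" (t), "told false only" (f), "told both" (b) and "told neither" (n); t and b are designated, conjunction is the meet in the truth ordering, disjunction its dual, and negation swaps t and f while fixing b and n:

\[
\begin{array}{c|cccc}
\wedge & \mathbf{t} & \mathbf{b} & \mathbf{n} & \mathbf{f} \\ \hline
\mathbf{t} & \mathbf{t} & \mathbf{b} & \mathbf{n} & \mathbf{f} \\
\mathbf{b} & \mathbf{b} & \mathbf{b} & \mathbf{f} & \mathbf{f} \\
\mathbf{n} & \mathbf{n} & \mathbf{f} & \mathbf{n} & \mathbf{f} \\
\mathbf{f} & \mathbf{f} & \mathbf{f} & \mathbf{f} & \mathbf{f}
\end{array}
\qquad
\neg\mathbf{t}=\mathbf{f},\quad \neg\mathbf{f}=\mathbf{t},\quad \neg\mathbf{b}=\mathbf{b},\quad \neg\mathbf{n}=\mathbf{n}
\]

FDE entailment then requires preservation of designated values: A entails B iff v(A) ∈ {t, b} implies v(B) ∈ {t, b} for every valuation v; LP and K3 are the restrictions to {t, b, f} and {t, n, f} respectively.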

 

Réka Markovich (Université du Luxembourg)

Deontic Logic for Epistemic Rights.

Abstract: Agents’ epistemic states regarding some propositions can be multifarious. We can believe (in) something, we might know it, or we might merely wish to do so – for instance, in order to be able to reason about a more complete data set. And there are legal (and moral) settings in which the deontic status of these epistemic states governs normative relations between agents. The Deontic Logic for Epistemic Rights is an ongoing project investigating which deontic, epistemic, and action logics we need in order to reason about epistemic rights as normative positions. In this talk, I will introduce the background of this investigation (including the theory of normative positions), the different approaches we have applied, and some models we have developed of epistemic rights such as the right to know or the freedom of thought.
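For orientation, one textbook Kanger–Hohfeld rendering of a claim-right (a background sketch, not necessarily the formalism used in this project) combines an obligation operator O with an agency operator E_b ("b sees to it that"); an epistemic right arises when the content is itself epistemic, e.g. a knowledge formula K_a ψ:

\[
\mathrm{Claim}(a,b,\varphi) \;\leftrightarrow\; O\,E_b\,\varphi
\qquad\text{e.g.}\qquad
\mathrm{Claim}(a,b,K_a\psi) \;\leftrightarrow\; O\,E_b\,K_a\psi
\]

Read: a's claim against b that φ is correlative to b's duty towards a to bring φ about; in the epistemic case, b owes it to a that a comes to know ψ (a "right to know").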

 

Gabriella Pigozzi (Université Paris-Dauphine)

An Agent-Based Model of MySide Bias in Scientific Debates.

Abstract: We present an argumentative agent-based model for studying the impact of ‘myside bias’ on the argumentative dynamics in scientific communities. According to recent insights in cognitive science, scientific reasoning is influenced by ‘myside bias’ – a tendency to prioritize the search for and generation of arguments that support one’s views rather than arguments that would undermine them, and to apply more critical scrutiny to opposing stances than to one’s own. Although myside bias may pull individual scientists away from the truth, its effects on communities of reasoners remain unclear, since Mercier and Heintz argue that specific socio-epistemic mechanisms may mitigate its negative impact. The aim of our model is two-fold: first, to study the argumentative dynamics generated by myside bias, and second, to examine the hypothesis that shared beliefs among scientists may act as a mitigating factor against the pernicious effects of the bias.

This is joint work with Louise Dupuis de Tarlé, Matteo Michelini, Annemarie Borg, Gabriella Pigozzi, Juliette Rouchier, Dunja Šešelja and Christian Straßer.

 

Christian Straßer (Ruhr University Bochum)

Towards Deontic Explanations: Deontic Argumentation Calculi.

Abstract: When reasoning with norms one is usually not merely interested in knowing whether an obligation holds, but also in why it holds. Answers to such why-questions are deontic explanations. They not only lead to a better understanding of normative reasoning, they also motivate compliance and collaboration. The study of explanation is growing rapidly but, so far, little to no formal work has been done at the intersection of explanation and normative reasoning. What is more, existing approaches in deontic logic do not make explicit the reasons why certain obligations do and do not hold given a normative system, and are therefore not yet suitable for deontic explanations. In this presentation, Deontic Argumentation Calculi (DAC) are introduced: sequent-style proof systems tailored to the construction of deontic explanations. DAC generate transparent arguments that provide explicit reasons why certain obligations hold and why certain norms are inapplicable. Formal argumentation frameworks instantiated with DAC arguments are sound and complete with respect to the nonmonotonic inference relation of the class of constrained Input/Output logics, and have close relations to (deontic as well as disjunctive) Default Logic. It is demonstrated how to employ DAC to generate deontic explanations using general tools from formal argumentation.

Joint work with: Kees van Berkel.
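As background on the Input/Output framework mentioned above (a sketch of the standard definitions of Makinson and van der Torre, not of the DAC calculi themselves), norms are pairs (a, x) read "if a, then obligatory x", and the simplest output operation detaches obligations from an input A:

\[
out_1(N,A) \;=\; Cn\bigl(\{\,x : (a,x)\in N \text{ for some } a\in Cn(A)\,\}\bigr)
\]

Constrained Input/Output logic then restricts attention to maximal subsets N' ⊆ N whose output is consistent with the given constraints (typically the input itself). This is what makes the resulting inference relation nonmonotonic and raises exactly the explanatory question the talk addresses: which norms were applied, which were set aside, and why.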

 

Contributed Talks' Abstracts

Paolo Baldi (Università del Salento) and Hykel Hosni (Università di Milano).

Logic-based Approximations of Evidence. 

Abstract: We provide a general framework for addressing some of the criticisms of Savage's approach to the foundations of decision theory. In particular, we investigate the relation between the Sure-Thing Principle, one of the main axioms for preference among acts in Savage, and reasoning by cases in logic. We introduce a hierarchy of preference structures, based on the ideas of Depth-Bounded Boolean logics, which suitably restrict the use of reasoning by cases, and show that our structures provide a reasonable account of those scenarios where Savage's Sure-Thing Principle prescribes counterintuitive preferences. We then investigate the conditions under which our preference structures give rise, in the limit, to a qualitative probability, which is representable by a finitely additive probability measure.
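To fix ideas (one common informal statement of the principle, not the authors' exact axiomatization), the Sure-Thing Principle licenses exactly the case-based pattern the abstract refers to:

\[
f \succsim_E g \ \text{ and } \ f \succsim_{\overline{E}} g \quad\Longrightarrow\quad f \succsim g
\]

That is, if act f is weakly preferred to g both on the supposition that event E obtains and on the supposition that it does not, then f is weakly preferred to g outright. This mirrors reasoning by cases in logic (from Γ, A ⊢ C and Γ, ¬A ⊢ C infer Γ ⊢ C), which is precisely the inference pattern that Depth-Bounded Boolean logics restrict.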

  

Lorenzo Casini (IMT School for Advanced Studies) & Jürgen Landes (Università di Milano).

Bias, Conflict of Interest, and the Principle of Total Evidence.

Abstract: Randomized controlled trials (RCTs) are often treated as the gold standard of medical research (Sackett et al. 1996). Advocates of "evidence-based medicine" hold that RCTs are at the top of the quality-of-evidence hierarchy and preach the “best evidence” view (Slavin 1986) that only top-quality studies ought to be considered. Yet, RCTs are also criticized for being subject to biases (e.g., small size, inadequate blinding) that too often make them less than perfectly reliable (Worrall 2002, Ioannidis 2005). In this light, should one consider other sources of evidence? In the Variety-of-Evidence literature, it has been argued that all sources of evidence can in principle be useful, no matter their reliability (Osimani and Landes 2023). Meanwhile, it has been observed that most medical trials suffer from conflicts of interest, such as sponsorship by pharmaceutical companies (Roseman et al. 2011). Available reviews suggest both that conflicts of interest raise the probability of biased estimates and that studies subject to conflicts of interest are more reliable in virtue of their better design and quality (Lexchin et al. 2003). Conflicts of interest can therefore have an ambiguous effect on medical results. So the question arises: would one here, too, benefit from considering all sources of evidence? Drawing on a suggestion by Fuller (2018), we build a Bayesian model to shed light on the matter. In particular, we assess whether studies subject to conflicts of interest can improve confirmation (1) despite their ambiguous effects and (2) despite the uncertainty about such effects and about the very presence of a conflict of interest. Finally, we consider, given our modelling constraints, (3) which kind of evidence, quality or conflict of interest, is more relevant to confirmation, or, put differently, whether one may be more justified in neglecting one rather than the other.

 

Francesca Doneda (Università di Milano).

A many-valued system for assessing trust in information contents under uncertainty. 

Abstract: In contexts such as expert debates, with uncertain and constantly updated information, the ground truth is often unavailable or undetermined, which makes it impossible to check claims. This problem has emerged most recently and clearly during the COVID-19 pandemic, where the debate has often presented strongly polarised positions held by well-respected medical experts. A characteristic of this and similar debates is that they do not square well with the requirements of standard fact-checking practices. Hence, it is clear that new formal systems to model these communication dynamics are required. A first peculiarity of debates on topics concerning (yet) uncertain truths is that participants seldom support their own opinions with complete certainty. In public debates or debates among experts, it is very common to come across uncertain opinions, affirmed with a particular degree of confidence. From these considerations emerges the need to introduce a language that enables us to manage nuances in opinions. Technically, the extension internalizes in the logical language expressions explicitly associating graded values to formulae. This machinery enables us to model agents expressing their confidence in the truth of a formula in a graded fashion. A further relevant aspect of the debates to which we refer is the exchange of information, which requires agents to continually update their opinions. This evolution in time is due to agents writing and reading information, assessing its trustworthiness and updating their opinions accordingly. The system features five different operators to model this aspect of debates as well.

 

Chris Fermüller (TU Wien)

Judgment Aggregation with Graded Deontic Logics. 

Abstract: Judgment aggregation poses the problem of finding a consistent collective judgment on a set of logically dependent propositions, called an agenda, based on a profile of individual judgments on the agenda items. We argue that two stages of generalization from the classical setup are useful for handling various application scenarios for assessing fragmented, vague and uncertain information in a systematic manner. Firstly, rather than considering only judgments about the truth or falsity of propositions, it is often useful to allow for degrees of assent or dissent, and hence to replace classical logic by many-valued logic for formalizing the collective as well as the individual judgments. In a second step, we focus on a special type of agenda items, where the individuals are asked to specify their degree of assent to deontic claims, asserting that something should or should not be the case. There are many results in the literature regarding scenario 1, i.e. many-valued judgment aggregation. Most of those results are negative, demonstrating that there is no general aggregation rule, satisfying various rationality constraints, that guarantees consistent collective judgments when applied to arbitrary profiles of consistent individual judgments. We will show that it is much easier to arrive at consistent collective judgments if the agenda items are graded deontic statements. This calls for considering a graded deontic logic as the underlying formalism for scenario 2. The literature on that type of logic is still very sparse. We will adapt a many-valued deontic logic introduced by Dellunde and Godo (2008) to our setting. Finally, we will provide an outlook on alternative graded deontic logics and their application to judgment aggregation.
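As a reminder of why the classical setup is problematic (the standard discursive-dilemma example, not one taken from the talk), consider the agenda {p, q, p∧q} and three consistent individual judgments:

\[
\begin{array}{l|ccc}
 & p & q & p\wedge q \\ \hline
\text{Individual 1} & \text{yes} & \text{yes} & \text{yes} \\
\text{Individual 2} & \text{yes} & \text{no} & \text{no} \\
\text{Individual 3} & \text{no} & \text{yes} & \text{no} \\ \hline
\text{Majority} & \text{yes} & \text{yes} & \text{no}
\end{array}
\]

Propositionwise majority voting accepts p and q but rejects p∧q: the collective judgment is inconsistent even though every individual judgment is consistent, which is the phenomenon that the impossibility results mentioned above generalize.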

 

Wojtek Jamroga (University of Luxembourg & Polish Academy of Sciences)

Playing to Learn, or to Keep Secret: Strategic Logic Meets Information Theory. 

Abstract: Many important properties of multi-agent systems refer to the participants’ ability to achieve a given goal, or to prevent an undesirable event. Among intelligent agents, the goals are often of an epistemic nature, i.e., they concern the ability to obtain knowledge about an important fact \phi. Such properties can be expressed, e.g., in ATLK, that is, the alternating-time temporal logic ATL extended with epistemic operators. In many realistic scenarios, however, players do not need to fully learn the truth value of \phi. They may be almost as well off by gaining _some_ knowledge; in other words, by reducing their uncertainty about \phi. Similarly, in order to keep \phi secret, it is often insufficient that the intruder never fully learns its truth value. Instead, one needs to require that his uncertainty about \phi never drops below a reasonable threshold. With this motivation in mind, we introduce the logic ATLH, extending ATL with quantitative modalities based on the Hartley measure of uncertainty. The new logic enables us to specify agents' abilities with respect to the uncertainty of a given player about a given set of statements. It turns out that ATLH has the same expressivity and model checking complexity as ATLK. However, the new logic is exponentially more succinct than ATLK, which is the main technical result of this paper.
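For reference (the standard definition, not a detail specific to ATLH), the Hartley measure of the uncertainty represented by a set S of alternatives that an agent cannot tell apart is

\[
H(S) \;=\; \log_2 |S| .
\]

So, for instance, if an intruder still considers 8 possible values of a secret, his uncertainty is 3 bits, and demanding that it never drop below 2 bits amounts to requiring that at least 4 alternatives always remain open; ATLH's quantitative modalities let one express such threshold requirements on agents' strategic abilities.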

 

Felix Kopecky (Karlsruhe Institute of Technology)

Inconsistent belief aggregation in diverse and polarised groups: A computational study. 

Abstract: The effects of opinion diversity and belief polarisation on epistemic group problem solving are increasingly well understood, but mainly in ideal scenarios with ample temporal and evidential resources. Here we investigate scenarios in which epistemic decisions, such as expert advice to policy makers or the public, must be made without delay and on the basis of permissive evidence. To measure how diverse and polarised groups would handle these scenarios, we track the consistency of group opinions aggregated through sentence-wise majority voting. Simulations on an agent-based model reveal that high opinion diversity, but not polarisation, incurs a significant inconsistency risk – an intricate risk that cannot automatically be avoided through information addition. These inconsistent aggregations indicate that epistemic problem solving is more difficult for diverse groups when the evidence is permissive. The results contribute to understanding the expectations that policy makers and the public can reasonably hold toward expert groups, and where their advice might have limits.

  

Matteo Michelini (Ruhr University Bochum & TU Eindhoven) & Javier Osorio Mancilla (Universidad Autónoma de Madrid)

More Than Social Imitation: Why Solving Disagreements May Improve The Performance Of Epistemic Communities.

Abstract: In philosophy of science and social epistemology, there is a rich variety of views concerning the nature of evidence. Despite the diversity of views on the notion, there seems to be a broadly shared intuition across different epistemological perspectives, which is that individuals may fail to respond correctly to their evidence and that such failure is problematic. This is often described as "failing to respect one's evidence" (Feldman 2005). Whether it is ignoring some piece of data, misinterpreting it, or jumping to unwarranted conclusions, failing to respond adequately to a relevant piece of evidence is seen as a deficiency in one's epistemic responsibilities. This intuition is also integral to scientific inquiry. In science, evidence is one of the cornerstones of certain scientific practices such as theory formation or hypothesis testing. Therefore, failure to respect one's evidence can lead to flawed conclusions and/or unfruitful research directions. While the intuition may be indisputable at an individual level, questions arise when we consider it at a collective level. In this talk, we present an agent-based model that introduces a critical perspective to this discussion by undermining this widely held intuition within scientific practice. In the model, a community of agents pursues two rival methods, one of which is in fact better than the other, and the agents have to discover which. They can disagree about the evidential relevance of a piece of data based on their background assumptions. Furthermore, some of these interpretations are mistaken, i.e., can be understood as failing to respect the evidence. Our arguments center on what we call the MBA effect: Misinterpretation Boosts Accuracy of inquiry in a scientific community. Through this effect, the model shows that communities in which some scientists misinterpret some of the evidence outperform communities in which no single scientist misinterprets the available evidence. This leads to a complex set of dynamics in which what could be considered a "failure" in the individual sense may not straightforwardly apply at the collective level.

 

Carlo Proietti (CNR Genova)

Informational influence as a cause of (bi-) polarization? A simulative approach. 

Abstract: Group polarization occurs when the opinion of a group becomes more radical after discussion. A group is instead said to bi-polarize when two subgroups polarize in opposite directions. It is natural to ask whether these opinion dynamics can occur among rational agents. Insights from cognitive psychology suggest a more descriptive approach to this question, since many purportedly irrational biases seem to work as evolutionarily functional ways of exchanging information and updating opinions. As a consequence, (bi-)polarization dynamics induced by such mechanisms may be regarded as natural phenomena. Exploring the details of informational influence explanations of (bi-)polarization, such as Persuasive Arguments Theory (PAT), is one way to check whether this idea holds up well. The multi-agent model by Mäs and Flache (2013) provides a first instantiation of PAT and shows that bi-polarization only needs a moderate degree of homophily, i.e. the tendency of individuals to communicate only with others having similar opinions. Recently, we expanded this model along two lines. First, we integrated the dimension of argument strength from abstract argumentation. This was to investigate whether stronger arguments on one side can induce more consensus in that direction. Further, we revised its informational influence process with different protocols of communication and update, to test whether bi-polarization can occur without homophily, as an effect of purely informational biases. With some provisos, our simulations provide positive answers to these questions. Here, we test the robustness of these results w.r.t. specific parameters. First, we vary the measure of argument strength. Furthermore, we test the effect of mixing different types of agents and the communication/update policies adopted by a single agent. Finally, we study whether a moderate degree of bi-polarization may have beneficial effects, in the sense of being truth-conducive for society, as happens with the division of labor in scientific communities.

 

Andrea Sabatini (Scuola Normale Superiore)

A Proof-theoretic Approach to Belief Revision.

Abstract: In this talk we will present a proof-theoretic approach to AGM belief revision. We first consider a variant of Kleene’s G4 sequent calculus for classical propositional logic, and extend it with rules for classically invalid sequents, thus obtaining a hybrid sequent calculus G4hyb for classical logic. We use G4hyb to decompose any (anti)sequent into a set of atomic (anti)sequents, and exploit G4hyb decomposition to present a refined characterization of maximal consistent subsets of inconsistent sets of formulae. Moreover, for any classically invalid formula A, we employ G4hyb decomposition to obtain the minimal sequent calculus G4s which is sound and complete w.r.t. the extension of classical logic with A while enjoying Cut elimination. If A stands for incoming information w.r.t. background knowledge, and the latter is represented via a set of proper axioms W, then we can use the G4s sequent calculus as the basis for representing AGM revision of W with A. Specifically, we can extend G4s with extra-logical rules for the proper axioms in W, and then turn the sequent calculus thus obtained into a family of HG4s hypersequent calculi, where hypersequents are non-standardly conceived as sets of conjunctively taken (anti)sequents. Under a suitable choice of the antisequents occurring in initial hypersequents, each HG4s hypersequent calculus will be a proof-theoretic device for calculating a refined remainder set of W with respect to the negation of A. We show that our HG4s hypersequent calculi have nice structural properties, and that they can be employed to give a uniform treatment of different notions of AGM contraction. Additionally, we argue that our HG4s hypersequent calculi pave the way for a uniform, purely syntactic treatment of nonmonotonic KLM logics.

 

Caterina Sisti (Scuola Normale Superiore) & Lorenzo Rossi (Università di Torino)

Variable-hypothetical conditionals. 

Abstract: This work develops an analysis and a probabilistic semantics for conditionals based on so-far underexplored aspects of F.P. Ramsey’s work – what we call the variable hypothetical view. According to this view, conditionals (indicatives and counterfactuals alike) should be understood as instances of suitable generalizations, which Ramsey calls ‘variable hypotheticals’ (VHs). The VH intuition aims to explain the process that leads to assigning a certain degree of belief to a conditional, thereby explaining why specific conditionals are or are not acceptable. In slightly more precise terms, we can use Ramsey’s VH approach to supplement probabilistic treatments of conditionals, such as Adams’s, with an explicit account of the non-quantitative factors that explain our acceptance or rejection of inferences involving conditionals. We also aim to employ Ramsey’s VH approach in order to develop different logics of conditionals. By including more and more rationality requirements on conditional reasoning, different logics of conditionals result, hence modelling different kinds of reasoners. Furthermore, the VH intuition offers a unified framework to model the assignment of degrees of belief to both factual and counterfactual conditionals. The plan of the paper is as follows. After a heuristic section and some historical coordinates, we formalize variable hypotheticals and their relations with conditionals. We subsequently develop a probabilistic semantics for conditionals based on variable hypotheticals, and we associate a conditional logic with it, showing that it invalidates the paradoxes of material implication and delivers intuitively correct principles of conditional reasoning. The fifth section outlines how the VH account extends to counterfactuals. The sixth section compares it with existing theories of conditionals. Finally, we evaluate the framework, sketch some prospects for future work, and conclude.

 

Jan Sprenger (Università di Torino), Lorenzo Rossi (Università di Torino) & Paul Égré (École normale supérieure Paris).

Probabilistic Reasoning with Non-Monotonic Conditionals.

Abstract: This paper develops a trivalent semantics for indicative conditionals and extends it to a probabilistic theory of valid inference and inductive learning with conditionals. On this account, (i) all complex conditionals can be rephrased as simple conditionals, connecting our account to Adams’s theory of p-valid inference; (ii) we obtain Stalnaker’s Thesis as a theorem while avoiding the well-known triviality results; (iii) we generalize Bayesian conditionalization to an updating principle for conditional sentences. The final result is a unified semantic and probabilistic theory of conditionals with attractive results and predictions.
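For readers who want the two benchmarks in (i) and (ii) spelled out (standard formulations, independent of the trivalent semantics developed in the paper): Stalnaker's Thesis identifies the probability of a simple conditional with the corresponding conditional probability,

\[
P(A \rightarrow C) \;=\; P(C \mid A) \quad\text{whenever } P(A) > 0,
\]

and an inference from premises \varphi_1,\dots,\varphi_n to conclusion \psi is p-valid in Adams's sense iff, for every probability function P, the uncertainty of the conclusion is bounded by the accumulated uncertainty of the premises,

\[
u(\psi) \;\le\; \sum_{i=1}^{n} u(\varphi_i), \qquad\text{where } u(\chi) = 1 - P(\chi).
\]

The "well-known triviality results" are Lewis-style arguments showing that the first identity cannot hold in full generality when the conditional expresses an ordinary two-valued proposition, which is part of what motivates the trivalent approach.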

 

Proceedings

A peer review process will select papers for publication in a volume of the Edizioni della Normale series.

 

Organisation

 

Program Committee

  • Mario Piazza (Scuola Normale Superiore)
  • Caterina Sisti (Scuola Normale Superiore)
  • Matteo Tesi (Scuola Normale Superiore)
  • Pietro Vigiani (Scuola Normale Superiore)
  • Gustavo Cevolani (Scuola IMT Alti Studi Lucca)
  • Andrea Sereni (Istituto Universitario di Studi Superiori Pavia)
  • Marcello D’Agostino (Università di Milano)
  • Gabriella Pigozzi (Université Paris-Dauphine)
  • Christian Straßer (Ruhr University Bochum)

 

 


 


This workshop is part of the project titled "Understanding public data: experts, decisions, epistemic values", which is being conducted collaboratively by the Scuola Normale Superiore, the IMT School for Advanced Studies in Lucca, and the Institute for Advanced Study of Pavia (IUSS).