Beyond Facts - a Survey and Conceptualisation of Claims in Online Discourse Analysis

Tracking #: 2838-4052

Authors: 
Katarina Boland
Pavlos Fafalios
Andon Tchechmedjiev
Stefan Dietze
Konstantin Todorov

Responsible editor: 
Philipp Cimiano

Submission type: 
Survey Article
Abstract: 
Analyzing statements of facts and claims in online discourse is the subject of a multitude of research areas. Methods from natural language processing and computational linguistics help investigate issues such as the spread of biased narratives and falsehoods on the Web. Related tasks include fact-checking, stance detection and argumentation mining. Knowledge-based approaches, in particular works in knowledge base construction and augmentation, are concerned with mining, verifying and representing factual knowledge. While all these fields deal with strongly related notions, such as claims, facts and evidence, the terminology and conceptualisations used across and within communities vary heavily, making it hard to assess the commonalities and relations of related works and how research in one field may contribute to addressing problems in another. We survey the state-of-the-art from a range of fields in this interdisciplinary area across a range of research tasks. We assess varying definitions and propose a conceptual model - Open Claims - for claims and related notions that takes into consideration their inherent complexity, distinguishing between their meaning, linguistic representation and context. We also introduce an implementation of this model using established vocabularies and discuss applications across various tasks related to online discourse analysis.
Tags: 
Reviewed

Decision/Status: 
Accept

Solicited Reviews:
Review #1
By Johannes Daxenberger submitted on 14/Aug/2021
Suggestion:
Accept
Review Comment:

I’ve reviewed an earlier draft of the survey and recommended (minor) revisions. My concerns have largely been addressed, such that I can now recommend acceptance.
With regard to the dimensions that should be evaluated: (1) the text is suited as an introductory text, (2) the presentation is comprehensive and well-balanced, (3) clearly presented (with exceptions, see below), and (4) IMHO relevant to the Semantic Web community.

The information-seeking perspective you mention on page 10 and explicitly reference at least in Reimers et al. [211] (but also Shnarch et al. 2018 [66]) has a different notion of "argument" - usually along the lines of "a span of text expressing evidence or reasoning that can be used to either support or oppose a given topic" (Stab et al. 2018, https://aclanthology.org/D18-1402.pdf). That is usually a combination of claim and premise where both parts can be, and often are, implicit. But it is not the same as a claim, so please either address this early in the text or state it when you explicitly refer to works using this notion of "argument".

Must-Edits:
- Please redraw Fig. 8. Many labels are not readable (covered by text boxes)

Suggested Edits:
- Overlapping labels in Figs. 2 and 3
- Page 8, right col., lines 13ff: maybe highlight the definitions you use and move Table 2, where they are summarized, earlier in the paper
- More visual highlighting or structure/shorter paragraphs: e.g. on Pages 10 and 17 you have paragraphs that span almost a whole column
- Include the concepts of "domain" and "argument" (see my comment above) in Table 2 (used several times in related work and the paper itself)

Minor:
- Page 11: headline 3.4 newline missing
- Page 11: right col. l.20 whitespace missing
- Table 2 l. 11 empty new line
- Generally: Pang et Lee (Page 17) => Pang and Lee; please also fix page 18 left col. l. 11
- Reference 129 in the bibliography seems broken
- Please check consistency of quotation marks, e.g. on page 20 l. 15 left col.
- Page 20 left col. l.23 and right col. L.49: whitespace missing (and further on page 21 l. 26 right - please check throughout the paper)

References maybe worth adding:
- https://aclanthology.org/2021.acl-long.366.pdf

Review #2
Anonymous submitted on 24/Oct/2021
Suggestion:
Minor Revision
Review Comment:

First of all, I do want to commend the authors for their thorough response. Well done! I appreciate that writing a survey of this scope is a demanding endeavor and that the results will never be satisfactory to all. The concerns mentioned below are hence to be interpreted with these premises in mind.

Claim: We do not assume the definitions at the beginning
See Section 3
3.1: The authors name definitions pertaining to facts. None of the definitions pertain to *how* the truth value of facts is assessed. Then comes the statement "Note that facts themselves cannot be observed directly" (pertains to the assessment of truth values, statement without references). Is this is claim by the authors? If yes, it violates the author's claim that they do not assume definitions at the begining. If no, then it is really surprising and unrelated to the previous definitions. Clearly, none of the definitions provided forbids the direct observation of facts (some actually seem to assume it but it might just be my bias in interpretation). Consider the fact "This is a review". It fits all six definitions but can also be observed directly (at least in some sense of observing). Mixes of definition and opinions (which I take the statement pertaining to observing facts to be) are still to be found in several other fragments of this section. The authors claim they merely expatiate upon relevant concepts but I would still suggest they do more and intertwine a predefined model, probably unbeknownst of themselves. It'd be great if the authors would check especially Section 3 sentence by sentence to ensure that there are no biases.

Q1: "“facts are what is represented in KGs or KBs”." I am afraid this is still not true. For example, a property graph can consist of exactly one node. A node is not a fact. Still, said KG would represent the node. It follows that your statement cannot be correct.

Q2: Thanks for the improved text. Sounds good!

Q3: See answer to Q2.

Q4: OK.

Q5: Fair.

Q1: Cheers.

Q2: That does make sense. Great.

Q3: Thanks.

Q4: Does read better.

References:
Please do check your references. A cursory reading suggests the following missing/incorrect data.
[7] I do wonder why there are no links added for this paper. It is available at http://cj2015.brown.columbia.edu/papers/automate-fact-checking.pdf
[32] contains "in In". Please check.
[48] ISWC papers usually have DOIs. Please check.
[55] to [59]: The same author seems to have different names. Error in the bib file? I do wonder why you cite a demo paper instead of the main paper for [59], which is
Syed, Zafar Habeeb, Michael Röder, and Axel-Cyrille Ngonga Ngomo. "Unsupervised discovery of corroborative paths for fact validation." In International Semantic Web Conference, pp. 630-646. Springer, Cham, 2019.

Overall, the document is well suited for non-domain experts and beginners. The presentation is comprehensive, and the addition of an ontology is definitely appreciated.