Review Comment:
The following is a list of typos, grammatical mistakes, and suggestions regarding formulation and formatting.
Page 2
* instances about a single object --> instances of a single object
* such a kind of issues --> such kinds of issues
* a unique (possibly correct) answer --> a unique (ideally correct) answer
* to evaluate accordingly the information items they provide --> to evaluate the information items they provide accordingly
* transparent, and as a consequence, --> transparent and, as a consequence,
* linguistic-based fine-grained relations --> linguistic, fine-grained relations, or: linguistically fine-grained relations
Page 3
* applied over 300 --> applied to over 300
* chapters values is released --> chapters is released
* We do not think that such a kind of explanations would be possible with alignment, but we do not claim that our solution is better.
--> Should be "such kinds of explanation", but actually it's absolutely not clear what this sentence and the sentences following it want to say.
* from the same SPARQL query --> from one SPARQL query
* under the form of a graph --> in the form of a graph
* Language-specific DBpedia chapters can contain different information from one language to another, providing more specificity on certain topics, or filling information gaps.
--> Language-specific DBpedia chapters can contain different information on particular topics, e.g. providing more or more specific information.
Page 4
* actor A. Albanese --> actor Antonio Albanese
* The chapter of the longest language-specific Wikipedia page describing the queried entity is rewarded with respect to the others
--> Actually, this formulation is not accurate, as the score of this page is 1 whereas all other pages get a score < 1. This means that the page is not rewarded but the others are penalized. (The same holds for geo-localization.)
* and to the corresponding chapter is assigned a score equal to 1 --> and to the corresponding chapter a score equal to 1 is assigned
* whose appropriateness --> the appropriateness of which
* summed, and normalized --> summed and normalized
* less reliable chapter --> least reliable chapter
* Relations classification --> Relation classification
(This occurs throughout the whole paper. Also, you always say "results set", where I would rather say "result set".)
* such categories correspond to the linguistic phenomena (mainly discourse and lexical semantics) holding among heterogeneous values
--> such categories correspond to the lexical and discourse relations holding among heterogeneous values
(Also, it's not clear what "values" means. The labels of object resources and literals?)
* Footnote 2:
can be found here http://download.geonames.org/export/ dump/countryInfo.txt. --> can be found here: http://download.geonames.org/export/dump/countryInfo.txt
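To make the penalization point above concrete, here is a minimal sketch of the page-length criterion as I read it: dividing each chapter's Wikipedia page length by the maximum length gives the longest page a score of exactly 1 and scales every other page down. The function name, the language codes, and the length-ratio normalization are my assumptions, not the paper's actual procedure:

```python
# Hypothetical sketch of the page-length criterion: each chapter's score is
# its Wikipedia page length divided by the maximum length, so the longest
# page scores 1.0 and all others score < 1 (i.e., they are penalized,
# rather than the longest being rewarded).

def length_scores(page_lengths):
    longest = max(page_lengths.values())
    return {chapter: length / longest for chapter, length in page_lengths.items()}

# Example with assumed page lengths (in characters):
scores = length_scores({"it": 12000, "en": 9000, "fr": 3000})
# the "it" chapter scores 1.0; "en" and "fr" are scaled down to 0.75 and 0.25
```

Phrasing the criterion this way in the paper ("all other chapters are scaled down relative to the longest page") would be more accurate than "rewarded".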
Page 5
* SameAs --> owl:sameAs
(this occurs a lot, and I would always write "owl:sameAs")
* I would also capitalize all bold relation names.
* Footnote 3:
humans and machines alike http://www.wikidata.org/, --> humans and machines alike, http://www.wikidata.org,
* hyponymy: when the former is included within the latter --> maybe rather "when the latter is implied by the former"?
* the description of metonymy is also not optimal
Page 6
* relation between entities/objects --> What exactly are "entities/objects"? URIs, or labels, values? What exactly do you mean by "values"? In this whole section, please be more precise with these terms.
* data set --> dataset (This occurs throughout the whole paper.)
* no such a training set --> no such training set
* so that to accomplish our purpose --> in order to accomplish our purpose
* we detail RADAR 2.0 argumentation module --> we detail the RADAR 2.0 argumentation module
* with the other arguments --> with other arguments
* an example of AF --> an example of an AF
* Dung’s acceptability admissibility-based semantics --> this doesn't sound grammatical
* confidence associated to --> confidence associated with
Page 7
* Figure 2:
- Example of (a) AF --> Example of (a) an AF
- Please mention in the caption that single lines represent attacks and double lines represent support.
* associated to the sources --> associated with the sources
* accepted at the end --> accepted in the end
* if they overcome a certain threshold --> if they exceed a certain threshold
* Let α be a bipolar fuzzy labeling. We say that α is a bipolar fuzzy labeling if and only if ...
--> This doesn't make any sense. I would suggest moving the last sentence of Definition 1 to Definition 2, saying something like: "A total function α : A -> [0,1] is a bipolar fuzzy labeling if and only if ..."
* Table 1: Instead of A,B,C I would use small letters a,b,c, as they refer to the nodes in Figure 2.
* Also, you actually never mention how alpha is computed step-wise in cyclic graphs, converging to one value. It would be helpful to add a sentence about this.
* the bipolar fuzzy labeling algorithm is raised on the argumentation framework --> the bipolar fuzzy labeling algorithm is applied to the argumentation framework, or: the bipolar fuzzy labeling algorithm is executed on the argumentation framework
* we expect to have the Italian DBpedia chapter as the most reliable one being Stefano Tacconi an Italian soccer player
--> This is not grammatical... Probably you want to say the following:
we expect the Italian DBpedia chapter to be the most reliable one, given that Stefano Tacconi is an Italian soccer player
* the "correct" answer is 1.88 --> either remove the quotation marks around "correct" or say "the trusted answer is 1.88"
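On the missing explanation of how α is computed step-wise in cyclic graphs: one added sentence plus a small sketch would suffice. The following is only my reading, assuming a standard non-bipolar fixed-point fuzzy-labeling update (averaging the previous label with the source trust capped by the strongest attack); the paper's bipolar rule, which also handles support, will differ:

```python
# Hypothetical sketch: iterative fuzzy labeling on a (possibly cyclic)
# attack graph. trust[a] is the confidence of a's source; attackers[a]
# lists the arguments attacking a. The update rule below is an assumed
# standard choice, not necessarily the paper's exact bipolar rule.

def fuzzy_labeling(trust, attackers, eps=1e-6, max_iter=1000):
    alpha = dict(trust)  # initialize labels with the sources' trust values
    for _ in range(max_iter):
        new = {}
        for a in alpha:
            strongest_attack = max((alpha[b] for b in attackers.get(a, [])), default=0.0)
            # average the old label with the trust capped by the strongest attack
            new[a] = 0.5 * alpha[a] + 0.5 * min(trust[a], 1.0 - strongest_attack)
        if max(abs(new[a] - alpha[a]) for a in alpha) < eps:  # fixed point reached
            return new
        alpha = new
    return alpha

# Even on a cyclic graph (a attacks b, b attacks a) the iteration converges:
labels = fuzzy_labeling({"a": 1.0, "b": 0.8}, {"a": ["b"], "b": ["a"]})
# converges to a = 0.6, b = 0.4 after a few iterations
```

A sentence stating that α is computed by iterating such an update until the labels stabilize (and that convergence is guaranteed, if the paper's rule has that property) would resolve the gap.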
Page 8
* as well when --> either "as well as" or "when"
* non bipolar argumentation --> non-bipolar argumentation
* up to our knowledge --> to our knowledge
* linguistic phenomena holding among values --> linguistic relations holding between values
* the types of relation --> the type of relations
* specific relation (property) --> specific property
* among the categories distribution --> among the distribution of categories
* with this respect --> in this respect
Page 9
* In Tables 2 and 3 (and also 4 on page 13), I would capitalize all column headers.
* corresponding to the 47.8% of DBpedia instantiated properties --> corresponding to 47.8% of all properties in DBpedia
* triples, from --> triples from
* On the contrary --> In contrast
* non functional --> non-functional (occurs often)
* we reconciled 3.2 million functional properties --> I guess you mean 3.2 million triples?
* with an average accuracy comparable to the one described in Table 3 --> What do you mean by accuracy? Precision?
* the strategy "DBpedia CL" --> Please briefly mention what CL stands for.
--> Also, why choose the most specific class and not simply all classes?
* Footnote 12: This link should be provided in the main text, not in a footnote, I think.
Page 10
* QAKiS addresses the task of QA over structured knowledge-bases (e.g. DBpedia) [10], where the relevant information is expressed also in unstructured forms (e.g. Wikipedia pages). It implements a relation-based match for question interpretation
--> This is a bit confusing and misleading. Please reformulate.
* sent to a set of language-specific DBpedia chapters SPARQL endpoints --> sent to the SPARQL endpoints of the language-specific DBpedia chapters
* require either some forms of reasoning (e.g., counting or ordering) on data, aggregation (from data sets different from DBpedia), involve n-relations
--> require some form of aggregation (e.g., counting or ordering), information from datasets other than DBpedia, or n-ary relations
* Footnote 14: Please use http://www.sc.cit-ec.uni-bielefeld.de/qald/ as URL. The http://greententacle... URL is not persistent.
Page 12
* QALD data set was created --> the QALD dataset was created
* are present in this data, i.e., surface forms, geo-specification, and inclusion, and --> are present in this data - surface forms, geo-specification, and inclusion - and
* on the top of a QA system existing architecture --> on top of an existing QA system architecture
* previous work [12,11,8], introducing RADAR 1.0 --> previous work [12,11,8] introducing RADAR 1.0
* judge arguments' acceptability --> to judge an argument's acceptability
Page 13
* Relations categorization --> Relation categorization
* Relations extraction --> Relation extraction
* You have an extra space between every bold term and the following colon, which I would remove. E.g. "Evaluation : " --> "Evaluation: "
* Also, I would begin with a capital letter after the colon, and end each paragraph with a dot instead of a semicolon.
* the contribution on this side --> the contribution here
* linguistic-based relations --> linguistic relations
* data from QALD-2 have been used --> data from QALD-2 has been used
* f1 --> F1
* State of the art QA systems --> State-of-the-art QA systems
* SW --> either write "Semantic Web" or introduce the abbreviation somewhere.
Page 14
* Sometimes you write "Linked Data" and sometimes "linked data". Please stick to one.
* another possibility is to leave the data consumer itself to assign --> another possibility is to let the data consumer itself assign