FrameBase: Enabling Integration of Heterogeneous Knowledge

Tracking #: 1392-2604

Authors: 
Jacobo Rouces
Gerard de Melo
Katja Hose

Responsible editor: 
Guest Editors ESWC2015

Submission type: 
Full Paper
Abstract: 
Large-scale knowledge graphs such as those in the Linked Data cloud are typically represented as subject-predicate-object triples. However, many facts about the world involve more than two entities. While n-ary relations can be converted to triples in a number of ways, unfortunately, the structurally different choices made in different knowledge sources significantly impede our ability to connect them. They also increase semantic heterogeneity, making it impossible to query the data concisely and without prior knowledge of each individual source. This article presents FrameBase, a wide-coverage knowledge-base schema that uses linguistic frames to represent and query n-ary relations from other knowledge bases, providing multiple levels of granularity connected via logical entailment. Overall, this provides a means for semantic integration from heterogeneous sources under a single schema and opens up possibilities to draw on natural language processing techniques for querying and data mining.
Tags: 
Reviewed

Decision/Status: 
Minor Revision

Solicited Reviews:
Review #1
By Andrea Giovanni Nuzzolese submitted on 30/Jul/2016
Suggestion:
Accept
Review Comment:

The authors have significantly improved the paper and addressed all my comments to the previous version of the paper very well.
The paper in its current form is much clearer and of high quality in all its parts.
Nevertheless, a few minor comments need to be addressed. Namely:

* In Section 1 (page 2) add a pointer to Section 2.1 when introducing the complexity of the triple-reification patterns.
In fact Section 2.1 provides the reader with a detailed discussion about the different ways of representing n-ary relations and their associated complexities;
* In Section 1 at page 4 the authors introduce FrameBase by citing two previous works. This is completely fair. However, I suggest first introducing
FrameBase and only subsequently citing [47] and [49], stating that the current paper significantly extends those two previous works;
in this case the authors should provide a very high-level description of the nature of the extension;
* In Section 4.3.1 at page 16 the authors describe the storage mechanism of ReDer rules. It is a bit unclear what "virtualizing the dereified layer" means. This needs to be better clarified, also by providing examples (a sketch of one possible reading follows this list);
* In Section 4.3.1 at page 17 the authors refer to grammatical functions (GFs) and phrase types (PTs) without introducing their role in FrameNet. This needs to be clarified;
* In Section 4.3.1 at page 17 there is a missing reference to a Figure: "The constructors are shown in Figure ??."
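
Regarding the "virtualizing the dereified layer" point above: as a sketch of one possible reading, a ReDer rule could be stored as a rewriting that derives dereified triples from reified frame instances at query time, instead of materializing them. All IRIs below are illustrative, modeled on the paper's marriage example, not FrameBase's actual vocabulary.

```python
# Hypothetical sketch: a ReDer rule kept as a CONSTRUCT query, so the
# dereified layer exists only "virtually" -- derived on demand from the
# reified store rather than materialized. All names are illustrative.
from rdflib import Graph

reified_data = """
@prefix : <http://example.org/> .
:marriage1 a :frame-Personal_relationship ;
    :fe-Partner1 :John ;
    :fe-Partner2 :Mary ;
    :fe-Time "1964" .
"""

reder_rule = """
PREFIX : <http://example.org/>
CONSTRUCT { ?p1 :gotMarriedTo ?p2 }
WHERE {
    ?f a :frame-Personal_relationship ;
       :fe-Partner1 ?p1 ;
       :fe-Partner2 ?p2 .
}
"""

g = Graph().parse(data=reified_data, format="turtle")
for triple in g.query(reder_rule):
    # (:John, :gotMarriedTo, :Mary) is computed here, never stored
    print(triple)
```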

Review #2
By Bonaventura Coppola submitted on 21/Sep/2016
Suggestion:
Accept
Review Comment:

The paper motivates, introduces, and evaluates FrameBase as a knowledge base (KB) schema founded on a dual representation of n-ary relations: a neo-davidsonian reification allowing expressive and compact representation of complex events, and a bare direct binary predicate (DBP)-based representation intended to preserve both compatibility towards other/source KB schemas and simplified/legacy querying when the expressivity of n-ary relations is not required.
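
To make this dual representation concrete, here is a minimal sketch with illustrative names modeled on the paper's marriage example (not FrameBase's actual IRIs):

```python
# The same fact in both layers. The reified form captures the full
# n-ary event with one node per event and one triple per role; the
# dereified DBP form keeps compact legacy-style binary triples.
from rdflib import Graph

reified = """
@prefix : <http://example.org/> .
:marriage1 a :frame-Personal_relationship ;
    :fe-Partner1 :John ;
    :fe-Partner2 :Mary ;
    :fe-Time "1964" .
"""

dereified = """
@prefix : <http://example.org/> .
:John :gotMarriedTo :Mary .
:John :wasMarriedOnDate "1964" .
"""

print(len(Graph().parse(data=reified, format="turtle")))   # 4 triples
print(len(Graph().parse(data=dereified, format="turtle"))) # 2 triples
```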

After a first round of reviews, this second version of the paper has improved considerably in many critical respects. In its current shape, the paper shows a sound and clear organization which thoroughly accounts for the complexity of the FrameBase building process. Relevant changes include a better introduction and contextualization of the individual processing steps, relevant terminological clarifications (e.g. "rules" versus "rule constructors"), and the inclusion of new figures that clarify several points of possible misunderstanding. In particular, the authors' effort in clarifying a critical discussion point from the previous review round (referred to as "R29") was much appreciated.

In general, the paper has sufficient content and quality for publication in the Semantic Web Journal. Some points of inherent weakness from the first version remain, as they are rooted in the very choices of FrameBase's developers. On the other hand, the service potentially offered to the Semantic Web community is extremely relevant and deserves endorsement and encouragement. Ultimately, FrameBase's impact will be determined by the authors' ability to ensure sound development, wide integration, and stability of the T-Box over time and across different releases.

Most of the specific flaws still found in the paper only slightly impact its readability, and possibly the decision of interested readers about whether or not to test FrameBase. For this reason, applying the individual corrections can safely be left to the authors' own care and responsibility. Therefore, my final decision is to ACCEPT the paper for publication, with no need for a further review round on my part.

DETAILED PROBLEMS, ERRORS, CORRECTIONS.

Section 1

S1: Note 1 includes two mentions of an "other kind of reification" discussed elsewhere in the paper. Please add a Section reference.

S2: Figure 1 is a major improvement in the clarity of this revision of the paper. Unfortunately, it includes several flaws/inconsistencies which should be corrected in order to allow a full understanding of the long and detailed (de-)reification discussion in the paper:

S2a: The choices of relations to be displayed are not consistent across subfigures. For example, Fig. 1b shows the non-reified triples, while 1d and 1e don't. This specific point could also be addressed by an explicit mention in the respective sub-captions.

S2b: The reified nodes should have some label such as "reified event" or "reified triple". Again, text in the sub-captions may help.

S2c: For a deeper understanding, I was forced to refer very frequently to the OLD Table 1 that was present in the PREVIOUS version of the paper. It was not a good idea to remove it! I understand that keeping the alignment was difficult because the figure now includes far more triples, which is excellent and MUST be kept. However, please make an attempt to extend and re-introduce the old Table 1, because it is critical for understanding reification.

S2d: Figure 1f includes two mistakes. First, the two lower sub-blocks should represent "John-1964" and "Mary-1964" respectively; instead, they are now exactly identical. Second, the same two blocks both include "1964" as Partner property values. They should be "Mary" and "John" respectively.

S2e: The subcaptions could explicitly back-refer to respective explanation passages in Section 2.1 to make understanding easier.

S2f: MOST IMPORTANT! It is still very difficult to pair Figure 1 with the complexity analysis made in the current version's Table 1. With n = 3 and k = 3, the only subfigures whose triples match the counts in Table 1 are 1d and 1e. I did not manage in any way to match the counts in the first three rows of Table 1 (wrt columns "All triples" and "Core") to what I see in Figs. 1b, 1c, and 1f. With some effort, it is possible to speculate why the counts depend on both n and k, but there is no way to make the constants included in the counts (1, 2, ...) match. Please explain in detail how you count (n+4)k in 1b, (n+2)k in 1c, and (n+1)k in 1f. Also, please align the pattern names between Figure 1 and Table 1.
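
For reference, simply plugging Figure 1's n = 3, k = 3 into the quoted formulas gives the totals one would expect to find in the respective subfigures:

```python
# Evaluating the Table 1 formulas cited above for Figure 1's n = 3, k = 3.
n, k = 3, 3
print((n + 4) * k)  # 21 -- expected total for the pattern cited for Fig. 1b
print((n + 2) * k)  # 15 -- expected total for the pattern cited for Fig. 1c
print((n + 1) * k)  # 12 -- expected total for the pattern cited for Fig. 1f
```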

Section 2

S3: Figure 2: I advise labeling the unlabeled reification node.

S4: Figure 2: Please align naming of relations to Figure 1 (pick either "is" or "was" everywhere)

S5: Table 1: the previous item S2f applies. In addition, the first three counts for the "linking event" are not clear. First, make clear whether they are complexities (as stated in the caption) or exact counts (as stated in the paper). I would expect k(k-1)/2 in the latter case and a simple k^2 in the former. Why k(k-1)? Are you counting inverse equivalence relations?
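
For reference, the three counts at stake for k events; k(k-1) is exactly the number of ordered pairs without self-links, i.e. what one obtains if both directions of each equivalence link are materialized:

```latex
% Counts of linking triples among k reified events:
\binom{k}{2} = \frac{k(k-1)}{2}  % unordered pairs: one link per pair
k(k-1)                           % ordered pairs, no self-links: both directions
k^2                              % all ordered pairs, including self-links
```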

S6: in 2.1.1, please align naming of "wasMarriedOnDate" and "gotMarriedTo" with those in the Figures (which are different in turn, see S4).

S7: Also, same paragraph text in 2.1.1 should read "Given TWO triples with property ... and ONE triple with ... we cannot be" (now it's the opposite).

Section 3

S8: In the sentence "The less verbose but also...", it should be "DEREIFIED layer" instead of "reified".

Section 4

S9: First paragraph, typo: "tHe dereified layer"

S10: in 4.2, in item (2): "class frame-Personal_relationship HAS" instead of "have".

S11: in 4.2, item (3) "Intermediate Nodes" is a core algorithmic point of the paper and of FrameBase: I strongly advise providing explicit pseudocode for it. This would make its properties easier to understand, e.g. why the result is a hard clustering with no overlap, etc.
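
This is not the authors' algorithm, but as a generic illustration of why a merge-based clustering is necessarily hard (a partition with no overlap), consider a union-find sketch; the frame and synset names below are only placeholders:

```python
# Generic union-find: after any sequence of merges, every element has
# exactly one root, so the resulting clusters are disjoint by construction.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(x, y):
    parent[find(x)] = find(y)

# e.g. merging items connected through shared intermediate nodes:
union("frame-Giving", "frame-Transfer")
union("frame-Transfer", "give.v.01")

clusters = {}
for node in list(parent):
    clusters.setdefault(find(node), []).append(node)
print(clusters)  # a hard clustering: each node appears in exactly one cluster
```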

S12: 4.2 is very long. I would at least separate the last part on labeling/annotation and linking, starting from "Names, definitions, and glosses ..."

S13: in 4.3.2, first paragraph. Do you use ALL the English sentences or do you only pick some of them? In the latter case, which ones?

S14: in 4.3.2, in " -Dependent (Dep)" paragraph, the second "PP[to]" should be "PP[of]" (?)

S15: in 4.3.2, "The constructors are shown in Figures ??" with unresolved reference

S16: nine lines later, please double check repetition of "" wrt. Fig. 7

S17: second last line of page 17: "agree that they there is": delete "they".

S18: page 18, second column, first line, should be "prEposition".

S19: Algorithm 1, in the "Output" section, should be "prEposition".

S20: Algorithm 1, please define the P set

S21: page 21, first column, discussion about redundancy. To avoid it, a rather blind pruning is applied, which is expected to come at the cost of coverage. Is this truly necessary? Couldn't this be shifted to query time, when multiple constraints would naturally reduce or cancel the redundancy?

S22: page 21, second column, "The Kuhn-Munkres algorithm", delete repetition of the word "algorithm".
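
For context, the Kuhn-Munkres (Hungarian) algorithm referenced here solves optimal one-to-one assignment. A minimal illustration using SciPy's assignment solver (which solves the same problem); the cost matrix is invented:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j]: cost of aligning item i of one set with item j of the other
cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

# Solves the same assignment problem as Kuhn-Munkres.
rows, cols = linear_sum_assignment(cost)
print([(int(r), int(c)) for r, c in zip(rows, cols)])  # [(0, 1), (1, 0), (2, 2)]
print(int(cost[rows, cols].sum()))                     # minimal total cost: 5
```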

Section 5

S23: Coverage. A major point of weakness in the paper is the lack of a coverage evaluation for the ReDer constructors/rules, which are a core mechanism. In principle, I would like to see how many DBPs on average are inferred from the reified frames, for which you compute 9.45 frame element definitions each on average. Your arguments on the topic would probably include that 1) many non-meaningful DBPs are intentionally skipped (like LOCATION-TIME triples) and that 2) your explicit intention is to establish a highly precise conversion process, which you actually achieve, as shown in Table 2. Nonetheless, let me warn that coverage issues will eventually make or break FrameBase's success. My warm advice is to take coverage seriously into account, and perhaps to start including actual numbers already in this version of the paper.

S24: Section 5.1: Hanging "Section [?]" BibTeX reference.

S25: Section 5.2: Typo: "An resulting average"

Section 6

S26: Typo: "the same so instantiation"

S27: Section 6.2. It leaves the reader unsatisfied, since it mentions results from an external work without describing the method. I think you are allowed to include a few lines of summary from [48] and explain how these basic integration rules are obtained. This would improve understanding and the integration with Section 7.

Review #3
By Valentina Presutti submitted on 25/Sep/2016
Suggestion:
Minor Revision
Review Comment:

The paper has much improved compared to the first submitted version. I appreciate that the authors addressed many of my comments and included many more useful examples. I confirm my positive opinion about its publication and its value for the community. It is for this reason that I require an additional revision, with the aim of hopefully getting the paper to its best.

The main issues remain the same as before, although they have been mitigated in the new revision: evaluation and presentation. Some more in-depth explanation/discussion in certain parts is also required (see details below).

As for the evaluation, there is still a lack of important details on the adopted methodology and its execution.

Section 5.1:
In this section the evaluation is described as a comparison between the FrameBase mapping and other approaches, with reference to MapNet as the gold standard. I have some problems following this approach, hence additional details are needed.
If MapNet is considered a gold standard (is it?), why do you need a different mapping? Isn't it complete enough, maybe?
I noticed that in the result table you do not have all measures for all approaches. Why is that?
Did you run all the experiments yourself or rely on the results reported in the cited papers? If the latter is the case, were the experiments executed under exactly the same conditions? This is quite important in order to assess potential bias, as the differences in the figures are small, especially between the FrameBase approach and the SSI-Dijkstra+ one.
If instead you re-ran the experiments, please provide the complete figures for all of them or motivate why you cannot (e.g. precision, recall, and F1 for Neighbourhoods).

Section 5.2:
I have some problems interpreting the results of the hierarchy evaluation. Please provide the valid values for the Wilson confidence interval and explain what it measures and how to interpret the result.
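
For reference, the Wilson score interval is a confidence interval for a proportion (e.g. the fraction of correct items in the evaluated sample) that stays within [0, 1] and remains reasonable for small samples. A minimal sketch; the 90-out-of-100 figures are invented:

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):  # z = 1.96 for 95% confidence
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# e.g. 90 correct judgments out of 100 sampled items (invented numbers):
print(wilson_interval(90, 100))  # approximately (0.826, 0.945)
```
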
The evaluation of the nuance is explained by the authors as a "small change of nuance". Was the semantic distance between the frames quantified/evaluated by the raters? What does it mean that the nuance was "small"? As the average is around 30%, it is key to understand whether the distance between the frames in the same cluster is significant or small, as stated. I think this should be discussed and some examples reported. Please also report how to interpret the value of the weighted Cohen's kappa (why is the maximum 0.87?).
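
Similarly, for reference: weighted Cohen's kappa is chance-corrected agreement between two raters in which larger disagreements on an ordinal scale are penalized more heavily. A minimal sketch with invented ratings, using scikit-learn's implementation:

```python
from sklearn.metrics import cohen_kappa_score

# Two raters scoring the same items on an ordinal 0-2 scale (invented data).
rater1 = [2, 0, 2, 1, 0, 1, 2, 2]
rater2 = [2, 0, 1, 1, 0, 2, 2, 1]

# weights="linear" penalizes a 0-vs-2 disagreement twice as much as 0-vs-1.
print(cohen_kappa_score(rater1, rater2, weights="linear"))
```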

Section 5.3:
How is correctness defined? What is the task given to the raters for evaluating this dimension? What is the background of the raters that evaluated the readability of rules? The definition of this task includes evaluating whether the meaning of a frame element is obvious to a layman reader, but it does not emerge that the raters were laymen.

As a general comment, since the evaluation was conducted on a small portion of the data, the authors may want to report an analysis of the errors, based e.g. on an investigation of the disagreement cases.

As for the presentation, the revised structure significantly improved readability, but the English form is in many parts very hard to follow and could be made simpler and more effective.

I appreciate using synonyms to reduce repetition; however, in scientific papers this may lead to confusion.
Example: use "inverse" instead of "reciprocal" (Section 2.1.5). I recommend reviewing the text in order to remove the use of synonyms, where possible.

There are many complex and long sentences that make the paper hard to follow. I had to re-read many passages more than once to understand them, and sometimes my doubts remained. I recommend trying to keep sentences simple and short.
Some examples:
- try to simplify the example of section 4.2 with the frames Transfer, Giving, Intentionally-Act. It is very hard to follow and the anaphoras are ambiguous.
- A similar consideration applies to the example (section 4.3) of the transitive and intransitive frames for smuggle (I did not get it).
- Section 4.3.2: What do Type I and II mistakes refer to?
- Section 6.1: the last paragraphs before section 6.2 are very hard to follow.
- Section 7.1 Second bullet of "FrameBase driven" and the whole paragraph before "Source-data driven".

Some phrasings sound naive or redundant, or are repeated, such as "rich annotations", "rich hierarchy", "broad general", "global LOD cloud"; the adjectives do not add any useful information in these cases. The sentence (Section 3.1) "This data led to the task of automatic semantic role labelling..." is a repetition.

Some figure references are broken.

Other comments

Figure 1b: what does "equivalent" mean in this example? The statements are not the same, and two instances cannot be equivalent in the OWL sense. I am not sure what the authors mean to represent, but in the example this is unclear.

Section 2.1.2 (fifth bullet): I am not sure this is a real issue. One reifies a triple with the goal of adding information about it. Hence the two reified triples about e.g. London connected to Paris (or about the double marriage) would refer to different underlying contexts, so I am not sure that having only one of them for both cases would make for a better conceptual design.

Section 2.3: FRED actually does integrate its produced graphs to many external knowledge bases such as DBpedia, WordNet, VerbNet, Schema.org and DOLCE Ultra-Lite (see also http://www.semantic-web-journal.net/content/semantic-web-machine-reading... which is going to appear on SWJ).

Section 4.1:
The authors choose to align each LU to only one synset in order to guarantee more precision. I may have missed something, but I am still not convinced by this statement. For example, when a LU can be mapped to more than one synset, wouldn't a further disambiguation step help in choosing the best one? Are the authors sure that precision would go down?

Have you tried without the semantic pointers? I think that comparing the results would better support your final choice.

Section 6.1:
The examples given of the manually built class-frame integration rules refer to classes that were designed with an n-ary shape in mind. The authors are invited to comment on the issue of aligning frames to classes that were not designed to represent n-ary relations, and on how this issue can be (or has been?) overcome.

Section 6.2: Please add a summarised description of the cited methods for automatic creation of integration rules. Considering that the paper's title points at the integration task, this aspect has key importance.

Minor:
Section 3:
The less verbose ... is the reified layer -> dereified?