Generating Natural Language Descriptions from OWL Ontologies: the NaturalOWL System

We present NaturalOWL, a natural language generation system that produces texts describing individuals or classes of OWL ontologies. Unlike simpler OWL verbalizers, which typically express a single axiom at a time in controlled, often not entirely fluent natural language primarily for the benefit of domain experts, we aim to generate fluent and coherent multi-sentence texts for end-users. With a system like NaturalOWL, one can publish information in OWL on the Web, along with automatically produced corresponding texts in multiple languages, making the information accessible not only to computer programs and domain experts, but also to end-users. We discuss the processing stages of NaturalOWL, the optional domain-dependent linguistic resources that the system can use at each stage, and why they are useful. We also present trials showing that when the domain-dependent linguistic resources are available, NaturalOWL produces significantly better texts compared to a simpler verbalizer, and that the resources can be created with relatively light effort.


Introduction
Ontologies play a central role in the Semantic Web (Berners-Lee, Hendler, & Lassila, 2001; Shadbolt, Berners-Lee, & Hall, 2006). Each ontology provides a conceptualization of a knowledge domain (e.g., consumer electronics) by defining the classes and subclasses of the individuals (entities) in the domain, the types of possible relations between them, etc. The current standard to specify Semantic Web ontologies is owl (Horrocks, Patel-Schneider, & van Harmelen, 2003), a formal language based on description logics (Baader, Calvanese, McGuinness, Nardi, & Patel-Schneider, 2002), rdf, and rdf schema (Antoniou & van Harmelen, 2008), with owl 2 being the latest version of owl (Grau, Horrocks, Motik, Parsia, Patel-Schneider, & Sattler, 2008). Given an owl ontology for a knowledge domain, one can publish on the Web machine-readable data pertaining to that domain (e.g., catalogues of products, their features, etc.), with the data having formally defined semantics based on the conceptualization of the ontology.¹

1. Following common practice in Semantic Web research, we actually use the term ontology to refer jointly to terminological knowledge (TBox) that establishes a conceptualization of a knowledge domain, and assertional knowledge (ABox) that describes particular individuals.

© 2013 AI Access Foundation. All rights reserved.
Several equivalent owl syntaxes have been developed, but people unfamiliar with formal knowledge representation often have difficulties understanding them (Rector, Drummond, Horridge, Rogers, Knublauch, Stevens, Wang, & Wroe, 2004). For example, the following statement defines the class of St. Emilion wines, using the functional-style syntax of owl, one of the easiest to understand, which we also adopt throughout this article.²

[Greek description:] Ο Tecra A8 είναι ένας φορητός υπολογιστής, κατασκευασμένος από την Toshiba. ("The Tecra A8 is a laptop computer, manufactured by Toshiba.")
The examples above illustrate how a system like Naturalowl can help publish information on the Web both as owl statements and as texts generated from the owl statements. This way, information becomes easily accessible both to computers, which can process the owl statements, and to end-users speaking different languages; and changes in the owl statements can be automatically reflected in the texts by regenerating them. To produce fluent, coherent multi-sentence texts, Naturalowl relies on natural language generation (nlg) methods (McKeown, 1985; Reiter & Dale, 2000) to a larger extent than existing owl verbalizers; for example, it includes mechanisms to avoid repeating information, to order the facts to be expressed, to aggregate smaller sentences into longer ones, to generate referring expressions, etc. Although nlg is an established area, this is the first article to discuss in detail an nlg system for owl ontologies, excluding simpler verbalizers. We do not propose novel algorithms from a theoretical nlg perspective, but we show that there are several particular issues that need to be considered when generating from owl ontologies. For example, some owl statements lead to overly complicated sentences, unless they are converted to simpler intermediate representations first; there are also several owl-specific opportunities to aggregate sentences (e.g., when expressing axioms about the cardinalities of properties); and referring expression generation can exploit the class hierarchy.
Naturalowl can be used with any owl ontology, but domain-dependent generation resources are required to obtain texts of high quality; for example, the classes of the ontology can be mapped to natural language names, the properties to sentence plans, etc. Similar linguistic resources are used in most nlg systems, though different systems adopt different linguistic theories and algorithms, requiring different resources. There is little consensus on exactly what information nlg resources should capture, apart from abstract specifications (Mellish, 2010). The domain-dependent generation resources of Naturalowl are created by a domain author, a person familiar with owl, when the system is configured for a new ontology. The domain author uses the Protégé ontology editor and a Protégé plug-in that allows editing the domain-dependent generation resources and invoking Naturalowl to view the resulting texts.³ We do not discuss the plug-in in this article, since it is very similar to the authoring tool of m-piro (Androutsopoulos, Oberlander, & Karkaletsis, 2007).
owl ontologies often use English words or concatenations of words (e.g., manufacturedBy) as identifiers of classes, properties, and individuals. Hence, some of the domain-dependent generation resources can often be extracted from the ontology by guessing, for example, that a class identifier like Laptop in our earlier example is a noun that can be used to refer to that class, or that a statement of the form ObjectPropertyAssertion(:manufacturedBy X Y) should be expressed in English as a sentence of the form "X was manufactured by Y". Most owl verbalizers follow this strategy. Similarly, if domain-dependent generation resources are not provided, Naturalowl attempts to extract them from the ontology, or it uses generic resources. The resulting texts, however, are of lower quality; also, non-English texts cannot be generated if the identifiers of the ontology are English-like. There is a tradeoff between reducing the effort to construct domain-dependent generation resources for owl ontologies and obtaining higher-quality texts in multiple languages, but this tradeoff has not been investigated in previous work. We present trials we performed to measure the effort required to construct the domain-dependent generation resources of Naturalowl and the extent to which they improve the resulting texts, also comparing against a simpler verbalizer that requires no domain-dependent generation resources. The trials show that the domain-dependent generation resources help Naturalowl produce significantly better texts, and that the resources can be constructed with relatively light effort, compared to the effort typically needed to construct an ontology.
Overall, the main contributions of this article are: (i) it is the first detailed discussion of a complete, general-purpose nlg system for owl ontologies and the particular issues that arise when generating from owl ontologies; (ii) it shows that a system that relies on nlg methods to a larger extent, compared to simpler owl verbalizers, can produce significantly better natural language descriptions of classes and individuals, provided that appropriate domain-dependent generation resources are available; (iii) it shows how the descriptions can be generated in more than one language, again provided that appropriate resources are available; (iv) it shows that the domain-dependent generation resources can be constructed with relatively light effort. As already noted, this article does not present novel algorithms from a theoretical nlg perspective. In fact, some of the algorithms that Naturalowl uses are of a narrower scope, compared to more fully-fledged nlg algorithms. Nevertheless, the trials show that the system produces texts of reasonable quality, especially when domain-dependent generation resources are provided. We hope that if Naturalowl contributes towards a wider adoption of nlg methods on the Semantic Web, other researchers may wish to contribute improved components, given that Naturalowl is open-source.
Naturalowl is based on ideas from ilex (O'Donnell, Mellish, Oberlander, & Knott, 2001) and m-piro (Isard, Oberlander, Androutsopoulos, & Matheson, 2003). The ilex project developed an nlg system that was demonstrated mostly with museum exhibits, but did not support owl.⁴ The m-piro project produced a multilingual extension of the system of ilex, which was tested in several domains (Androutsopoulos et al., 2007). Attempts to use the generator of m-piro with owl, however, ran into problems (Androutsopoulos, Kallonis, & Karkaletsis, 2005). By contrast, Naturalowl was especially developed for owl.
In the remainder of this article, we assume that the reader is familiar with rdf, rdf schema, and owl. Readers unfamiliar with the Semantic Web may wish to consult an introductory text first (Antoniou & van Harmelen, 2008).⁵ We also note that Linked Data, which have recently become very popular, are published and interconnected using Semantic Web technologies.⁶ Most Linked Data currently use only rdf and rdf schema, but owl is in effect a superset of rdf schema and, hence, the work of this paper also applies to Linked Data.
Section 2 below briefly discusses some related work; we provide further pointers to related work in the subsequent sections. Section 3 then explains how Naturalowl generates texts, also discussing the domain-dependent generation resources of each processing stage. Section 4 describes the trials we performed to measure the effort required to construct the domain-dependent generation resources and their impact on the quality of the generated texts. Section 5 concludes and proposes future work.

Related Work
We use the functional-style syntax of owl in this article, but several equivalent owl syntaxes exist. There has also been work to develop controlled natural languages (cnls), mostly English-like, to be used as alternative owl syntaxes. Sydney owl Syntax (sos) (Cregan, Schwitter, & Meyer, 2007) is an English-like cnl with a bidirectional mapping to and from the functional-style syntax of owl; sos is based on peng (Schwitter & Tilbrook, 2004). A similar bidirectional mapping has been defined for Attempto Controlled English (ace) (Kaljurand, 2007). Rabbit (Denaux, Dimitrova, Cohn, Dolbear, & Hart, 2010) and clone (Funk, Tablan, Bontcheva, Cunningham, Davis, & Handschuh, 2007) are other owl cnls, mostly intended to be used by domain experts when authoring ontologies (Denaux, Dolbear, Hart, Dimitrova, & Cohn, 2011). We also note that some owl cnls cannot express all the kinds of owl statements (Schwitter, Kaljurand, Cregan, Dolbear, & Hart, 2008).
Much work on owl cnls focuses on ontology authoring and querying (Bernardi, Calvanese, & Thorne, 2007; Kaufmann & Bernstein, 2010; Schwitter, 2010b); the emphasis is mostly on the direction from cnl to owl or query languages.⁷ More relevant to our work are cnls like sos and ace, to which automatic mappings from normative owl syntaxes are available. By feeding an owl ontology expressed, for example, in functional-style syntax to a mapping that translates to an English-like cnl, all the axioms of the ontology can be turned into English-like sentences. Systems of this kind are often called ontology verbalizers. This term, however, also includes systems that translate from owl to English-like statements that do not belong in an explicitly defined cnl (Halaschek-Wiener, Golbeck, Parsia, Kolovski, & Hendler, 2008; Schutte, 2009; Power & Third, 2010; Power, 2010; Stevens, Malone, Williams, Power, & Third, 2011; Liang, Stevens, Scott, & Rector, 2011b).
Although verbalizers can be viewed as performing a kind of light nlg, they typically translate axioms one by one, as already noted, without considering the coherence (or topical cohesion) of the resulting texts, usually without aggregating sentences or generating referring expressions, and often by producing sentences that are not entirely fluent or natural. For example, ace and sos occasionally use variables instead of referring expressions (Schwitter et al., 2008). Also, verbalizers typically do not employ domain-dependent generation resources and typically do not support multiple languages. Expressing the exact meaning of the axioms of the ontology in an unambiguous manner is considered more important in verbalizers than composing a fluent and coherent text in multiple languages, partly because verbalizers are typically intended to be used by domain experts.
7. Conceptual authoring or wysiwym (Power & Scott, 1998; Hallett, Scott, & Power, 2007), which has been applied to owl (Power, 2009), and round-trip authoring (Davis, Iqbal, Funk, Tablan, Bontcheva, Cunningham, & Handschuh, 2008) are bidirectional, but focus mostly on ontology authoring and querying.

Some verbalizers use ideas and methods from nlg. For example, some verbalizers include sentence aggregation (Williams & Power, 2010) and text planning (Liang, Scott, Stevens, & Rector, 2011a). Overall, however, nlg methods have been used only to a very limited extent with owl ontologies. A notable exception is ontosum (Bontcheva, 2005), which generates natural language descriptions of individuals, but apparently not classes, from rdf schema and owl ontologies. It is an extension of miakt (Bontcheva & Wilks, 2004), which was used to generate medical reports. Both were implemented in gate (Bontcheva, Tablan, Maynard, & Cunningham, 2004) and they provide graphical user interfaces to manipulate domain-dependent generation resources (Bontcheva & Cunningham, 2003). No detailed description of ontosum appears to have been published, however, and the system does not seem to be publicly available, unlike Naturalowl. Also, no trials of ontosum with independently created ontologies seem to have been published. More information on how ontosum compares to Naturalowl can be found elsewhere (Androutsopoulos et al., 2012). Mellish and Sun (2006) focus on lexicalization and sentence aggregation, aiming to produce a single aggregated sentence from an input collection of rdf triples; by contrast, Naturalowl produces multi-sentence texts. In complementary work, Mellish et al. (2008) consider content selection for texts describing owl classes. Unlike Naturalowl, their system does not express only facts that are explicit in the ontology, but also facts deduced from the ontology. Nguyen et al. (2012) discuss how the proof trees of facts deduced from owl ontologies can be explained in natural language. It would be particularly interesting to examine how deduction and explanation mechanisms could be added to Naturalowl.

The Processing Stages and Resources of NaturalOWL
Naturalowl adopts a pipeline architecture, which is common in nlg (Reiter & Dale, 2000), though the number and purpose of its components often vary (Mellish, Scott, Cahill, Paiva, Evans, & Reape, 2006). Our system generates texts in three stages (document planning, micro-planning, and surface realization), discussed in the following sections; see Figure 1.

Document Planning
Document planning consists of content selection, where the system selects the information to convey, and text planning, where it plans the structure of the text to be generated.

Content Selection
In content selection, the system first retrieves from the ontology all the owl statements that are relevant to the class or individual to be described, it then converts the selected owl statements to message triples, which are easier to express as sentences, and it finally selects among the message triples the ones to be expressed.

OWL statements for individual targets
Let us first consider content selection when Naturalowl is asked to describe an individual (an entity), and let us call that individual the target. The system scans the owl statements of the ontology, looking for statements of the forms listed in the left column of Table 1.⁸ In effect, it retrieves all the statements that describe the target directly, as opposed to statements describing another individual or a (named) class the target is related to.
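The retrieval step just described can be sketched in a few lines of Python. This is an illustrative sketch, not Naturalowl's actual implementation: functional-style owl statements are modelled as nested tuples, only a small subset of the statement forms of Table 1 is checked, and all ontology names are hypothetical.

```python
def statement_subject(stmt):
    """Return the individual a statement directly describes, or None."""
    kind = stmt[0]
    if kind == "ClassAssertion":
        return stmt[2]      # ClassAssertion(Class target)
    if kind in ("ObjectPropertyAssertion", "DataPropertyAssertion"):
        return stmt[2]      # ...Assertion(property subject object)
    return None

def retrieve_direct_statements(ontology, target):
    # Keep only the statements that describe the target directly.
    return [s for s in ontology if statement_subject(s) == target]

ontology = [
    ("ClassAssertion", ":Laptop", ":tecraA8"),
    ("ObjectPropertyAssertion", ":manufacturedBy", ":tecraA8", ":toshiba"),
    ("ObjectPropertyAssertion", ":manufacturedBy", ":satelliteP100", ":toshiba"),
]
print(len(retrieve_direct_statements(ontology, ":tecraA8")))  # 2
```

Statements about other individuals (here, :satelliteP100) are left out, even when they mention the target's related entities.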
owl allows arbitrarily many nested ObjectUnionOf and ObjectIntersectionOf operators, which may lead to statements that are very difficult to express in natural language.
To simplify text generation and to ensure that the resulting texts are easy to comprehend, we do not allow nested ObjectIntersectionOf and ObjectUnionOf operators in the ontologies the texts are generated from. In Table 1, this restriction is enforced by requiring class identifiers to appear at some points where owl also allows expressions that construct unnamed classes using operators. If an ontology uses unnamed classes at points where Table 1 requires class identifiers (named classes), it can be easily modified to comply with Table 1 by defining new named classes for the nested unnamed ones.⁹ In practice, nested ObjectUnionOf and ObjectIntersectionOf operators are rare; see the work of Power et al. (Power, 2010; Power & Third, 2010; Power, 2012) for information on the frequencies of different types of owl statements.¹⁰ Statements of the form ClassAssertion(Class target) may be quite complex, because Class is not necessarily a class identifier. It may also be an expression constructing an unnamed class, as in the following example. This is why there are multiple rows for ClassAssertion in Table 1.

ClassAssertion(
    ObjectIntersectionOf(
        :Wine
        ObjectHasValue(:locatedIn :stEmilionRegion)
        ObjectHasValue(:hasColor :red)
        ObjectHasValue(:hasFlavor :strong)
        ObjectHasValue(:madeFrom :cabernetSauvignonGrape)
        ObjectMaxCardinality(1 :madeFrom))
    :chateauTeyssier2007)

Naturalowl would express the owl statement above by generating a text like the following. Recall that the texts of Naturalowl are intended to be read by end-users. Hence, we prefer to generate texts that may not emphasize enough some of the subtleties of the owl statements. Stricter texts of this kind, however, seem inappropriate for end-users. In fact, it could be argued that even mentioning that the wine is made from exactly one grape variety in the text that Naturalowl produces is inappropriate for end-users. Our system can be instructed to avoid mentioning this information via user modeling annotations, discussed below.

8. Some owl statements shown in Table 1 with two arguments can actually have more arguments, but they can be converted to the forms shown.
9. It is also easy to automatically detect nested unnamed classes and replace them, again automatically, by new named classes (classes with owl identifiers). The domain author would have to be consulted, though, to provide meaningful owl identifiers for the new classes (otherwise arbitrary identifiers would have to be used) and natural language names for the new classes (see Section 3.2.1 below).
10. One could also refactor some nested operators. The conversion to message triples, to be discussed below, in effect also performs some refactoring, but it cannot cope with all the possible nested union and intersection operators, which is why we disallow them as a general rule.
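To make the flattening concrete, here is a minimal Python sketch that turns a ClassAssertion like the one above into one message triple per conjunct. It is not the system's actual converter: it handles only a non-nested ObjectIntersectionOf over named classes, ObjectHasValue restrictions, and ObjectMaxCardinality restrictions, with the class expression modelled as nested tuples.

```python
def class_assertion_to_triples(expr, target):
    # Flatten: each conjunct of a top-level ObjectIntersectionOf becomes
    # one message triple about the target.
    conjuncts = list(expr[1:]) if expr[0] == "ObjectIntersectionOf" else [expr]
    triples = []
    for c in conjuncts:
        if isinstance(c, str):                    # named class -> isA triple
            triples.append((target, "isA", c))
        elif c[0] == "ObjectHasValue":            # (op, property, value)
            triples.append((target, c[1], c[2]))
        elif c[0] == "ObjectMaxCardinality":      # (op, n, property)
            triples.append((target, "maxCardinality(%s)" % c[2], c[1]))
    return triples

expr = ("ObjectIntersectionOf",
        ":Wine",
        ("ObjectHasValue", ":locatedIn", ":stEmilionRegion"),
        ("ObjectHasValue", ":hasColor", ":red"),
        ("ObjectHasValue", ":hasFlavor", ":strong"),
        ("ObjectHasValue", ":madeFrom", ":cabernetSauvignonGrape"),
        ("ObjectMaxCardinality", 1, ":madeFrom"))
for t in class_assertion_to_triples(expr, ":chateauTeyssier2007"):
    print(t)
```

Each resulting triple is simple enough to be expressed as one short sentence, which is the point of the conversion discussed below.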

OWL statements for class targets
If the system is asked to describe a class, rather than an individual, it scans the ontology for statements of the forms listed in the left column of Table 2. The class to be described must be a named one, meaning that it must have an owl identifier, and Target denotes its identifier. Again, to simplify the generation process and to avoid producing complicated texts, Table 2 requires class identifiers to appear at some points where owl also allows expressions that construct unnamed classes using operators. If an ontology uses unnamed classes at points where Table 2 requires class identifiers, it can be easily modified.
In texts describing classes, it is difficult to express informally the difference between EquivalentClasses and SubClassOf. EquivalentClasses(C1 C2) means that any individual of C1 also belongs in C2, and vice versa. By contrast, SubClassOf(C1 C2) means that any member of C1 also belongs in C2, but the reverse is not necessarily true. If we replace EquivalentClasses by SubClassOf in the definition of StEmilion of page 672, any member of StEmilion is still necessarily also a member of the intersection, but a wine with all the characteristics of the intersection is not necessarily a member of StEmilion. Consequently, one should perhaps add sentences like the ones shown in italics below, when expressing EquivalentClasses and SubClassOf, respectively. Naturalowl produces the same texts, without the sentences in italics, for both SubClassOf and EquivalentClasses, to avoid generating texts that sound too formal. Also, it may not mention some of the information of the ontology about a target class (e.g., that a St. Emilion has strong flavor), when user modeling indicates that this information is already known or that the text should not exceed a particular length. Hence, the generated texts express necessary, not sufficient, conditions for individuals to belong in the target class.

OWL statements for second-level targets
In some applications, expressing additional owl statements that are indirectly related to the target may be desirable. Let us assume, for example, that the target is the individual exhibit24, and that the following directly relevant statements have been retrieved from the ontology. Naturalowl would express them by generating a text like the one below. The names of classes and individuals can be shown as hyperlinks to indicate that they can be used as subsequent targets. Clicking on a hyperlink would be a request to describe the corresponding class or individual. Alternatively, we may retrieve in advance the owl statements for the subsequent targets and add them to those of the current target. More precisely, assuming that the target is an individual, the subsequent targets, called second-level targets, are the target's class, provided that it is a named one, and the individuals the target is directly linked to via object properties. Naturalowl considers second-level targets only when the current target is an individual, because with class targets, second-level targets often lead to complicated texts. To retrieve owl statements for both the current and the second-level targets (when applicable), or only for the current target, we set the maximum fact distance to 2 or 1, respectively. Returning to exhibit24, let us assume that the maximum fact distance is 2 and that the following owl statements for second-level targets have been retrieved.¹¹

SubClassOf(:Aryballos :Vase)
SubClassOf(:Aryballos ObjectHasValue(:exhibitTypeCannedDescription "An aryballos was a small spherical vase with a narrow neck, in which the athletes kept the oil they spread their bodies with"^^xsd:string))
DataPropertyAssertion(:periodDuration :archaicPeriod "700 BC to 480 BC"^^xsd:string)
DataPropertyAssertion(:periodCannedDescription :archaicPeriod "The archaic period was when the Greek ancient city-states developed"^^xsd:string)
DataPropertyAssertion(:techniqueCannedDescription :blackFigureTechnique "In the black-figure technique, the silhouettes are rendered in black on the pale surface of the clay, and details are engraved"^^xsd:string)

To express all the retrieved owl statements, including those for the second-level targets, Naturalowl would now generate a text like the following, which may be preferable, if this is the first time the user encounters an aryballos and archaic exhibits.
This is an aryballos, a kind of vase. An aryballos was a small spherical vase with a narrow neck, in which the athletes kept the oil they spread their bodies with. This aryballos was found at the Heraion of Delos and it was created during the archaic period. The archaic period was when the Greek ancient city-states developed and it spans from 700 bc to 480 bc. This aryballos was decorated with the black-figure technique. In the black-figure technique, the silhouettes are rendered in black on the pale surface of the clay, and details are engraved. This aryballos is currently in the Museum of Delos.
We note that in many ontologies it is impractical to represent all the information in logical terms. In our example, it is much easier to store the information that "An aryballos was a small . . . bodies with" as a string, i.e., as a canned sentence, rather than defining classes, properties, and individuals for spreading actions, bodies, etc., and generating the sentence from a logical meaning representation. Canned sentences, however, have to be entered in multiple versions, if several languages or user types need to be supported.
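The maximum fact distance mechanism described above can be sketched as follows. Facts are modelled as simple (subject, property, object) assertions; the helper names and the miniature fact list are made up for illustration and are not taken from Naturalowl's implementation.

```python
def second_level_targets(facts, target, target_class):
    # Second-level targets: the target's named class, plus the
    # individuals the target is directly linked to (non-isA edges).
    neighbours = {o for s, p, o in facts if s == target and p != "isA"}
    return {target_class} | neighbours

def retrieve(facts, target, target_class, max_fact_distance=1):
    # Distance 1: facts directly about the target.
    selected = [f for f in facts if f[0] == target]
    if max_fact_distance >= 2:
        # Distance 2: also facts about the second-level targets.
        seconds = second_level_targets(facts, target, target_class)
        selected += [f for f in facts if f[0] in seconds]
    return selected

facts = [
    (":exhibit24", "isA", ":Aryballos"),
    (":exhibit24", ":creationPeriod", ":archaicPeriod"),
    (":archaicPeriod", ":periodDuration", "700 BC to 480 BC"),
]
print(len(retrieve(facts, ":exhibit24", ":Aryballos", max_fact_distance=2)))  # 3
```

With the distance set to 1, the fact about the archaic period's duration would be left out; setting it to 2 pulls it in, as in the aryballos text above.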

Converting OWL statements to message triples
Tables 1 and 2 also show how the retrieved owl statements can be rewritten as triples of the form ⟨S, P, O⟩, where S is the target or a second-level target; O is an individual, datatype value, class, or a set of individuals, datatype values, or classes that S is mapped to; and P specifies the kind of mapping. We call S the semantic subject or owner of the triple, and O the semantic object or filler; the triple can also be viewed as a field named P, owned by S, and filled by O. For example, the owl statements about exhibit24 shown above, including those about the second-level targets, are converted to the following triples.
Every owl statement or collection of owl statements can be represented as a set of rdf triples.¹² The triples of Tables 1-2 are similar, but not the same as rdf triples. Most notably, expressions of the form modifier(ρ) cannot be used as P in rdf triples. To avoid confusion, we call the triples of Tables 1-2 message triples, to distinguish them from rdf triples. As with rdf triples, message triples can be viewed as forming a graph. Figure 2 shows the graph for the message triples of exhibit24; the triple linking blackFigureTechnique to a canned sentence is not shown to save space. The second-level targets are the classes and individuals at distance one from the target (exhibit24).¹³ By contrast, the graph for the rdf triples representing the owl statements would be more complicated, and second-level targets would not always be at distance one from the target.
Each message triple is intended to be easily expressible as a simple sentence, which is not always the case with rdf triples representing owl statements. The message triples also capture similarities of the sentences to be generated that may be less obvious when looking at the original owl statements or the rdf triples representing them. For example, the ClassAssertion and SubClassOf statements below are mapped to identical message triples, apart from the identifiers of the individual and the class, and the similarity of the message triples reflects the similarity of the resulting sentences, also shown below.
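As a concrete data structure, a message triple could be represented as below. The class name and fields are assumptions for illustration, following the S, P, O terminology above; the point is that the predicate slot may hold a modifier expression such as maxCardinality(:madeFrom), which an rdf triple could not carry as its predicate.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MessageTriple:
    subject: str     # semantic subject (owner), e.g. ":StEmilion"
    predicate: str   # property identifier or modifier expression
    obj: object      # individual, class, datatype value, or set thereof

# A ClassAssertion about an individual and a SubClassOf axiom about a
# class yield identical triples apart from the subject identifier.
t_individual = MessageTriple(":product145", "maxCardinality(:madeFrom)", 1)
t_class = MessageTriple(":StEmilion", "maxCardinality(:madeFrom)", 1)
print(t_individual.predicate == t_class.predicate)  # True
```

The shared predicate is what lets both statements be expressed by sentences of the same shape ("... is/are made from at most one grape").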
12. See http://www.w3.org/TR/owl2-mapping-to-rdf/.
13. Instead of retrieving the owl statements about the target and second-level targets and then converting them to message triples, one could equivalently convert all the owl statements of the ontology to message triples and select the message triples connecting the target to nodes up to distance two from the target.
By contrast, without the conversion to message triples, the owl statements and the rdf triples representing them would lead to sentences that are more difficult to follow, like the following:

Product 145 is a member of the class of individuals that are made from at most one grape.
St. Emilion is a subclass of the class of individuals that are made from at most one grape.
As a further example, Tables 1 and 2 discard ObjectIntersectionOf operators, producing multiple message triples instead. For example, the EquivalentClasses statement defining StEmilion on page 672 would be converted to the following message triples.
<:StEmilion, isA, :Bordeaux>
<:StEmilion, :locatedIn, :stEmilionRegion>
<:StEmilion, :hasColor, :red>
<:StEmilion, :hasFlavor, :strong>
<:StEmilion, :madeFromGrape, :cabernetSauvignonGrape>
<:StEmilion, maxCardinality(:madeFromGrape), 1>

The resulting message triples correspond to the sentences below, where subsequent references to StEmilion have been replaced by pronouns to improve readability; the sentences could also be aggregated into longer ones, as discussed in later sections. By contrast, the original owl statement of page 672 and the rdf triples representing it would lead to the 'stricter' text of page 678, which is inappropriate for end-users, as already noted. Notice, also, that Table 2 converts EquivalentClasses and SubClassOf statements to identical triples, where P is isA, since Naturalowl produces the same texts for both kinds of statements, as already discussed.
The house wine has strong flavor or it has medium flavor.
The house wine has strong or medium flavor.
By contrast, the owl statement and the corresponding rdf triples in effect say that: The house wine is a member of the union of: (i) the class of all wines that have strong flavor, and (ii) the class of all wines that have medium flavor.

Interest scores and repetitions
Expressing all the message triples of all the retrieved owl statements is not always appropriate. Let us assume, for example, that the maximum fact distance is 2 and that a description of exhibit24 of Figure 2 has been requested by a museum visitor. It may be the case that the visitor has already encountered other archaic exhibits, and that the duration of the archaic period was mentioned in previous descriptions. Repeating the duration of the period may, thus, be undesirable. We may also want to exclude message triples that are uninteresting to particular types of users. For example, there may be message triples providing bibliographic references, which children would probably find uninteresting.
Naturalowl provides mechanisms allowing the domain author to assign an importance score to every possible message triple, and possibly different scores for different user types (e.g., adults, children). The score is a non-negative integer indicating how interesting a user of the corresponding type will presumably find the information of the message triple, if the information has not already been conveyed to the user. In the museum projects Naturalowl was originally developed for, the interest scores ranged from 0 (completely uninteresting) to 3 (very interesting), but a different range can also be used. The scores can be specified for all the message triples that involve a particular property P (e.g., P = madeFrom), or for all the message triples that involve semantic subjects S of a particular class (e.g., S ∈ Statue or S = Statue) and a particular property P, or for message triples that involve particular semantic subjects (e.g., S = exhibit37) and a particular property P. For example, we may wish to specify that the materials of the exhibits in a collection are generally of medium interest (P = madeFrom, score 2), that the materials of statues are of lower interest (S ∈ Statue, P = madeFrom, score 1), perhaps because all the statues of the collection are made from stone, but that the material of the particular statue exhibit24 is very important (S = exhibit24, P = madeFrom, score 3), perhaps because exhibit24 is a gold statue.
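The three levels of specificity just described can be sketched as a lookup in which more specific entries override more general ones. The dictionaries below are hypothetical and only mirror the museum example; the actual mechanisms of Naturalowl are described elsewhere, as noted next.

```python
property_scores = {":madeFrom": 2}                          # P alone
class_property_scores = {(":Statue", ":madeFrom"): 1}       # (class of S, P)
subject_property_scores = {(":exhibit24", ":madeFrom"): 3}  # (S, P)

def interest_score(subject, subject_class, prop, default=1):
    # Most specific applicable score wins.
    if (subject, prop) in subject_property_scores:
        return subject_property_scores[(subject, prop)]
    if (subject_class, prop) in class_property_scores:
        return class_property_scores[(subject_class, prop)]
    return property_scores.get(prop, default)

print(interest_score(":exhibit24", ":Statue", ":madeFrom"))  # 3: subject-level
print(interest_score(":exhibit37", ":Statue", ":madeFrom"))  # 1: class-level
print(interest_score(":vase12", ":Vase", ":madeFrom"))       # 2: property-level
```

The gold statue exhibit24 gets its own high score, other statues fall back to the class-level score, and all remaining exhibits fall back to the property-level score.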
We do not discuss the mechanisms that can be used to assign interest scores to message triples in this article; a detailed description of these mechanisms can be found elsewhere (Androutsopoulos et al., 2012). We also note that when human-authored texts describing individuals and classes of the ontology are available along with the owl statements or, more generally, the logical facts they express, statistical and machine learning methods can be employed to learn to automatically select or assign interest scores to logical facts (Duboue & McKeown, 2003; Barzilay & Lapata, 2005; Kelly, Copestake, & Karamanis, 2010). Another possibility (Demir, Carberry, & McCoy, 2010) would be to compute the interest scores with graph algorithms like PageRank (Brin & Page, 1998).
The domain author can also specify how many times each message triple has to be repeated, before it can be assumed that users of different types have assimilated it. Once a triple has been assimilated, it is never repeated in texts for the same user. For example, the domain author can specify that children assimilate the duration of a historical period when it has been mentioned twice; hence, the system may repeat, for example, the duration of the archaic period in two texts. NaturalOWL maintains a personal model for each end-user. The model shows which message triples were conveyed to the particular user in previous texts, and how many times. Again, more information about the user modeling mechanisms of NaturalOWL can be found elsewhere (Androutsopoulos et al., 2012).

Selecting the message triples to convey
When asked to describe a target, NaturalOWL first retrieves from the ontology the relevant OWL statements, possibly also for second-level targets. It then converts the retrieved statements to message triples, and consults their interest scores and the personal user models to rank the message triples by decreasing interest score, discarding triples that have already been assimilated. If a message triple about the target has been assimilated, then all the message triples about second-level targets that are connected to the assimilated triple are also discarded; for example, if the creationPeriod triple (edge) of Figure 2 has been assimilated, then the triples about the archaic period (the edges leaving from archaicPeriod) are also discarded. The system then selects up to maxMessagesPerPage triples from the most interesting remaining ones; maxMessagesPerPage is a parameter whose value can be set to smaller or larger values for types of users that prefer shorter or longer texts, respectively.
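The selection step can be sketched as follows. This is a minimal illustration under our own assumptions; the real system also discards triples connected to assimilated second-level targets and updates the per-user model, and all names here are hypothetical.

```python
# Minimal sketch of content selection: drop assimilated triples, rank the
# rest by decreasing interest score, keep at most max_messages of them.
def select_triples(triples, scores, assimilated, max_messages):
    remaining = [t for t in triples if t not in assimilated]
    remaining.sort(key=lambda t: scores.get(t, 0), reverse=True)
    return remaining[:max_messages]

triples = [("aryballos1", "madeFrom", "clay"),
           ("aryballos1", "creationPeriod", "archaicPeriod"),
           ("aryballos1", "foundIn", "corinth")]
scores = {triples[0]: 2, triples[1]: 3, triples[2]: 1}
# The foundIn triple has already been assimilated by this user.
selected = select_triples(triples, scores,
                          assimilated={triples[2]}, max_messages=2)
```

Here `selected` contains the creationPeriod triple (score 3) followed by the madeFrom triple (score 2); the assimilated foundIn triple is excluded before ranking.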

Limitations of content selection
OWL allows one to define the broadest possible domain and range of a particular property, using statements like the following.
ObjectPropertyDomain(:madeFrom :Wine)
ObjectPropertyRange(:madeFrom :Grape)

In practice, more specific range restrictions are then imposed for particular subclasses of the property's domain. For example, the following statements specify that when madeFrom is used with individuals from the subclass GreekWine of Wine, the range (possible values) of madeFrom should be restricted to individuals from the subclass GreekGrape of Grape.
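In OWL 2 functional syntax, a restriction of this kind is typically expressed with a universal (allValuesFrom) restriction on the subclass; the following is a sketch of what such statements could look like, not necessarily the exact axioms used in the original ontology.

```
SubClassOf(:GreekWine :Wine)
SubClassOf(:GreekGrape :Grape)
SubClassOf(:GreekWine ObjectAllValuesFrom(:madeFrom :GreekGrape))
```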
More generally, NaturalOWL does not consider OWL statements that express axioms about properties, i.e., statements declaring that a property is symmetric, asymmetric, reflexive, irreflexive, transitive, or functional, that its inverse is functional, that it is the inverse of or disjoint with another property, that it is subsumed by a chain of other properties, or that it is a subproperty (more specific version) of another property. Statements of this kind are mostly useful in consistency checks, in deduction, or when generating texts describing the properties themselves (e.g., what being a grandparent of somebody means).14

Text Planning
For each target, the previous mechanisms produce the message triples to be expressed, with each triple intended to be easily expressible as a single sentence. The text planner of NaturalOWL then orders the message triples, in effect ordering the corresponding sentences.

Global and local coherence
When considering global coherence, text planners attempt to build a structure, usually a tree, that shows how the clauses, sentences, or larger segments of the text are related to each other, often in terms of rhetorical relations (Mann & Thompson, 1988). The allowed or preferred orderings of the sentences (or segments) often follow, at least partially, from the global coherence structure. In the texts that NaturalOWL is intended to generate, however, the global coherence structures tend to be rather uninteresting, because most of the sentences simply provide additional information about the target or the second-level targets; this is why global coherence is not considered in NaturalOWL.15 When considering local coherence, text planners usually aim to maximize measures that examine whether or not adjacent sentences (or segments) continue to focus on the same entities or, if the focus changes, how smooth the transition is. Many local coherence measures are based on Centering Theory (CT) (Grosz, Joshi, & Weinstein, 1995; Poesio, Stevenson, & Di Eugenio, 2004). Consult the work of Karamanis et al. (2009) for an introduction to CT and a CT-based analysis of M-PIRO's texts, which also applies to the texts of NaturalOWL.
When the maximum fact distance of NaturalOWL is 1, all the sentence-to-sentence transitions are of a type known in CT as continue, which is the preferred type. If the maximum fact distance is 2, however, the transitions are not always continue. We repeat below the long aryballos description of page 681 without sentence aggregation. For readers familiar with CT, we show in italics the most salient noun phrase of each sentence un, which realizes the discourse entity known as the preferred center Cp(un). The underlined noun phrases realize the backward-looking center Cb(un), roughly speaking the most salient discourse entity of the previous sentence that is also mentioned in the current sentence. In sentence 4, where Cp(u4) is the target exhibit, Cb(u4) is undefined and the transition from sentence 3 to 4 is a nocb, a type of transition to be avoided; we mark nocb transitions with bullets. In sentence 6, Cp(u6) = Cb(u6) = Cb(u5), and we have a kind of transition known as smooth-shift (Poesio et al., 2004), less preferred than continue, but better than nocb. Another nocb occurs from sentence 7 to 8, followed by a smooth-shift from sentence 8 to 9, and another nocb from sentence 9 to 10. All the other transitions are continue.
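The standard CT transition typology used in this analysis can be sketched as a small classifier. This follows the usual definitions (with the smooth/rough-shift refinement of Poesio et al.), not any NaturalOWL code; the discourse entities are represented simply as strings.

```python
# Classify a Centering Theory transition from the current sentence's
# backward-looking center Cb, the previous sentence's Cb, and the current
# sentence's preferred center Cp (None = undefined).
def ct_transition(cb_cur, cb_prev, cp_cur):
    if cb_cur is None:
        return "nocb"                       # no backward-looking center
    if cb_cur == cb_prev or cb_prev is None:
        # Cb is retained from the previous sentence.
        return "continue" if cb_cur == cp_cur else "retain"
    # Cb has changed.
    return "smooth-shift" if cb_cur == cp_cur else "rough-shift"
```

For instance, a sentence that keeps both Cb and Cp on the exhibit yields a continue, while a sentence whose Cb shifts to a newly salient period yields a smooth-shift.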
The text planner of NaturalOWL groups together sentences (message triples) that describe a particular second-level target (e.g., sentences 2-3, 6-7, and 9) and places each group immediately after the sentence that introduces the corresponding second-level target (immediately after sentences 1, 5, and 8). Thus the transition from a sentence that introduces a second-level target to the first sentence that describes the second-level target (e.g., from sentence 1 to 2, from 5 to 6, from 8 to 9) is a smooth-shift (or a continue in the special case from the initial sentence 1 to 2). A nocb occurs only at sentences that return to providing information about the primary target, after a group of sentences that provide information about a second-level target. All the other transitions are of type continue.
A simple strategy to avoid nocb transitions would be to end the generated text once all the message triples that describe a second-level target have been reported, and to record in the user model that the other message triples that content selection provided were not actually conveyed. In our example, this would generate sentences 1 to 3; then, if the user requested more information about the exhibit, sentences 4 to 7 would be generated, and so on.

Topical order
When ordering sentences, we also need to consider the topical similarity of adjacent sentences. Compare, for example, the following two texts. Even though both texts contain the same sentences, the second text is more difficult to follow, if acceptable at all. The first one is better, because it groups together topically related sentences. We mark the sentence groups in the first text with curly brackets, but the brackets would not be shown to end-users. In longer texts, sentence groups may optionally be shown as separate paragraphs or sections, which is why we call them sections.
To allow the message triples (and the corresponding sentences) to be grouped by topic, the domain author may define sections (e.g., locationSection, buildSection) and assign each property to a single section (e.g., assign the properties isInArea and isNextTo to locationSection). Each message triple is then placed in the section of its property. An ordering of the sections and of the properties inside each section can also be specified, causing the message triples to be ordered accordingly (e.g., we may specify that locationSection should precede buildSection, and that inside locationSection the isInArea property should be expressed before isNextTo). The sections, the assignments of properties to sections, and the order of the sections and the properties are defined in the domain-dependent generation resources (Androutsopoulos et al., 2012).

The overall text planning algorithm
NaturalOWL's text planning algorithm is summarized in Figure 3. If the message triples to be ordered include triples that describe second-level targets, i.e., triples ⟨S, P, O⟩ whose owner S is a second-level target, then the triples of the primary and each second-level target are ordered separately, using the ordering of properties and sections. The ordered triples of each second-level target are then inserted into the ordered list of the primary target's triples, immediately after the first triple that introduces the second-level target, i.e., immediately after the first triple whose O is the second-level target.
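The ordering-and-splicing step can be sketched as follows. This is our own illustrative rendering of the algorithm just described, with the section/property ordering abstracted into a single ranking function; the names and data are hypothetical.

```python
# Sketch of the text planner: order the primary target's triples, then
# splice each second-level target's (ordered) triples immediately after
# the first triple that introduces that target.
def plan_text(primary_triples, second_level, order_key):
    plan = sorted(primary_triples, key=order_key)
    for target, triples in second_level.items():
        group = sorted(triples, key=order_key)
        for i, (s, p, o) in enumerate(plan):
            if o == target:                 # first triple introducing it
                plan[i + 1:i + 1] = group
                break
    return plan

primary = [("exhibit24", "creationPeriod", "archaicPeriod"),
           ("exhibit24", "madeFrom", "clay")]
second_level = {"archaicPeriod":
                [("archaicPeriod", "duration", "700-480 BC")]}
PROPERTY_ORDER = {"madeFrom": 0, "creationPeriod": 1, "duration": 0}
plan = plan_text(primary, second_level, lambda t: PROPERTY_ORDER[t[1]])
```

The duration triple of the archaic period ends up right after the creationPeriod triple that introduces that period, mirroring the grouping behavior described above.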

Further related work on text planning
The ordering of properties and sections is similar to text schemata (McKeown, 1985), roughly speaking domain-dependent patterns that specify the possible arrangements of different types of sentences (or segments). Sentence ordering has been studied extensively in text summarization (Barzilay, Elhadad, & McKeown, 2002). Duboue and McKeown (2001) discuss methods that could be used to learn the order of sentences or other segments in NLG from semantically tagged training corpora. Consult also the work of Barzilay and Lee (2004), Elsner et al. (2007), Barzilay and Lapata (2008), and Chen et al. (2009).
Figure 4: A lexicon entry for the verb "to find".

Micro-planning
The processing stages we have discussed so far select and order the message triples to be expressed. The next stage, micro-planning, consists of three sub-stages: lexicalization, sentence aggregation, and generation of referring expressions; see also Figure 1 on page 676.

Lexicalization
During lexicalization, NLG systems usually turn the output of content selection (in our case, the message triples) into abstract sentence specifications. In NaturalOWL, for every property of the ontology and every supported natural language, the domain author may specify one or more template-like sentence plans to indicate how message triples involving that property can be expressed. We discuss below how sentence plans are specified, but first a brief digression is necessary, to introduce the lexicon entries of NaturalOWL.

Lexicon entries
For each verb, noun, or adjective that the domain author wishes to use in the sentence plans, a lexicon entry has to be provided, to specify the inflectional forms of that word.16 All the lexicon entries are multilingual (currently bilingual); this allows sentence plans to be reused across similar languages when no better option is available, as discussed elsewhere (Androutsopoulos et al., 2007). Figure 4 shows the lexicon entry for the verb whose English base form is "find", as viewed by the domain author when using the Protégé plug-in of NaturalOWL. The identifier of the lexicon entry is toFindLex. The English part of the entry shows that the base form is "find", the simple past is "found", etc. Similarly, the Greek part of the lexicon entry would show the base form of the corresponding verb ("βρίσκω") and its inflectional forms in the various tenses, persons, etc. The lexicon entries for nouns and adjectives are very similar.
Most of the English inflectional forms could be automatically produced from the base forms by using simple morphology rules. We hope to exploit an existing English morphology component, such as that of SimpleNLG (Gatt & Reiter, 2009), in future work. Similar morphology rules for Greek were used in the authoring tool of M-PIRO (Androutsopoulos et al., 2007), and we hope to include them in a future version of NaturalOWL. Rules of this kind would reduce the time a domain author spends creating lexicon entries. In the ontologies we have considered, however, a few dozen lexicon entries for verbs, nouns, and adjectives suffice. Hence, even without facilities to automatically produce inflectional forms, creating the lexicon entries is rather trivial. Another possibility would be to exploit a general-purpose lexicon or lexical database, like WordNet (Fellbaum, 1998) or CELEX, though resources of this kind often do not cover the highly technical concepts of ontologies.17
16. No lexicon entries need to be provided for closed-class words, like determiners and prepositions.
The lexicon entries and, more generally, all the domain-dependent generation resources of NaturalOWL are stored as instances of an OWL ontology (other than the ontology the texts are generated from) that describes the linguistic resources of the system (Androutsopoulos et al., 2012). The domain author, however, interacts with the plug-in and does not need to be aware of the OWL representation of the resources. By representing the domain-dependent generation resources in OWL, it becomes easier to publish them on the Web, check them for inconsistencies, etc., as with other OWL ontologies.

Sentence plans
In NaturalOWL, a sentence plan is a sequence of slots, along with instructions specifying how to fill them in. Figure 5 shows an English sentence plan for the property usedDuringPeriod, as viewed by the domain author when using the Protégé plug-in of NaturalOWL. The sentence plan expresses message triples of the form ⟨S, usedDuringPeriod, O⟩ by producing sentences like the following. The first slot of the sentence plan of Figure 5 is to be filled in with an automatically generated referring expression for the owner (S) of the triple. For example, if the triple to express is <:stoaZeusEleutherios, :usedDuringPeriod, :classicalPeriod>, an appropriate referring expression for S may be a demonstrative noun phrase like "this stoa", a pronoun ("it"), or the monument's natural language name ("the Stoa of Zeus Eleutherios"). We discuss the generation of referring expressions below, along with mechanisms to specify natural language names. The sentence plan also specifies that the referring expression must be in nominative case (e.g., "it" or "this stoa", as opposed to the genitive case expressions "its" or "this stoa's", as in "This stoa's height is 5 meters").
The second slot is to be filled in with a form of the verb whose lexicon identifier is toUseVerb. The verb form must be in the simple past and passive voice, with positive polarity (as opposed to "was not used"). Its number must agree with the number of the expression in the first slot; for example, we want to generate "The Stoa of Zeus Eleutherios was used", but "Stoas were used". The third slot is filled in with the preposition "during". The fourth slot is filled in with an expression for the filler (O) of the message triple, in accusative case.18 With <:stoaZeusEleutherios, :usedDuringPeriod, :classicalPeriod>, the slot would be filled in with the natural language name of classicalPeriod.19 The sentence plan also allows the resulting sentence to be aggregated with other sentences.
17. See http://www.ldc.upenn.edu/Catalog/catalogEntry.jsp?catalogId=LDC96L14 for CELEX.
18. English prepositions usually require noun phrase complements in accusative (e.g., "on him"). In Greek and other languages, cases have more noticeable effects.
19. Future versions of NaturalOWL may allow a referring expression for O other than its natural language name to be produced (e.g., a pronoun), as with S.
More generally, the instructions of a sentence plan may indicate that a slot should be filled in with one of the following (i-vii):
(i) A referring expression for the S (owner) of the message triple. A sentence plan may specify a particular type of referring expression to use (e.g., always use the natural language name of S) or, as in the example of Figure 5, it may allow the system to automatically produce the most appropriate type of referring expression depending on the context.
(ii) A verb for which there is a lexicon entry, in a particular form, possibly a form that agrees with another slot. The polarity of the verb can also be manually specified or, if the filler (O) of the message triple is a Boolean value, the polarity can be automatically set to match that value (e.g., to produce "It does not have a built-in flash" when O is false).
(iii) A noun or adjective from the lexicon, in a particular form (e.g., case, number), or in a form that agrees with another slot.
(iv) A preposition, or (v) a fixed string. (vii) A concatenation of property values of O, provided that O is an individual. For example, we may need to express a message triple like the first one below, whose (anonymous in the RDF sense) object :n is linked to both a numeric value (via hasAmount) and an individual standing for the currency (via hasCurrency).
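The slot-filling idea can be sketched as a small interpreter over a slot sequence. This is a toy rendering under our own assumptions (only a few slot types, no case, agreement, or polarity handling); the representation and names are hypothetical, not NaturalOWL's internal format.

```python
# Toy realization of a sentence plan as a sequence of (kind, value) slots.
# `refer` and `name` stand in for the referring-expression generator and
# the NL-name lookup discussed in the text.
def realize(plan, triple, refer, name):
    s, p, o = triple
    out = []
    for kind, value in plan:
        if kind == "ref-S":
            out.append(refer(s))        # referring expression for S
        elif kind in ("verb", "prep", "string"):
            out.append(value)           # already inflected / fixed
        elif kind == "name-O":
            out.append(name(o))         # NL name of the filler O
    return " ".join(out) + "."

plan = [("ref-S", None), ("verb", "was used"),
        ("prep", "during"), ("name-O", None)]
triple = ("stoaZeusEleutherios", "usedDuringPeriod", "classicalPeriod")
sentence = realize(plan, triple,
                   refer=lambda s: "This stoa",
                   name=lambda o: "the Classical period")
```

With these stand-ins, the plan yields "This stoa was used during the Classical period.", matching the shape of the sentence plan in Figure 5.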

Default sentence plan
If no sentence plan has been provided for a particular property of the ontology, NaturalOWL uses a default sentence plan, consisting of three slots. The first slot is filled in with an automatically generated referring expression for the owner (S) of the triple, in nominative case. The second slot is filled in with a tokenized form of the OWL identifier of the property. The third slot is filled in with an appropriate expression for the filler (O) of the triple, as discussed above, in accusative case (if applicable). For the following message triple, the default sentence plan would produce the sentence shown below it:

<:stoaZeusEleutherios, :usedDuringPeriod, and(:classicalPeriod, :hellenisticPeriod, :romanPeriod)>

Stoa zeus eleutherios used during period classical period, hellenistic period, and roman period.
Notice that we use a single message triple with an and(...) filler, instead of a different triple for each period. This kind of triple merging is in effect a form of aggregation, discussed below, but it takes place during content selection. We also assumed in the sentence above that the natural language names of the individuals have not been provided; in this case, NaturalOWL uses tokenized forms of the OWL identifiers of the individuals instead. The tokenizer of NaturalOWL can handle both CamelCase (e.g., :usedDuringPeriod) and underscore style (e.g., :used_during_period). When other styles are used in the identifiers of properties, classes, and individuals, the output of the tokenizer may be worse than the example suggests, but the resulting sentences can be improved by providing sentence plans and by associating classes and individuals with natural language names, as discussed below.
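A tokenizer with this behavior can be sketched in a few lines. This is our own illustration of the described behavior, not NaturalOWL's actual implementation.

```python
import re

# Tokenize an OWL identifier, handling both CamelCase and underscore style,
# as described in the text.
def tokenize_identifier(identifier):
    identifier = identifier.lstrip(":").replace("_", " ")
    # Insert a space at each lowercase/digit-to-uppercase boundary.
    spaced = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", identifier)
    return spaced.lower()
```

For example, both `:usedDuringPeriod` and `:used_during_period` tokenize to "used during period", and `:stoaZeusEleutherios` tokenizes to "stoa zeus eleutherios".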
Using rdfs:label strings

OWL properties (and other elements of OWL ontologies) can be labeled with strings in multiple natural languages using the rdfs:label annotation property, defined in the RDF and OWL standards. For example, the usedDuringPeriod property could be labeled with "was used during" as shown below; there could be similar labels for Greek and other languages.
AnnotationAssertion(rdfs:label :usedDuringPeriod "was used during"@en)

If an rdfs:label string has been specified for the property of a message triple, NaturalOWL uses that string in the second slot of the default sentence plan. The quality of the resulting sentences can thus be improved, if the rdfs:label strings are more natural phrases than the tokenized property identifiers. With the rdfs:label shown above, the default sentence plan would produce the following sentence.
Stoa zeus eleutherios was used during classical period, hellenistic period, and roman period.
Even with rdfs:label strings, the default sentence plan may produce sentences with disfluencies. Also, the rdfs:label strings do not indicate the grammatical categories of their words, which prevents the system from applying many of the sentence aggregation rules discussed below. A further limitation of the default sentence plan is that it does not allow the slots for S and O to be preceded or followed, respectively, by any other phrase.

Sentence plans for domain-independent and modified properties
The domain author does not need to provide sentence plans for domain-independent properties (e.g., instanceOf, isA; see Tables 1-2). These properties have fixed, domain-independent semantics; hence, built-in sentence plans are used. The English built-in sentence plans, which also serve as further examples of sentence plans, are summarized in Table 3; the Greek built-in sentence plans are similar. To save space, we show the sentence plans as templates in Table 3, and we do not show the sentence plans for negated domain-independent properties (e.g., not(isA)), which are similar.

Table 3: Built-in English sentence plans for domain-independent properties. Notation: ref(ξ) stands for a referring expression for ξ; name(ξ) is the natural language name of ξ; name(indef, ξ) and name(noarticle, ξ) mean that the name should be a noun phrase with an indefinite or no article. Sentence plans involving name(adj, ξ) are used when the natural language name of ξ is a sequence of one or more adjectives; otherwise the sentence plan of the previous row is used.

Additional slot restrictions not shown in Table 3 require, for example, subject-verb number agreement and the verb forms ("is" or "was") to be in present tense. Information provided when specifying the natural language names of individuals and classes, discussed below, shows whether definite, indefinite, or no articles should be used (e.g., "the n97 mini", "exhibit 24", "a St. Emilion" or "the St. Emilion" or simply "St. Emilion"), and what the default number of each name is (e.g., "A wine color is" or "Wine colors are"). It is also possible to modify the built-in sentence plans; for example, in a museum context we may wish to generate "An aryballos was a kind of vase" instead of "An aryballos is a kind of vase".

Specifying the appropriateness of sentence plans
Multiple sentence plans may be provided for the same property of the ontology and the same language. Different appropriateness scores (similar to the interest scores of properties) can then be assigned to the alternative sentence plans per user type. This allows specifying, for example, that a sentence plan that generates sentences like "This amphora depicts Miltiades" is less appropriate when interacting with children than an alternative sentence plan with a more common verb (e.g., "shows"). Automatically constructed sentence plans inherit the appropriateness scores of the sentence plans they are constructed from.

Related work on sentence plans
The sentence plans of NaturalOWL are similar to expressions of sentence planning languages like SPL (Kasper & Whitney, 1989) that are used in generic surface realizers, such as FUF/SURGE (Elhadad & Robin, 1996), KPML (Bateman, 1997), RealPro (Lavoie & Rambow, 1997), Nitrogen/HALogen (Langkilde, 2000), and OpenCCG (White, 2006). The sentence plans of NaturalOWL, however, leave fewer decisions to subsequent stages. This has the disadvantage that our sentence plans often include information that could be obtained from large-scale grammars or corpora (Wan, Dras, Dale, & Paris, 2010). On the other hand, the input to generic surface realizers often refers to non-elementary linguistic concepts (e.g., features of a particular syntax theory) and concepts of an upper model (Bateman, 1990); the latter is a high-level domain-independent ontology that may use a very different conceptualization than the ontology the texts are to be generated from. Hence, linguistic expertise, for example in Systemic Grammars (Halliday, 1994) in the case of KPML (Bateman, 1997), and effort to understand the upper model are required. By contrast, the sentence plans of NaturalOWL require the domain author to be familiar with only elementary linguistic concepts (e.g., tense, number), and they do not require familiarity with an upper model. Our sentence plans are simpler than, for example, the templates of Busemann and Horacek (1999) or McRoy et al. (2003), in that they do not allow, for instance, conditionals or recursive invocation of other templates. See also the work of Reiter (1995) and van Deemter et al. (2005) for a discussion of template-based vs. more principled NLG.
When corpora of texts annotated with the message triples they express are available, templates can also be automatically extracted (Ratnaparkhi, 2000; Angeli, Liang, & Klein, 2010; Duma & Klein, 2013). Statistical methods that jointly perform content selection, lexicalization, and surface realization have also been proposed (Liang, Jordan, & Klein, 2009; Konstas & Lapata, 2012a, 2012b), but they are currently limited to generating single sentences from flat records.

Specifying natural language names
The domain author can assign natural language (NL) names to the individuals and named classes of the ontology; recall that by named classes we mean classes that have OWL identifiers. If an individual or named class is not assigned an NL name, then its rdfs:label or a tokenized form of its identifier is used instead. The NL names that the domain author provides are specified much like sentence plans, i.e., as sequences of slots. For example, we may specify that the English NL name of the class ItalianWinePiemonte is the concatenation of the following slots; we explain the slots below.

[ indef an] [ adj Italian] [ headnoun wine] [prep from] [ def the] [noun Piemonte] [noun region]
This would allow NaturalOWL to generate the sentence shown below from the following message triple; a tokenized form of the identifier of wine32 is used.
Similarly, we may assign the following NL names to the individuals classicalPeriod, stoaZeusEleutherios, gl2011, and the classes ComputerScreen and Red. NaturalOWL makes no distinction between common and proper nouns; both are entered as nouns in the lexicon, and may be multi-word (e.g., "Zeus Eleutherios"). NaturalOWL can also be instructed to capitalize the words of particular slots (e.g., "Classical"). These NL names could be used to express the message triples shown below:

<:stoaZeusEleutherios, :usedDuringPeriod, :classicalPeriod>
The Stoa of Zeus Eleutherios was used during the Classical period.
More precisely, each NL name is a sequence of slots, with accompanying instructions specifying how the slots are to be filled in. Each slot can be filled in with: (i) An article, definite or indefinite. The article in the first slot (if present) is treated as the article of the overall NL name.
(ii) A noun or adjective flagged as the head (main word) of the NL name. Exactly one head must be specified per NL name, and it must have a lexicon entry. The number and case of the head, which are also taken to be the number and case of the overall NL name, can be automatically adjusted per context. For example, different sentence plans may require the same NL name to be in nominative case when used as a subject, but in accusative when used as the object of a verb; and some aggregation rules, discussed below, may require a singular NL name to be turned into plural. Using the lexicon entries, which list the inflectional forms of nouns and adjectives, NaturalOWL can adjust the NL names accordingly. The gender of head adjectives can also be automatically adjusted, whereas the gender of head nouns is fixed and specified by their lexicon entries.
(iii) Any other noun or adjective, among those listed in the lexicon. The NL name may require a particular inflectional form to be used, or it may require an inflectional form that agrees with another slot of the NL name.
(iv) A preposition, or (v) any fixed string.
As with sentence plans, the domain author specifies NL names by using the Protégé plug-in of NaturalOWL. Multiple NL names can be specified for the same individual or class, and they can be assigned different appropriateness scores per user type; hence, different terminology (e.g., common names of diseases) can be used when generating texts for non-experts, as opposed to texts for experts (e.g., doctors). The domain author can also specify, again using the plug-in, whether the NL names of particular individuals or classes should involve definite, indefinite, or no articles, and whether the NL names should be in singular or plural by default. For example, we may prefer the texts to mention the class of aryballoi as a single particular generic object, or by using an indefinite singular or plural form, as shown below.
The aryballos is a kind of vase. An aryballos is a kind of vase. Aryballoi are a kind of vase.
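The slot-based NL names above can be sketched as follows. This is a deliberately simplified illustration under our own assumptions (only singular/plural adjustment of the head, indefinite article dropped in plural); the representation is hypothetical, and real entries come from the lexicon with full inflection tables.

```python
# Toy lexicon mapping head nouns to their singular and plural forms.
LEXICON = {"wine": {"sg": "wine", "pl": "wines"}}

# Realize an NL name given as (kind, value) slots, adjusting the head
# noun's number and dropping the indefinite article in the plural.
def realize_name(slots, number="sg"):
    words = []
    for kind, value in slots:
        if kind == "head":
            words.append(LEXICON[value][number])
        elif kind == "article":
            if number == "sg":          # "an" has no plural counterpart
                words.append(value)
        else:                            # adjectives, prepositions, strings
            words.append(value)
    return " ".join(words)

# Roughly the ItalianWinePiemonte example from the text; the slots after
# the preposition are treated as fixed material for simplicity.
name = [("article", "an"), ("adj", "Italian"), ("head", "wine"),
        ("prep", "from"), ("string", "the"), ("string", "Piemonte"),
        ("string", "region")]
```

Realizing `name` in the singular yields "an Italian wine from the Piemonte region"; in the plural, the head is inflected and the article dropped, yielding "Italian wines from the Piemonte region".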

Sentence Aggregation
The sentence plans of the previous section lead to a separate sentence for each message triple. NLG systems often aggregate sentences into longer ones to improve readability. In NaturalOWL, the maximum number of sentences that can be aggregated to form a single longer sentence is specified per user type via a parameter called maxMessagesPerSentence. In the museum contexts our system was originally developed for, setting maxMessagesPerSentence to 3 or 4 led to reasonable texts for adult visitors, whereas a value of 2 was used for children. The sentence aggregation of NaturalOWL is performed by a set of manually crafted rules, intended to be domain-independent. We do not claim that this set of rules, which was initially based on the aggregation rules of M-PIRO (Melengoglou, 2002), is complete, and we hope it will be extended in future work; see, for example, the work of Dalianis (1999) for a rich set of aggregation rules.20 Nevertheless, the current rules of NaturalOWL already illustrate several aggregation opportunities that arise when generating texts from OWL ontologies.
To save space, we discuss only English sentence aggregation; Greek aggregation is similar. We mostly show example sentences before and after aggregation, but the rules actually operate on sentence plans and they also consider the message triples being expressed. The rules are intended to aggregate short single-clause sentences. Sentence plans that produce more complicated sentences may be flagged (using the tickbox at the bottom of Figure 5) to signal that aggregation should not affect their sentences. The aggregation rules apply almost exclusively to sentences that are adjacent in the ordering produced by the text planner; the only exceptions are aggregation rules that involve sentences about cardinality restrictions. Hence, depending on the ordering of the text planner, there may be more or fewer aggregation opportunities; see the work of Cheng and Mellish (2000) for related discussion. Also, the aggregation rules of NaturalOWL operate on sentences of the same topical section, because aggregating topically unrelated sentences often sounds unnatural.
The aggregation of NaturalOWL is greedy. For each of the rules discussed below, starting from those discussed first, the system scans the original (ordered) sentences from first to last, applying the rule wherever possible, provided that the rule's application does not lead to a sentence expressing more than maxMessagesPerSentence original sentences. If a rule can be applied in multiple ways, for example to aggregate two or three sentences, the application that aggregates the most sentences without violating maxMessagesPerSentence is preferred.

Avoid repeating a noun with multiple adjectives: Message triples of the form ⟨S, P, O1⟩, ..., ⟨S, P, On⟩ will have been aggregated into a single message triple ⟨S, P, and(O1, ..., On)⟩. If the NL names of O1, ..., On are, apart from possible initial determiners, sequences of adjectives followed by the same head noun, then the head noun does not need to be repeated. Let us consider the following message triple. Assuming that the NL names of the three periods are as in the first sentence below, the original sentence will repeat "period" three times. The aggregation rule omits all but the last occurrence of the head noun.
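The head-noun omission can be sketched as follows. This is our own toy rendering of the rule, with NL names simplified to (article, adjectives, head) tuples; the real rule operates on sentence plans and full NL name structures.

```python
# Sketch of the "avoid repeating a noun with multiple adjectives" rule:
# if all conjuncts share the same head noun, keep only its last occurrence.
def aggregate_heads(names):
    heads = {head for _, _, head in names}
    if len(heads) != 1:
        return None                        # rule does not apply
    parts = []
    for i, (article, adjs, head) in enumerate(names):
        phrase = " ".join(filter(None, [article] + adjs))
        if i == len(names) - 1:            # keep the head only at the end
            phrase += " " + head
        parts.append(phrase)
    return parts

phrases = aggregate_heads([("the", ["archaic"], "period"),
                           ("the", ["classical"], "period"),
                           ("the", ["hellenistic"], "period")])
```

Joining the resulting phrases with commas and "and" yields "the archaic, the classical and the hellenistic period" instead of repeating "period" three times.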
Cardinality restrictions and values: This is a set of rules that aggregate all the sentences (not necessarily adjacent) that express message triples of the form ⟨S, M(P), O⟩ and ⟨S, P, O⟩, for the same S and P, with M being any of minCardinality, maxCardinality, exactCardinality.
When these rules are applied, maxMessagesPerSentence is ignored. For example, these rules perform aggregations like the following.

Class and prepositional phrase: The second sentence now involves the verb "to be" in the active simple present, immediately followed by a preposition; the other conditions are as in the previous rule. The subject and verb of the second sentence are omitted.
Bancroft Chardonnay is a kind of Chardonnay. It is from Bancroft. ⇒ Bancroft Chardonnay is a kind of Chardonnay from Bancroft.
Class and multiple adjectives: This rule aggregates (i) a sentence of the same form as in the previous two rules, and (ii) one or more immediately preceding or subsequent sentences, each expressing a single message triple ⟨S, Pi, Oi⟩, for the same S, where the Pi are (unmodified) properties of the ontology. Each of the preceding or subsequent sentences must involve the verb "to be" in the active simple present, immediately followed by only an adjective. The adjectives are absorbed into sentence (i), maintaining their order.
This is a motorbike. It is red. It is expensive. ⇒ This is a red, expensive motorbike.
Same verb conjunction/disjunction: In a sequence of sentences involving the same verb form, each expressing a single message triple ⟨S, Pi, Oi⟩, where S is the same in all the triples and the Pi are (unmodified) properties of the ontology, a conjunction can be formed by mentioning the subject and verb once. The "and" is omitted when a preposition follows. A similar rule applies to sentences produced from disjunctions of message triples, as illustrated below. A variant of the first aggregation rule is then also applied.
The house wine has strong flavor or it has medium flavor. ⇒ The house wine has strong flavor or medium flavor. ⇒ The house wine has strong or medium flavor.
Different verbs conjunction: When there is a sequence of sentences, not involving the same verb form, each expressing a message triple ⟨S, Pi, Oi⟩, where S is the same in all the triples and the Pi are (unmodified) properties of the ontology, a conjunction can be formed:

Bancroft Chardonnay is dry. It has moderate flavor. It comes from Napa. ⇒ Bancroft Chardonnay is dry, it has moderate flavor, and it comes from Napa.
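At the string level, the same-verb conjunction/disjunction rule amounts to mentioning the subject and verb once and conjoining the objects. The sketch below shows only this surface effect; the actual rule operates on sentence plans and message triples, and the function name is an illustration, not NaturalOWL's API.

```python
def conjoin_same_verb(subject, verb, objects, connective="and"):
    """Sketch of the same-verb rule: the subject and verb are mentioned
    once, and the objects are conjoined with the given connective
    ("and" for conjunctions, "or" for disjunctions)."""
    if len(objects) == 1:
        return f"{subject} {verb} {objects[0]}."
    head = ", ".join(objects[:-1])
    return f"{subject} {verb} {head} {connective} {objects[-1]}."
```

Applied to the house wine example, `conjoin_same_verb("The house wine", "has", ["strong flavor", "medium flavor"], "or")` yields the intermediate sentence "The house wine has strong flavor or medium flavor."; the first aggregation rule would then shorten it further.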

Generating Referring Expressions
A sentence plan may require a referring expression to be generated for the S of a message triple ⟨S, P, O⟩. Depending on the context, it may be better, for example, to use the nl name of S (e.g., "the Stoa of Zeus Eleutherios"), a pronoun (e.g., "it"), a demonstrative noun phrase (e.g., "this stoa") etc. Similar alternatives could be made available for O, but Naturalowl currently uses O itself, if it is a datatype value; or the nl name of O, its tokenized identifier, or its rdfs:label, if O is an entity or class; and similarly for conjunctions and disjunctions in O. Hence, below we focus only on referring expressions for S.
Naturalowl currently uses a limited range of referring expressions, which includes only nl names (or tokenized identifiers or rdfs:label strings), pronouns, and noun phrases involving only a demonstrative and the nl name of a class (e.g., "this vase"). For example, referring expressions that mention properties of S (e.g., "the vase from Rome") are not generated. Although the current referring expression generation mechanisms of Naturalowl work reasonably well, they are best viewed as placeholders for more elaborate algorithms (Krahmer & van Deemter, 2012), especially algorithms based on description logics (Areces, Koller, & Striegnitz, 2008; Ren, van Deemter, & Pan, 2010).
Let us consider the following generated text, which expresses the triples ⟨Si, Pi, Oi⟩ shown below. We do not aggregate sentences in this section, to illustrate more cases where referring expressions are needed; aggregation would, however, reduce the number of pronouns, making the text less repetitive. For readers familiar with ct (Section 3.1.2), we show again in italics the noun phrase realizing Cp(un), we show underlined the noun phrase realizing Cb(un), and we mark nocb transitions with bullets.
Note that with both referents, the transition from sentence 6 to 7 is a continue; hence, transition type preferences play no role. The gender of each generated pronoun is the gender of the (most appropriate) nl name of the S that the pronoun realizes. If S does not have an nl name, Naturalowl uses the gender of the (most appropriate) nl name of the most specific class that includes S and has an nl name (or one of these classes, if there are many). nl names can also be associated with sets of genders, which give rise to pseudo-pronouns like "he/she"; this may be desirable in the nl name of a class like Person.
With some individuals or classes, we may not wish to use nl names, nor tokenized identifiers or rdfs:label strings. This is common, for example, in museum ontologies, where some exhibits are known by particular names, but many other exhibits are anonymous and their owl identifiers are not particularly meaningful. Naturalowl allows the domain author to mark individuals and classes as anonymous, to indicate that their nl names, tokenized identifiers, and rdfs:label strings should be avoided. When the primary target is marked as anonymous, Naturalowl uses a demonstrative noun phrase (e.g., "this statue") to refer to it. The demonstrative phrase involves the nl name of the most specific class that subsumes the primary target, has an nl name, and has not been marked as anonymous. Especially in sentences that express isA or instanceOf message triples about the primary target, the demonstrative phrase is simply "this", to avoid generating sentences like "This statue is a statue". The marking of anonymous individuals and classes currently affects only the referring expressions of the primary target.
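The selection policy described in this section can be sketched as a simple decision procedure. The field names (`anonymous`, `class_name`, `nl_name`) are illustrative, and the real mechanism also uses centering-style transition preferences and the genders of nl names; this is only a simplified reading of the text above.

```python
import re


def tokenize_identifier(owl_id):
    """Split an OWL identifier such as 'BancroftChardonnay' into words."""
    return " ".join(re.findall(r"[A-Z]+(?![a-z])|[A-Z]?[a-z]+|\d+", owl_id))


def referring_expression(entity, previous_focus=None):
    """Simplified choice of a referring expression for the subject S:
    a pronoun if S was also the focus of the previous sentence, a
    demonstrative noun phrase if S is marked anonymous, and otherwise
    S's NL name or a name tokenized from its OWL identifier."""
    if previous_focus == entity["id"]:
        return {"masculine": "he", "feminine": "she"}.get(entity.get("gender"), "it")
    if entity.get("anonymous"):
        return "this " + entity["class_name"]
    return entity.get("nl_name") or tokenize_identifier(entity["id"])
```

For an anonymous exhibit of class "statue", this yields "this statue" on first mention and "it" when the exhibit was the focus of the previous sentence.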

Surface Realization
In many nlg systems, the sentences at the end of micro-planning are underspecified; for example, the order of their constituents or the exact forms of their words may be unspecified. Large-scale grammars or statistical models can then be used to fill in the missing information during surface realization, as already discussed (Section 3.2.1). By contrast, in Naturalowl (and most template-based nlg systems) the (ordered and aggregated) sentence plans at the end of micro-planning already completely specify the surface (final) form of each sentence. Hence, the surface realization of Naturalowl is mostly a process of converting internal, but fully specified and ordered, sentence specifications to the final text. Punctuation and capitalization are also added. Application-specific markup (e.g., html tags, hyperlinks) or images can also be added by modifying the surface realization code of Naturalowl.
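Because the sentence plans are already fully specified, the final realization step reduces to something like the following sketch (an assumed simplification: the real code also handles sentence-internal punctuation, markup, and Greek).

```python
def realize(tokens):
    """Final realization sketch: join the fully specified tokens of a
    sentence plan, then add capitalization and sentence-final punctuation."""
    text = " ".join(t for t in tokens if t)
    text = text[0].upper() + text[1:]
    if not text.endswith((".", "!", "?")):
        text += "."
    return text
```

For instance, `realize(["this", "is", "a", "red", "vase"])` produces "This is a red vase."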

Trials
In our previous work, Naturalowl was used mostly to describe cultural heritage objects. In the xenios project, it was tested with an owl version of an ontology that was created during m-piro to document approximately 50 archaeological exhibits (Androutsopoulos et al., 2007). The owl version comprised 76 classes, 343 individuals (including cities, persons etc.), and 41 properties. In xenios, Naturalowl was also embedded in a robotic avatar that presented the exhibits of m-piro in a virtual museum (Oberlander, Karakatsiotis, Isard, & Androutsopoulos, 2008). More recently, in the indigo project, Naturalowl was embedded in mobile robots acting as tour guides in an exhibition about the ancient Agora of Athens. An owl ontology documenting 43 monuments was used; there were 49 classes, 494 individuals, and 56 properties in total.
In xenios and indigo, the texts of Naturalowl were eventually indistinguishable from human-authored texts. We participated, however, in the development of the ontologies, and we may have biased them towards choices (e.g., classes, properties) that made it easier for Naturalowl to generate high-quality texts. Hence, in the trials discussed below, we wanted to experiment with independently developed ontologies. We also wanted to experiment with domains other than cultural heritage.
A further goal was to compare the texts of Naturalowl against those of a simpler verbalizer. We used the owl verbalizer of the swat project (Stevens et al., 2011; Williams, Third, & Power, 2011), which we found to be particularly robust and useful. The verbalizer produces an alphabetical glossary with an entry for each named class, property, and individual, without requiring domain-dependent generation resources. Each glossary entry is a sequence of English-like sentences expressing the corresponding owl statements of the ontology. The swat verbalizer uses a predetermined partial order of statements in each glossary entry; for example, when describing a class, statements about equivalent classes or super-classes are mentioned first, and individuals belonging in the target class are mentioned last. The verbalizer actually translates the owl ontology to Prolog, extracts lexicon entries from owl identifiers and rdfs:label strings, and uses predetermined sentence plans specified as a dcg grammar. It also aggregates, in effect, message triples of the same property that share one argument (S or O) (Williams & Power, 2010).
Our hypothesis was that the domain-dependent generation resources would help Naturalowl produce texts that end-users would consider more fluent and coherent, compared to those produced by the swat verbalizer, but also to those produced by Naturalowl without domain-dependent generation resources. We also wanted to demonstrate that high-quality texts could be produced in both English and Greek, and to measure the effort required to create the domain-dependent generation resources of Naturalowl for existing ontologies. This effort had not been measured in our previous work, because the development of the domain-dependent generation resources was combined with the development of the ontologies. Since the time needed to create the domain-dependent generation resources depends on one's familiarity with Naturalowl and its Protégé plug-in, exact times are not particularly informative. Instead, we report figures such as the number of sentence plans, lexicon entries etc. that were required, along with approximate times. We do not evaluate the usability of the Protégé plug-in of Naturalowl, since it is very similar to the authoring tool of m-piro. Previous experiments (Androutsopoulos et al., 2007) showed that computer science graduates with no expertise in nlg could learn to use the authoring tool of m-piro effectively to create the necessary domain-dependent generation resources for existing or new ontologies, after receiving the equivalent of a full-day introductory course.

Trials with the Wine Ontology
In the first trial, we experimented with the Wine Ontology, which is often used in Semantic Web tutorials. 26 It comprises 63 wine classes, 52 wine individuals, a total of 238 classes and individuals (including wineries, regions, etc.), and 14 properties.
We submitted the Wine Ontology to the swat verbalizer to obtain its glossary of English-like descriptions of classes, properties, and individuals. We retained only the descriptions of the 63 wine classes and the 52 wine individuals. Subsequently, we also discarded 20 of the 63 wine class descriptions, as they were for trivial classes (e.g., RedWine) and they were stating the obvious (e.g., "A red wine is defined as a wine that has as color Red"). 27 In the descriptions of the remaining 43 wine classes and 52 wine individuals, we discarded sentences expressing axioms that Naturalowl does not consider, for example sentences providing examples of individuals that belong in a class being described. The remaining sentences express the same owl statements that Naturalowl expresses when its maximum fact distance is set to 1. Two examples of texts produced by the swat verbalizer follow.
Chenin Blanc (class): A chenin blanc is defined as something that is a wine, is made from grape the Chenin Blanc Grape, and is made from grape at most one thing. A chenin blanc both has as flavor Moderate, and has as color White. A chenin blanc both has as sugar only Off Dry and Dry, and has as body only Full and Medium.
The Foxen Chenin Blanc (individual): The Foxen Chenin Blanc is a chenin blanc. The Foxen Chenin Blanc has as body Full. The Foxen Chenin Blanc has as flavor Moderate. The Foxen Chenin Blanc has as maker Foxen. The Foxen Chenin Blanc has as sugar Dry. The Foxen Chenin Blanc is located in the Santa Barbara Region.
Subsequently, we generated texts for the 43 classes and 52 individuals using Naturalowl without domain-dependent generation resources, hereafter called Naturalowl(−), setting the maximum fact distance to 1; the resulting texts were very similar to swat's.
We then constructed the domain-dependent generation resources of Naturalowl for the Wine Ontology. The resources are summarized in Table 4. They were constructed by the second author, who devoted three days to their construction, testing, and refinement. 28 Our experience is that it takes weeks (if not longer) to develop an owl ontology the size of the Wine Ontology (acquire domain knowledge, formulate the axioms in owl, check for inconsistencies, populate the ontology with individuals etc.); hence, a period of a few days is relatively light effort, compared to the time needed to develop an owl ontology of this size.

26. See http://www.w3.org/TR/owl-guide/wine.rdf.
27. Third (2012) discusses how owl axioms leading to undesirable sentences of this kind might be detected.
28. Some of the resources were constructed by editing their owl representations directly, rather than using the Protégé plug-in, which was not fully functional at that time. By using the now fully functional plug-in, the time to create the domain-dependent generation resources would have been shorter.

Only English texts were generated in this trial; hence, no Greek resources were constructed. We defined only one user type, and we used interest scores only to block sentences stating the obvious, by assigning zero interest scores to the corresponding message triples; we also set maxMessagesPerSentence to 3. Only 7 of the 14 properties of the Wine Ontology are used in the owl statements that describe the 43 classes and 52 individuals. We defined only 5 sentence plans, as some of the 7 properties could be expressed by the same sentence plans. We did not define multiple sentence plans per property. We also assigned the 7 properties to 2 sections, and ordered the sections and properties. We created nl names only when the automatically extracted ones were causing disfluencies. The extracted nl names were obtained from the owl identifiers of classes and individuals; no rdfs:label strings were available. To reduce the number of manually constructed nl names further, we declared the 52 individual wines to be anonymous (and provided no nl names for them). Most of the 67 lexicon entries were used in the remaining 41 nl names of classes and individuals; the nl names were very simple, having 2 slots on average.

We used Naturalowl with the domain-dependent resources, hereafter called Naturalowl(+), to re-generate the 95 texts, again setting the maximum fact distance to 1; example texts follow.

The resulting 285 texts (95 × 3) of the three systems (swat verbalizer, Naturalowl(−), Naturalowl(+)) were shown to 10 computer science students (both undergraduates and graduate students), who were not involved in the development of Naturalowl; they were all fluent in English, though not native English speakers, and they did not consider themselves wine experts. The students were told that a glossary of wines was being developed for people who were interested in wines and knew basic wine terms (e.g., wine colors, wine flavors), but who were otherwise not wine experts. Each one of the 285 texts was given to exactly one student. Each student was given approximately 30 texts, approximately 10 randomly selected texts from each system. The owl statements that the texts were generated from were not shown, and the students did not know which system had generated each text. Each student was shown all of his/her texts in random order, regardless of the system that had generated them. The students were asked to score each text by stating how strongly they agreed or disagreed with statements S1–S5 below. A scale from 1 to 3 was used (1: disagreement, 2: ambivalent, 3: agreement).

(S1) Sentence fluency: The sentences of the text are fluent, i.e., each sentence on its own is grammatical and sounds natural. When two or more smaller sentences are combined to form a single, longer sentence, the resulting longer sentence is also grammatical and sounds natural.

(S2) Referring expressions: The use of pronouns and other referring expressions (e.g., "this wine") is appropriate. The choices of referring expressions (e.g., when to use a pronoun or other expression instead of the name of an object) sound natural, and it is easy to understand what these expressions refer to.

(S3) Text structure: The order of the sentences is appropriate. The text presents information by moving reasonably from one topic to another.
(S4) Clarity: The text is easy to understand, provided that the reader is familiar with basic wine terms.
(S5) Interest: People interested in wines, but who are not wine experts, would find the information interesting. Furthermore, there are no redundant sentences in the text (e.g., sentences stating the obvious).

S5 assesses content selection, the first processing sub-stage; we expected the differences across the three systems to be very small, as they all reported the same information, with the exception of redundant sentences blocked by the zero interest assignments in Naturalowl. S3 assesses text planning, the second sub-stage; again we expected small differences, as many of the wine properties can be mentioned in any order, though there are some properties (e.g., maker, location) that are most naturally reported separately from others (e.g., color, flavor), which is why we used two sections (Table 4). S1 assesses lexicalization and aggregation; we decided not to use separate statements for these two stages, since it might have been difficult for the students to understand exactly when aggregation takes place. S2 assesses referring expression generation. S4 measures the overall perceived clarity of the texts. There was no statement for surface realization, as this stage had a rather trivial effect.
Table 5 shows the average scores of the three systems, with averages computed over the 95 texts of each system, along with 95% confidence intervals (of sample means). For each criterion, the best score is shown in bold; the confidence interval of the best score is also shown in bold if it does not overlap with the other confidence intervals. As expected, the domain-dependent generation resources clearly help Naturalowl produce more fluent sentences and much better referring expressions. The text structure scores show that the assignment of the ontology's properties to sections and the ordering of the sections and properties had a greater impact on the perceived structure of the texts than we expected. The highest score of the swat verbalizer was obtained in the clarity criterion, which agrees with our experience that one can usually understand what the texts of the swat verbalizer mean, even if their sentences are often not entirely fluent, not particularly well ordered, and keep repeating proper names. Naturalowl(+) had the highest clarity score, but the difference from the swat verbalizer, which had the second highest score, is not statistically significant. Naturalowl(+) also obtained higher interest scores than the other two systems, with statistically significant differences from both; these differences, which are larger than we expected, can only be attributed to the zero interest score assignments of the domain-dependent generation resources, which blocked sentences stating the obvious, because otherwise all three systems report the same information.
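A confidence interval of a sample mean, as reported in the tables, can be computed as in the sketch below. The paper does not state the exact interval formula it used, so the normal approximation (z = 1.96) here is an assumption for illustration only.

```python
import math


def mean_ci95(scores):
    """Sample mean and a 95% confidence interval for it, under the
    normal approximation (z = 1.96); an illustrative formula, not
    necessarily the one used in the paper's tables."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    half = 1.96 * math.sqrt(var / n)  # half-width of the interval
    return mean, (mean - half, mean + half)
```

Two intervals "overlap" in the sense used above when the upper bound of one exceeds the lower bound of the other.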
The swat verbalizer obtained higher scores than Naturalowl(−), with the text structure score being the only exception. Only the difference in the referring expression scores of the two systems, though, is statistically significant. Both systems, however, received particularly low scores for their referring expressions, which is not surprising, given that they both always refer to individuals and classes by extracted names; the slightly higher score of the swat verbalizer is probably due to its better tokenization of owl identifiers.

Trials with the Consumer Electronics Ontology
In the second trial, we experimented with the Consumer Electronics Ontology, an owl ontology for consumer electronics products and services. 31 The ontology comprises 54 classes and 441 individuals (e.g., printer types, paper sizes, manufacturers), but no information about particular products. We added 60 individuals describing 20 digital cameras, 20 camcorders, and 20 printers. The 60 individuals were randomly selected from a publicly available dataset of 286 digital cameras, 613 camcorders, and 58 printers, whose instances comply with the Consumer Electronics Ontology. 32

We submitted the Consumer Electronics Ontology with the additional 60 individuals to the swat verbalizer, and retained only the descriptions of the 60 individuals. Again, we removed sentences expressing axioms Naturalowl does not consider. We also renamed the string values of some datatype properties to make the texts easier to understand (e.g., "cmt" became "cm"). An example description follows.
The Sony Cyber-shot DSC-T90 is a digital camera.
The Sony Cyber-shot DSC-T90 has as manufacturer Sony.
The Sony Cyber-shot DSC-T90 has as data interface type Usb2 0.
The Sony Cyber-shot DSC-T90 has as depth Depth. Depth has as unit of measurement cm. Depth has as value float 9.4.
The Sony Cyber-shot DSC-T90 has as digital zoom factor the Digital Zoom Factor. The Digital Zoom Factor has as value float 12.1.
[...]
The Sony Cyber-shot DSC-T90 has as feature Video Recording, Microphone and the Automatic Picture Stabilizer.
The Sony Cyber-shot DSC-T90 has as self timer true.
[...]

In this ontology, many properties have composite values, expressed by using auxiliary individuals. In the example above, a property (hasDepth) connects the digital camera to an auxiliary individual Depth (similar to the anonymous node :n of the property concatenation price example of page 691), which is then connected via two other properties (hasValueFloat and hasUnitOfMeasurement) to the float value 9.4 and the unit of measurement (centimeters), respectively. We obtained the descriptions of the auxiliary individuals (e.g., Depth), which are different entries in the glossary of the swat verbalizer, and we copied them immediately after the corresponding sentences that introduce the auxiliary individuals. We also formatted each text as a list of sentences, as above, to improve readability.

31. Consult http://www.ebusiness-unibw.org/ontologies/consumerelectronics/v1.
32. See http://rdf4ecommerce.esolda.com/ for the dataset that we used. A list of similar datasets is available at http://wiki.goodrelations-vocabulary.org/Datasets.
We then generated texts for the 60 products using Naturalowl(−), setting the maximum fact distance to 1. Descriptions of auxiliary individuals were also generated and copied immediately after the sentences introducing them.The texts were very similar to those of the swat verbalizer, and they were formatted in the same manner.
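The composite-value pattern of the Consumer Electronics Ontology (a property pointing to an auxiliary individual that carries a float value and a unit of measurement) can be sketched with plain triples as follows; the individual name `Depth1` is a hypothetical stand-in for the auxiliary node.

```python
# (subject, property, object) triples sketching the hasDepth example.
triples = [
    ("DSC-T90", "hasDepth", "Depth1"),
    ("Depth1", "hasValueFloat", 9.4),
    ("Depth1", "hasUnitOfMeasurement", "cm"),
]


def composite_value(triples, subject, prop):
    """Follow a property to its auxiliary individual and collect that
    individual's float value and unit, yielding e.g. '9.4 cm'."""
    aux = next(o for s, p, o in triples if s == subject and p == prop)
    value = next(o for s, p, o in triples if s == aux and p == "hasValueFloat")
    unit = next(o for s, p, o in triples
                if s == aux and p == "hasUnitOfMeasurement")
    return f"{value} {unit}"
```

Flattening the auxiliary individual in this way is, in effect, what the sentence plans of Naturalowl(+) achieve when they retrieve the additional message triples about the auxiliary individuals automatically.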
In this trial, we also wanted to consider a scenario where the set of individuals to be described changes frequently (e.g., the products sold by a reseller change, new products arrive etc.) along with changes in other connected individuals (e.g., new manufacturers may be added), but nothing else in the ontology changes, i.e., only the assertional knowledge changes. In this case, it may be impractical to update the domain-dependent generation resources whenever the population of individuals changes. Our hypothesis was that by considering a sample of individuals of the types to be described (printers, cameras, camcorders, in our case), it would be possible to construct domain-dependent generation resources (e.g., sections, the ordering of sections and properties, sentence plans, the nl names of classes) that would help Naturalowl generate reasonably good descriptions of new (unseen) individuals (products), without updating the domain-dependent generation resources, using the tokenized owl identifiers or rdfs:label strings of the new individuals as their nl names.
To simulate this scenario, we randomly split the 60 products into two non-overlapping sets, the development set and the test set, each consisting of 10 digital cameras, 10 camcorders, and 10 printers. Again, the second author constructed and refined the domain-dependent generation resources of Naturalowl, this time by considering a version of the ontology that included the 30 development products, but not the 30 test products, and by viewing the generated texts of the 30 development products only. This took approximately six days (for two languages). 33 Hence, relatively light effort was again needed, compared to the time it typically takes to develop an ontology of this size, with terminology in two languages. Texts for the 30 products of the test set were then also generated by using Naturalowl and the domain-dependent generation resources of the development set.
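A random split of this kind (equal numbers of each product type in each half) can be sketched as follows; the `type` field and function name are illustrative, not part of the ontology.

```python
import random


def stratified_split(products, per_type, seed=0):
    """Randomly split products into non-overlapping development and test
    sets with `per_type` products of each type in each set, as in the
    60-product split into 30 + 30 described above (a sketch only)."""
    rng = random.Random(seed)
    by_type = {}
    for p in products:
        by_type.setdefault(p["type"], []).append(p)
    dev, test = [], []
    for items in by_type.values():
        rng.shuffle(items)
        dev.extend(items[:per_type])
        test.extend(items[per_type:2 * per_type])
    return dev, test
```

With 20 cameras, 20 camcorders, and 20 printers and `per_type=10`, this yields two disjoint sets of 30 products each.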
As in the previous trial, we defined only one user type, and we used interest scores only to block sentences stating the obvious. The maximum messages per sentence was again 3. We constructed domain-dependent generation resources for both English and Greek; the resources are summarized in Table 6. We created sentence plans only for the 42 properties of the ontology that were used in the development set (one sentence plan per property); the test set uses two additional properties, for which the default sentence plans of Naturalowl (for English and Greek) were used. We also assigned the 42 properties to 6 sections, and ordered the sections and properties. We created nl names only when the automatically extracted ones were causing disfluencies in the development texts. Unlike the previous trial, the products to be described were not declared to be anonymous individuals, but the number of nl names that had to be provided was roughly the same as in the previous trial, since fewer automatically extracted names were causing disfluencies; in particular, all the products had reasonably good rdfs:label strings providing their English names. An example description from the development set produced by Naturalowl(+) follows.

33. Again, some of the domain-dependent generation resources were constructed by editing their owl representations. As a test, the second author later reconstructed the domain-dependent generation resources from scratch using the fully functional Protégé plug-in, this time in four days.
We formatted the sentences of each section as a separate paragraph, headed by the name of the section (e.g., "Other features:"); this was easy, because Naturalowl can automatically mark up the sections in the texts.The maximum fact distance was again 1, but the sentence plans caused Naturalowl to automatically retrieve additional message triples describing the auxiliary individuals at distance 1; hence, we did not have to retrieve this information manually, unlike the texts of the swat verbalizer and Naturalowl(−).

Type: Sony Cyber-shot DSC-T90 is a digital camera.

The 180 English texts that were generated by the three systems for the 30 development and 30 test products were shown to the same 10 students of the first trial. The students were now told that the texts would be used in on-line descriptions of products in the Web site of a retailer. Again, the owl statements that the texts were generated from were not shown to the students, and the students did not know which system had generated each text. Each student was shown 18 randomly selected texts, 9 for products of the development set (3 texts per system) and 9 for products of the test set (again 3 texts per system). Each student was shown all of his/her texts in random order, regardless of the system that had generated them. The students were asked to score the texts as in the previous trial.
Table 7 shows the results for the English texts of the development set. As in the previous trial, the domain-dependent generation resources clearly help Naturalowl produce much more fluent sentences, and much better referring expressions and sentence orderings. The text structure scores of the swat verbalizer and Naturalowl(−) are now much lower than in the previous trial, because there are now more message triples to express per individual and more topics, and the texts of these systems jump from one topic to another, making them look very incoherent; for example, a sentence about the width of a camera may be separated from a sentence about its height by a sentence about shutter lag. This incoherence may have also contributed to the much lower clarity scores of these two systems, compared to the previous trial. The interest scores of these two systems are also much lower than in the previous trial; this may be due to the verbosity of their texts, caused by their frequent references to auxiliary individuals in the second trial, combined with the lack (or very little use) of sentence aggregation and pronoun generation. By contrast, the clarity and interest of Naturalowl(+) were judged to be perfect; the poor clarity and interest of the other two systems may have contributed to these perfect scores, though. Again, the swat verbalizer obtained slightly better scores than Naturalowl without domain-dependent generation resources, except for clarity, but the differences are not statistically significant.

Table 8 shows the results for the English texts of the test set. The results of the swat verbalizer and Naturalowl(−) are very similar to those of Table 7, as one would expect. Also, there was only a very marginal decrease in the scores of Naturalowl(+), compared to the scores of the same system for the development set in Table 7. There is no statistically significant difference, however, between the corresponding cells of the two tables, for any of the three systems. These results support our hypothesis that by considering a sample of individuals of the types to be described one can construct domain-dependent generation resources that can be used to produce high-quality texts for new individuals of the same types, when the rest of the ontology remains unchanged. The fact that all the products (but not the other individuals) had rdfs:label strings providing their English names probably contributed to the high results of Naturalowl(+) in the test set, but rdfs:label strings of this kind are common in owl ontologies.
We then showed the 60 Greek texts that were generated by Naturalowl(+) to the same 10 students, who were native Greek speakers; the swat verbalizer and Naturalowl(−) cannot produce Greek texts. [...] and seem to introduce noise. There were very small differences in the scores for referring expressions and text structure, which seem to suggest that when the overall quality of the texts decreases, the judges are biased towards assigning lower scores in all of the criteria.

The third configuration was the same as the second one, but the component that generates pronouns and demonstrative noun phrases was disabled, causing Naturalowl to always use the nl names of the individuals and classes, or names extracted from the ontology. There was a big decrease in the score for referring expressions, showing that despite their simplicity, the referring expression generation methods of Naturalowl have a noticeable effect; we mark big decreases in italics in Table 10. The scores for sentence fluency, interest, and clarity were also affected, presumably because repeating the names of the individuals and classes made the sentences look less natural, boring, and more difficult to follow. There was almost no difference (a very small positive one) in the text structure score.
In the fourth configuration, the nl names of the individuals and classes were also removed, forcing Naturalowl to always use automatically extracted names. There was a further decrease in the score for referring expressions, but the decrease was small, because the referring expressions were already poor in the third configuration. Note, also, that the nl names are necessary for Naturalowl to produce pronouns and demonstrative noun phrases; hence, the higher referring expression score of the third configuration would not have been possible without the nl names. The sentence fluency and clarity scores were also affected in the fourth configuration, presumably because the automatically extracted names made the texts more difficult to read and understand. There were also small decreases in the scores for interest and even text structure, suggesting again that when the overall quality of the texts decreases, the judges are biased towards lower scores in all of the criteria.
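The fallback behaviour discussed above (pronouns and demonstratives first, then nl names, then automatically extracted names) can be sketched as follows; the function and data structures are illustrative simplifications, not NaturalOWL's actual implementation:

```python
def referring_expression(entity, prev_focus, nl_names, extracted_names):
    """Pick a referring expression for `entity` (illustrative sketch).

    Use a pronoun when the entity was the focus of the previous
    sentence; otherwise fall back to its hand-crafted NL name, and,
    failing that, to a name automatically extracted from its OWL
    identifier. Greek generation would additionally require gender
    and case agreement, which this sketch omits."""
    if entity == prev_focus:
        return "it"
    if entity in nl_names:
        return nl_names[entity]
    return extracted_names[entity]
```

Disabling the pronoun component corresponds to skipping the first branch; removing the nl names corresponds to skipping the second, which is why the third and fourth configurations degrade in the order observed.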
In the fifth configuration, aggregation was turned off, causing Naturalowl to produce a separate sentence for each message triple. With sentences sharing the same subject no longer being aggregated, more referring expressions for subjects had to be generated. Since the component that generates pronouns and demonstrative noun phrases had been switched off and the nl names had been removed, more repetitions of automatically extracted names had to be used, which is why the score for referring expressions decreased further. Sentence fluency was also affected, since some obvious aggregations were no longer being made, which made the sentences look less natural. There was also a small decrease in the scores for the perceived text structure and interest, but no difference in the score for clarity. Overall, the contribution of aggregation to the perceived quality of the texts seems to be rather small.
In the sixth configuration, all the sentence plans were removed, forcing Naturalowl to use the default sentence plan and tokenized property identifiers. There was a sharp decrease in sentence fluency and clarity, as one would expect, but also in the perceived interest of the texts. There was also a small decrease in the perceived text structure, and no difference in the score for referring expressions. Overall, these results indicate that sentence plans are a very important part of the domain-dependent generation resources.
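A default sentence plan of this kind can be sketched as below: the camelCase property identifier is split into tokens that serve as the verb phrase, with the subject and object names slotted around it. The function is an illustrative simplification, not NaturalOWL's exact fallback:

```python
import re

def default_sentence(subject_name, property_id, object_name):
    """Fallback sentence plan (illustrative sketch): tokenize a
    camelCase OWL property identifier and slot in the subject and
    object names around the resulting words."""
    tokens = re.findall(r"[A-Za-z][a-z]*", property_id)
    verb_phrase = " ".join(t.lower() for t in tokens)
    return f"{subject_name} {verb_phrase} {object_name}."

# e.g. default_sentence("Model 35", "isSoldIn", "Spain")
# → "Model 35 is sold in Spain."
```

Such texts are grammatical for simple properties but quickly become stilted (e.g., for properties like usedDuringPeriod), which is consistent with the sharp drop in fluency and clarity observed in this configuration.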
In the seventh configuration, the sections, assignments of properties to sections, and the ordering of sections and properties were removed, causing Naturalowl to produce random orderings of the message triples. There was a very sharp decrease in the score for text structure. The scores for the perceived interest, clarity, but also sentence fluency were also affected, again suggesting that when the overall quality of the texts decreases, the judges are biased towards lower scores in all of the criteria.
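The ordering resources removed in this configuration can be sketched as a two-level sort: message triples are grouped by the section their property is assigned to, sections are ordered, and properties are ordered within each section. The function and resource names below are illustrative assumptions, not NaturalOWL's actual data structures:

```python
def order_triples(triples, section_of, section_order, property_order):
    """Order (S, P, O) message triples first by the section their
    property P belongs to, then by the property's position within
    that section (illustrative sketch of the ordering resources)."""
    return sorted(
        triples,
        key=lambda t: (section_order.index(section_of[t[1]]),
                       property_order.index(t[1])),
    )
```

With these resources removed, the triples are emitted in arbitrary order, producing the incoherent topic-jumping texts penalized in the text structure scores.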
We conclude that the sections and ordering information of the domain-dependent generation resources are, along with the sentence plans, particularly important. We note, however, that the best scores were obtained by enabling all the components and using all the available domain-dependent generation resources.

Conclusions and Future Work
We provided a detailed description of Naturalowl, an open-source nlg system that produces English and Greek texts describing individuals or classes of owl ontologies. Unlike simpler verbalizers, which typically express a single axiom at a time in controlled, often not entirely fluent English primarily for the benefit of domain experts, Naturalowl aims to generate fluent and coherent multi-sentence texts for end-users in more than one language.
We discussed the processing stages of Naturalowl, the optional domain-dependent generation resources of each stage, as well as particular nlg issues that arise when generating from owl ontologies. We also presented trials we performed to measure the effort required to construct the domain-dependent generation resources and the extent to which they improve the resulting texts, also comparing against a simpler owl verbalizer that requires no domain-dependent generation resources and employs nlg methods to a lesser extent. The trials showed that the domain-dependent generation resources help Naturalowl produce significantly better texts, and that the resources can be constructed with relatively light effort, compared to the effort that is typically needed to develop an owl ontology.
Future work could compare the effort needed to construct the domain-dependent generation resources against the effort needed to manually edit the lower quality texts produced without domain-dependent generation resources. Our experience is that manually editing texts generated by a verbalizer (or Naturalowl(−)) is very tedious when there is a large number of individuals (e.g., products) of a few types to be described, because the editor has to repeat the same (or very similar) fixes. There may be, however, particular applications where post-editing the texts of a simpler verbalizer may be preferable.
We also aim to replace in future work the pipeline architecture of Naturalowl by a global optimization architecture that will consider all the nlg processing stages in parallel, to avoid greedy stage-specific decisions (Marciniak & Strube, 2005; Lampouras & Androutsopoulos, 2013a, 2013b). Finally, we hope to test Naturalowl with biomedical ontologies, such as the Gene Ontology and snomed.

Figure 1: The processing stages and sub-stages of Naturalowl.

The 2007 Chateau Teyssier is a member of the intersection of: (a) the class of wines, (b) the class of individuals from (not necessarily exclusively) the St. Emilion region, (c) the class of individuals that have (not necessarily exclusively) red color, (d) the class of individuals that have (not necessarily exclusively) strong flavor, (e) the class of individuals that are made exclusively from Cabernet Sauvignon grapes.

Figure 2: Graph view of message triples.
St. Emilion is a kind of Bordeaux. It is from the St. Emilion region. It has red color. It has strong flavor. It is made from Cabernet Sauvignon grape. It is made from at most one grape variety.

(1) This (exhibit) is an aryballos. (2) An aryballos is a kind of vase. (3) An aryballos was a small spherical vase with a narrow neck, in which the athletes kept the oil they spread their bodies with. • (4) This aryballos was found at the Heraion of Delos. (5) It was created during the archaic period. (6) The archaic period was when the Greek ancient city-states developed. (7) It spans from 700 bc to 480 bc. • (8) This aryballos was decorated with the black-figure technique. (9) In the black-figure technique, the silhouettes are rendered in black on the pale surface of the clay, and details are engraved. • (10) This aryballos is currently in the Museum of Delos.

{locationSection The Stoa of Zeus Eleutherios is located in the western part of the Agora. It is located next to the Temple of Apollo Patroos.} {buildSection It was built around 430 bc. It was built in the Doric style. It was built out of porous stone and marble.} {useSection It was used during the Classical period, the Hellenistic period, and the Roman period. It was used as a religious place and a meeting point.} {conditionSection It was destroyed in the late Roman period. It was excavated in 1891 and 1931. Today it is in good condition.}
The Stoa of Zeus Eleutherios was built in the Doric style. It was excavated in 1891 and 1931. It was built out of porous stone and marble. It is located in the western part of the Agora. It was destroyed in the late Roman period. It was used as a religious place and a meeting point. It is located next to the Temple of Apollo Patroos. It was built around 430 bc. Today it is in good condition. It was used during the Classical period, the Hellenistic period, and the Roman period.

Figure 3: The overall text planning algorithm of Naturalowl.

Figure 5: A sentence plan for the property usedDuringPeriod.
(vi) An expression for the O (filler) of the triple. If O is an individual or class, then the expression is the natural language name of O; if O is a datatype value (e.g., an integer), then the value itself is inserted in the slot; and similarly if O is a disjunction or conjunction of datatype values, individuals, or classes.
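The case analysis for the O slot can be sketched as follows; the function is an illustrative simplification (it handles conjunctions only, and the names involved are assumptions, not NaturalOWL's actual code):

```python
def filler_expression(obj, nl_names):
    """Render the O (filler) slot of a message triple (sketch):
    a natural language name for individuals and classes, the literal
    value itself for datatype values, and a comma/'and' list for a
    conjunction of fillers."""
    if isinstance(obj, list):  # conjunction of fillers
        parts = [filler_expression(o, nl_names) for o in obj]
        if len(parts) == 1:
            return parts[0]
        return ", ".join(parts[:-1]) + " and " + parts[-1]
    if obj in nl_names:        # individual or class
        return nl_names[obj]
    return str(obj)            # datatype value
```

A disjunction would be rendered analogously, with "or" in place of "and".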

(2) It was sculpted by Nikolaou. (3) Nikolaou was born in Athens. (4) He was born in 1918. (5) He died in 1998. • (6) Exhibit 7 is now in the National Gallery. (7) It is in excellent condition.
Main features: It has a focal length range of 35.0 to 140.0 mm, a shutter lag of 2.0 to 0.0010 sec and an optical zoom factor of 4.0. It has a digital zoom factor of 12.1 and its display has a diagonal of 3.0 in. Other features: It features an automatic picture stabilizer, a microphone, video recording and it has a self-timer. Energy and environment: It uses batteries. Connectivity, compatibility, memory: It supports USB 2.0 connections for data exchange and it has an internal memory of 11.0 GB. Dimensions and weight: It is 5.7 cm high, 1.5 cm wide and 9.4 cm deep. It weighs 128.0 grm.
The 2007 Chateau Teyssier is a wine from the St. Emilion region. It has red color and strong flavor. It is made from exactly one grape variety: Cabernet Sauvignon grapes.

Table 1: owl statements for an individual target, and the corresponding message triples.
St. Emilion is a kind of Bordeaux from the St. Emilion region. It has red color and strong flavor. It is made from exactly one grape variety: Cabernet Sauvignon grapes. Every St. Emilion has these properties, and anything that has these properties is a St. Emilion.

Table 2: owl statements for a class target, and the corresponding message triples.
Model 35 is sold in at most three countries. Model 35 is sold in at least three countries. Model 35 is sold in Spain, Italy, and Greece. ⇒ Model 35 is sold in exactly three countries: Spain, Italy, and Greece.
Class and passive sentence: This rule aggregates (i) a sentence expressing a message triple ⟨S, instanceOf, C⟩ or ⟨S, isA, C⟩ and (ii) a passive immediately subsequent sentence expressing a single triple of the form ⟨S, P, O⟩, for the same S, where P is an (unmodified) property of the ontology. The subject and auxiliary verb of the second sentence are omitted.
Bancroft Chardonnay is a kind of Chardonnay. It is made in Bancroft. ⇒ Bancroft Chardonnay is a kind of Chardonnay made in Bancroft.
It has medium body. It has moderate flavor. ⇒ It has medium body and moderate flavor. He was born in Athens. He was born in 1918. ⇒ He was born in Athens in 1918.
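The simplest of these aggregations, joining two sentences that share a subject and verb, can be sketched as below. This is an illustrative simplification of one aggregation rule, handling only the plain "and" case (it would not reproduce the "in Athens in 1918" example, which needs prepositional-phrase merging):

```python
def aggregate_pair(s1, s2):
    """Merge two sentences that share a leading subject + verb prefix
    (a simplified sketch of one NaturalOWL aggregation rule)."""
    w1, w2 = s1.rstrip(".").split(), s2.rstrip(".").split()
    i = 0
    while i < min(len(w1), len(w2)) and w1[i] == w2[i]:
        i += 1
    if i < 2:  # require at least a shared subject and verb
        return s1 + " " + s2
    return " ".join(w1) + " and " + " ".join(w2[i:]) + "."

# e.g. aggregate_pair("It has medium body.", "It has moderate flavor.")
# → "It has medium body and moderate flavor."
```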

Table 4: Domain-dependent generation resources created for the Wine Ontology.
Chenin Blanc (class): A Chenin Blanc is a moderate, white wine. It has only a full or medium body. It is only off-dry or dry. It is made from exactly one wine grape variety: Chenin Blanc grapes. The Foxen Chenin Blanc (individual): This wine is a moderate, dry Chenin Blanc. It has a full body. It is made by Foxen in the Santa Barbara County.

Table 5: Results for texts generated from the Wine Ontology by the swat verbalizer and Naturalowl with (+) and without (−) domain-dependent generation resources.

Table 6: Domain-dependent generation resources for the Consumer Electronics Ontology.

Table 7: English development results for the Consumer Electronics Ontology.

Table 8: English test results for the Consumer Electronics Ontology.

Table 9: Greek results for the Consumer Electronics Ontology.