Building Recursive-Descent Natural Language Generators

Fri, 22 Apr 2011 23:33:44 -0500

Tags: research, nlg, nlp


I am currently teaching a graduate-level course on Natural Language Generation, and as a result I have been neglecting this blog a little. I have now found a topic that scratches both itches: a straightforward way to build NLG systems for small-size projects.

In the same vein in which recursive-descent parsers are popular for writing quick parsers (quick to write, that is), simple NLG can be tackled by means of mutually recursive functions that closely mimic the generation grammar.

I have used this approach in the past for the ProGenIE generator (briefly sketched in slides 2-5 of the following PDF file; I should release the code at some point) and it is quite effective. While discussing with one of the students in the class, we realized that the approach can be extended beyond syntactic structures to simple semantic predicates.

The Approach

Here I will sketch the generator using syntax as input; I mention how to generalize this to semantic inputs below. In a recursive-descent sentence generator, each grammar rule corresponds to a function that receives as arguments whatever it needs to generate its output correctly, and returns both the generated string and a hash table (i.e., dictionary) of bound values: things that need to be known about the generated string, in the vein of the annotations employed in Concept-to-Speech systems.

Each of these functions will in turn implement the grammar by making calls to other grammar-symbols-turned-functions. The order of these calls is chosen so that constraints between the functions can be transferred correctly (and if no call order allows such constraints to be transferred correctly, then it is time to abandon recursive descent and move on to a full-fledged surface realizer). For example:

def s(data):
    np_gen = np(data["np"])
    vp_data = data["vp"].copy()
    vp_data["subj"] = np_gen[1]
    vp_gen = vp(vp_data)
    return (np_gen[0] + " " + vp_gen[0], { "np":np_gen[1], "vp":vp_gen[1]})

def np(data):
    if data["type"] == 'pron':
        if data["pers"] == 1:
            if data["num"] == 'sing':
                return ("I", data)
            else:
                return ("we", data)
        elif data["pers"] == 2:
            return ("you", data)
        elif data["pers"] == 3:
            if data["num"] == 'sing':
                if data["gen"] == 'masc':
                    return ("he", data)
                else:
                    return ("she", data)
            else:
                return ("they", data)
    # missing modifiers, determiner, etc.
    if data["num"] == "plural":
        return (data["head"] + "s", data)
    else:
        return (data["head"], data)

def vp(data):
    v_data = data["v"].copy()
    # pass the subject down, so the verb can handle agreement
    v_data["subj"] = data["subj"]
    v_gen = v(v_data)
    np_gen = np(data["np"])
    return (v_gen[0] + " " + np_gen[0], { "v":v_gen[1], "np":np_gen[1] })

def v(data):
    ret = data["head"]
    if data["subj"]["pers"] == 3 and data["subj"]["num"] == "sing":
        ret = ret + "s"
    return (ret, data)

print(s({ "np" : { "type" : "pron" , "pers" : 1, "num" : "sing" },
          "vp" : { "v" : { "head" : "eat" },
                   "np" : { "type" : "noun", "head" : "carrot", "num" : "plural" }}}))

Yes, the code above is not particularly fancy Python, and you could write a DSL that takes much better care of the data plumbing; it is here just for illustrative purposes.
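As a hedged sketch of what such a DSL could look like (the names `lex` and `seq` below are invented for illustration, not part of any library), one can factor the tuple plumbing out of the rule functions and keep only the grammar shape:

```python
# A minimal rule-combinator sketch; `lex` and `seq` are hypothetical
# names invented here, not part of any existing library.

def lex(word):
    """Leaf rule: emit a fixed word, pass the feature dict through."""
    return lambda data: (word, data)

def seq(*parts):
    """Sequence rule: generate each (key, rule) pair in order,
    concatenating the strings and collecting the feature dicts."""
    def gen(data):
        texts, feats = [], {}
        for key, rule in parts:
            text, f = rule(data[key])
            texts.append(text)
            feats[key] = f
        return (" ".join(texts), feats)
    return gen

# The s -> np vp rule from above, minus the hand-written plumbing:
s_rule = seq(("np", lex("I")), ("vp", lex("eat carrots")))
print(s_rule({"np": {}, "vp": {}}))
```

Constraint passing (like the subject-verb agreement above) would still need hand-written hooks, which is exactly the point where the plain mutually recursive functions remain easier to follow.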

Handling Semantic Inputs

Having a fully specified syntactic structure as input doesn't seem very useful for a generator; we want to be able to provide much more abstract inputs. For some problems, this too is possible using recursive-descent techniques.

def verbalize(list_of_lat_long_timestamp):
    # TODO group things by time and space:
    # a list of dictionaries, each containing a "group" feature (space or
    # time) plus "space" and "time" entries (one of which is an "interval",
    # the ungrouped one)
    list_of_messages = []
    ret_str = ""
    ret_data = {}
    for i, message in enumerate(list_of_messages):
        msg_gen = gen_message(message)
        ret_str = ret_str + msg_gen[0].capitalize() + ". "
        ret_data[i] = msg_gen[1]
    return (ret_str, ret_data)
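The grouping TODO could be filled in along these lines. This is only a rough sketch under stated assumptions: the distance threshold, the comparison on raw latitude/longitude degrees, and the exact message-dictionary fields are all my inventions, mirroring the structure the comments above describe.

```python
def group_by_space(points, max_dist=0.01):
    """Greedily group consecutive (lat, long, timestamp) points that stay
    within max_dist degrees of the group's first point, producing the
    space-grouped message dictionaries sketched above.  A rough sketch:
    real code would use a proper geodesic distance, not degree deltas."""
    groups = []
    for lat, lon, ts in points:
        if groups and abs(lat - groups[-1]["space"]["lat"]) < max_dist \
                  and abs(lon - groups[-1]["space"]["long"]) < max_dist:
            # same place: extend the time interval of the current group
            groups[-1]["time"]["to"] = {"type": "point", "time": ts}
        else:
            # new place: open a fresh group with a degenerate interval
            groups.append({"group": "space",
                           "space": {"lat": lat, "long": lon},
                           "time": {"type": "interval",
                                    "from": {"type": "point", "time": ts},
                                    "to": {"type": "point", "time": ts}}})
    return groups
```

Grouping by time instead of space is symmetric: cluster on timestamp gaps and let "space" carry the interval.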

def gen_message(data):
    if data["group"] == "space":
        location = data["space"]
        time_interval = data["time"].copy()
        loc_gen = gen_location(location)
        # in case there are dependencies
        time_interval["location"] = loc_gen[1]
        time_gen = gen_time(time_interval)
        # a fuller version would attach loc_gen and time_gen as PPs under the VP
        vp = { "v" : { "head" : "be", "time" : "past" } }
        return s({ "np" : { "type" : "pron" , "pers" : 1, "num" : "sing" },
                   "vp" : vp })
    else:
        location_interval = data["space"].copy()
        time = data["time"].copy()
        time_gen = gen_time(time)
        # in case there are dependencies
        location_interval["time"] = time_gen[1]
        loc_gen = gen_location(location_interval)
        # a fuller version would attach loc_gen and time_gen as PPs under the VP
        vp = { "v" : { "head" : "be", "time" : "past" } }
        return s({ "np" : { "type" : "pron" , "pers" : 1, "num" : "sing" },
                   "vp" : vp })

def gen_time(data):
    if data["type"] == "interval":
        from_gen = gen_time(data["from"])
        to_gen = gen_time(data["to"])
        # the texts (position 0) are not used downstream, only the PPs
        return ("from " + from_gen[0] + " to " + to_gen[0],
                {"pp": [ {"head" : "from", "np" : from_gen[1]["pp"]["np"]},
                         {"head" : "to",   "np" : to_gen[1]["pp"]["np"]} ]})
    else:
        # TODO time arithmetic, to generate "today, yesterday,
        # in the morning of day X", etc.
        return ("at " + data["time"],
                {"pp": {"head" : "at", "np" : {"head" : data["time"]}}})

# TODO gen_location is similar
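For completeness, a gen_location along the same lines might look as follows. This is a sketch mirroring gen_time: the reverse-geocoding step is left as a stub, and the lat/long field names are assumptions on my part.

```python
def gen_location(data):
    """Spatial counterpart of gen_time; a sketch with assumed field names."""
    if data.get("type") == "interval":
        from_gen = gen_location(data["from"])
        to_gen = gen_location(data["to"])
        # as with gen_time, the texts (position 0) are throwaway
        return ("from " + from_gen[0] + " to " + to_gen[0],
                {"pp": [{"head": "from", "np": from_gen[1]["pp"]["np"]},
                        {"head": "to",   "np": to_gen[1]["pp"]["np"]}]})
    else:
        # TODO reverse geocoding, to turn lat/long pairs into place names
        place = str(data["lat"]) + ", " + str(data["long"])
        return ("at " + place, {"pp": {"head": "at", "np": {"head": place}}})
```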

The above code (if completely fleshed out) is to be paired with the recorded location data to produce a nicely formatted, human-readable output, just in time for divorce court (for some hypothetical iPhone user; we use Android at home). And if you still prefer the original app using Google Maps, remember that it is difficult to convey the intricacies of timing with maps.

In the example above, the functions that generate semantic predicates are implemented as generally as possible: they produce both a string and a full-fledged syntactic form that can be fed to the syntax-based recursive-descent generator of the previous section. For this particular application, full syntax might be overkill, and just using the functions generating semantic predicates plus simple templates might be enough.
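For instance, a template-only version of a space-grouped message could skip the syntactic grammar entirely. The template string and the "name" field below are just illustrations of the idea, not part of the code above:

```python
def gen_message_template(data):
    """Template-only realization of a space-grouped message; a sketch that
    assumes a pre-geocoded "name" field and point-valued time bounds."""
    loc_text = "at " + data["space"]["name"]
    time_text = "from %s to %s" % (data["time"]["from"], data["time"]["to"])
    return "I was %s %s" % (loc_text, time_text)

print(gen_message_template({"space": {"name": "home"},
                            "time": {"from": "9am", "to": "11am"}}))
# prints: I was at home from 9am to 11am
```

The trade-off is the usual one: templates are faster to write and easier to debug, but every new aggregation or referring-expression decision pushes you back toward the grammar-based functions.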

Similar Systems in NLG

An earlier example of this approach is Erik T. Mueller's generator in his DAYDREAMER system; its code is still available (and distributed under the GPLv2!) at the CMU AI Repository. The most relevant work is that of Takashi Miyata and Yuji Matsumoto, in their paper Natural Language Generation for Legal Expert System and Visualization of Generation Process (1998). All in all, most techniques in Semantic-Head-Driven Generation are relevant here.

