Lessons about the very basics of modeling grammar, building on your understanding of words and sentences.

  1. Language, Grammar, Rules
  2. Parse Trees & X-Bar Theory
  3. Ambiguity & Other Models

Language, Grammar, Rules

This topic is for anyone interested in syntax, especially anyone who's still interested in syntax after going through the grammar of sentences series. I will present a short overview of computational grammars and generative syntax. I'll spend some time exploring what simple sentences tell us about human language grammar, take a look at the now-classic parse tree or X-bar model that attempts to capture what humans are doing when we speak a language and put sentences together, and conclude by mentioning a few other models of human grammar.

The terms "language" and "grammar" get thrown around a lot, and it would take me well beyond my time here to work out a sturdy definition with you. I'll merely point you in a direction by narrowing language down to natural (human) language, which is an umbrella term for the specific first language you learned as a child (your first language) and whatever system or process you had available to you as a child that allowed you to learn that language.

At this point, the notion of grammar becomes relevant. Under one approach, the grammar is something you have in your mind that gets configured in a certain way, allowing you to use a language. Under another approach, grammar is something you puzzle out of a language as you learn it. In either case, notice that grammar is not a set of literary, stylistic or social preferences like 'don't say ain't - ain't ain't a word'. Instead, the linguistic understanding of grammar seeks out the fundamental mechanism for sentence production.

The common way to conceive of this grammar of natural language is (a) to ground it in the speech of real speakers and (b) to explain it as a series of rules. Let's clarify the term 'rule' here. These rules are statements or imperatives that tell the system how to handle chunks of language. The grammar (using our linguistic definition) takes some input, processes it using a series of rule statements, and produces or generates some output. This setup means that grammar allows certain sentences and disallows others. Its ability to generate language earns this broad-stroked model of grammar the title generative grammar.

One tool for modeling linguistic grammar is X-Bar Theory. This model parses the grammar of sentences by looking for the relationships between word classes, determining how to group these words into phrases, working out the key word or head of the phrase and building back until we can view the grammar of a whole sentence as a single expanded tree.

Parse Trees & X-Bar Theory

Building a tree for a simple sentence

basic parse tree for the sentence 'The boss ate soup'

The video shows a simple tree for the sentence 'the boss ate soup'. The major nodes on this tree are phrases. Each phrase has a head. For example, the head of a verb phrase is a verb. The other required elements in a phrase are either specifiers or complements. Specifiers are sisters of the subphrases, meaning that they branch off from the main phrase and sit parallel to any subphrases below it. (A specifier is the first non-head, non-complement branch from the phrase.) Complements are sisters of the head, so they branch out next to the head word. Optional modifiers like 'happy' and 'from Rome' could be added as well: 'the happy boss from Rome'. The specifier still stays on the first branch of the Noun Phrase (the NP). Specifiers and complements are relative notions found throughout the tree, just like heads - the subject of the sentence, this Noun Phrase (the subject NP 'the boss'), is the specifier of this Sentence node (the S 'the boss ate soup').
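
If you'd like to experiment with this structure yourself, here's a minimal sketch using Python's NLTK library (my choice of tool, not the lesson's - any tree library would do). The bracketed string encodes the same tree: the determiner 'the' sits in the specifier position of the NP, and the object NP 'soup' is the complement of the verb.

    from nltk import Tree

    # The same simple tree: specifier 'the' under NP, complement NP under VP.
    sentence = Tree.fromstring(
        "(S (NP (Det the) (N boss)) (VP (V ate) (NP (N soup))))")

    sentence.pretty_print()  # prints the tree as ASCII art in the terminal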

The tree still relies on terms for word classes (which traditional grammarians call parts of speech) like nouns (abbreviated N), verbs (V), prepositions (P) or determiners (det or D). What if I want to add a prepositional phrase like 'at home' to the sentence? We have to expand the nodes in our Verb Phrase. Instead of just the node for the verb and the Noun Phrase over here, we need a Prepositional Phrase (a PP). But this structure V-NP-PP starts to flatten out our tree.

parse tree with flat VP for the sentence 'The boss ate soup at home'

Linguistic evidence

When we start asserting the syntactic structure of a chunk of language, it makes sense to figure out whether the arrangements we're choosing have linguistic support. Do we have any reason to maintain this flat V-NP-PP structure? Or, conversely, do we have a rationale for adding some depth to this verb phrase? Consider two pieces of evidence (natural language data):

The boss ate soup at home. acceptable
*The boss ate at home soup. unacceptable

The second sentence doesn't work because the PP material intervenes between the V and its object. Indeed, the V seems to expect the core argument NP 'soup' to follow immediately. Core arguments fall adjacent to the verb.
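
One way to see this evidence at work is to hand both word orders to a toy phrase-structure grammar. This is only a sketch (the grammar and lexicon below are mine, not the lesson's): the rule VP -> V NP PP hard-codes the requirement that the core argument NP sit next to the verb, so the first sentence parses and the second fails.

    import nltk

    grammar = nltk.CFG.fromstring("""
        S  -> NP VP
        NP -> Det N | N
        VP -> V NP PP | V NP
        PP -> P NP
        Det -> 'the'
        N -> 'boss' | 'soup' | 'home'
        V -> 'ate'
        P -> 'at'
    """)
    parser = nltk.ChartParser(grammar)

    for words in (['the', 'boss', 'ate', 'soup', 'at', 'home'],
                  ['the', 'boss', 'ate', 'at', 'home', 'soup']):
        trees = list(parser.parse(words))
        print(' '.join(words), '->', 'acceptable' if trees else '*unacceptable')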

Before we solve this flatness problem in our tree, let's take a quick sidetrack into the issue of linguistic evidence. Ideally, any grammar we're proposing, especially any grammar that we claim models how humans produce language, or at least how computers can produce human language, must account for natural language sentences. Evidence consists of sentences that bear on the claim that our grammar works, and this evidence is positive evidence if it's an example of our language or negative evidence if it doesn't work in our language. (What it means for a sentence to be a negative example is a matter of debate.) Negative evidence is by convention marked with an asterisk (*).

Expanding the parse tree

Back to our tree. Let's account for the ordering of V-NP-PP by ranking the NP and PP according to their distance from the head V 'ate'. We add some intermediate nodes below the VP, some V' ("V-bar") nodes that allow us to expand adjacent phrases. Now we can see that the NP sits closer to the V, but the PP appears at a distance.
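
Here's how that deeper verb phrase looks in the same NLTK sketch (again, my own encoding). The intermediate V' node keeps the complement NP 'soup' adjacent to the head V, while the PP 'at home' attaches higher, at a distance:

    from nltk import Tree

    # The lower V' holds the head and its complement; the PP attaches above.
    vp = Tree.fromstring("""
        (VP (V' (V' (V ate) (NP (N soup)))
                (PP (P at) (NP (N home)))))
    """)
    vp.pretty_print()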

parse tree for the Sentence 'The boss ate soup at home'

We've modeled a complete sentence. Let's pull back and make some general observations, then work through some useful terminology for moving around the tree.

First, notice that we have multiple kinds of phrases - NP, PP, VP. But they all behave in the same way, containing some head N, V, P and expanding out either to the left or the right. We can abstract and say that all of our phrases take the form XP ("X-phrase"), where X stands for some head word class. Since each XP expands to some node X' ("X-bar"), the whole model takes the name X-Bar Theory.

What about our top node S? There's a mismatch between all of our expanding nodes here, which are XP, and this top branching node, which isn't named some XP. Consider a more appropriate name like IP (Inflectional Phrase). Of course, it follows that the head of an Inflectional Phrase is some I. The inflection may be pulled out as [+tense +agreement] information since this is a finite sentence. We can even expand beyond this sentence to a Complementizer Phrase (CP) with some complementizer word ('...that the boss ate soup at home'), which connects the IP to even larger chunks of language!
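
Here's a rough sketch of the renamed structure, wrapped in the CP layer (again with NLTK; in a fuller X-bar treatment the IP would expand through its own I' to an I head carrying the [+tense +agreement] features, but I've kept NP and VP directly under IP to match the tree in the video):

    from nltk import Tree

    # The complementizer 'that' connects the IP to larger chunks of language.
    cp = Tree.fromstring("""
        (CP (C that)
            (IP (NP (Det the) (N boss))
                (VP (V' (V' (V ate) (NP (N soup)))
                        (PP (P at) (NP (N home)))))))
    """)
    cp.pretty_print()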

parse tree for the Inflectional Phrase 'The boss ate soup at home'

Walking the branches of the tree

At this point we have a parse tree that we can walk. Starting at the top node, this IP node dominates all the nodes below it. The nodes below each dominate the nodes below them. In all cases, notice that each node in our tree immediately dominates at most two other nodes - for instance, IP immediately dominates NP & VP. We've followed the binary branching hypothesis, which maintains this kind of structure to give our sentences depth, flexibility and, according to proponents, grammatical accuracy.

Nodes falling to the left of other nodes on the same level precede nodes to the right. This precedence differs from dominance - for example, NP precedes VP. A node c-commands its sister and every node that sister dominates, so here the lower V' c-commands the PP, P', P & NP. We continue to walk these branches until we reach the leaves, which are the terminal symbols. These terminal symbols contain the actual morphemes or words of the sentence.
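
These walking relations are easy to state over NLTK tree positions, where each node is addressed by the tuple of child indexes leading down from the root (this encoding is my own sketch). Dominance becomes a prefix test, and under binary branching, c-command reduces to 'my parent dominates you, but I don't':

    from nltk import Tree

    def dominates(a, b):
        """Node at position a properly dominates node at position b."""
        return len(a) < len(b) and b[:len(a)] == a

    def precedes(a, b):
        """a precedes b: neither dominates the other, and a sits to the left."""
        return not dominates(a, b) and not dominates(b, a) and a < b

    def c_commands(a, b):
        """a c-commands b: a's parent dominates b, but a itself does not."""
        return a != b and not dominates(a, b) and dominates(a[:-1], b)

    tree = Tree.fromstring(
        "(IP (NP (Det the) (N boss)) (VP (V ate) (NP (N soup))))")
    ip, np, vp = (), (0,), (1,)
    print(dominates(ip, np))       # True: IP dominates the subject NP
    print(precedes(np, vp))        # True: NP precedes VP
    print(c_commands(np, (1, 0)))  # True: NP c-commands the V inside VP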

Rewrite rules

There's a shorthand way to capture this whole grammar in a succinct collection of statements called rewrite rules. We've seen that NP expands to N' and N, and that VP expands to V' and V or another phrase like NP or PP. So we can take all these together to say that (1) an XP, where X is your head word class, can be rewritten as X' together with some other phrase YP; (2) an X' expands to X together with another phrase YP; (3) an X is a terminal symbol, so the branch has ended. This takes us from the top to the bottom of a tree.

The rewrite rules 1-3 above are typically reduced to the following expressions:

(1) XP -> X', YP    X phrases can expand
(2) X' -> X, YP     X-bars can expand
(3) X  -> X         X terminates the branch
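
To make the schema concrete, here's a toy Python sketch that treats the rules as a rewriting system (the lexicon is invented for illustration). Any symbol with an entry in the table can expand; any symbol without one is a terminal, so the branch ends there, exactly as in rule 3:

    import random

    # Each rule rewrites one symbol into a sequence of symbols.
    RULES = {
        'IP': [['NP', 'VP']],
        'NP': [['Det', 'N'], ['N']],
        'VP': [["V'"], ["V'", 'PP']],
        "V'": [['V', 'NP']],
        'PP': [['P', 'NP']],
        'Det': [['the']],
        'N': [['boss'], ['soup'], ['home']],
        'V': [['ate']],
        'P': [['at']],
    }

    def rewrite(symbol):
        """Expand a symbol top-down until every branch hits a terminal."""
        if symbol not in RULES:
            return [symbol]                       # terminal: the branch ends
        expansion = random.choice(RULES[symbol])  # pick one rewrite for it
        return [word for part in expansion for word in rewrite(part)]

    print(' '.join(rewrite('IP')))  # e.g. 'the boss ate soup at home'

Run it a few times and you may also catch the grammar overgenerating (e.g. 'home ate the soup') - a toy lexicon like this needs selection restrictions before it stops producing odd sentences.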

Ambiguity

Scopes: global vs. local

Parse trees may help computers and people resolve the kinds of ambiguities that thrive in natural language. You're already aware that sentences can be ambiguous - someone has told you something that could be interpreted in different ways.

Ambiguity has two general scopes. A sentence can have global ambiguity or local ambiguity. Global ambiguity impacts the whole sentence, like in the canonical example 'time flies like an arrow'. Local ambiguity is limited to one or more pieces of a sentence.

Types of ambiguity

Besides scope, there are also different types of ambiguity. Structural ambiguity occurs when more than one parse tree can be used to represent the words, so different grammatical structures yield different interpretations of the sentence.

Consider again the sentence 'time flies like an arrow'. If you haven't evaluated that phrase before, think about it now... You can read the sentence in various ways, including with 'time' as the verb, 'flies' as the verb or 'like' as the verb.
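
A chart parser makes this concrete: give it a deliberately permissive toy grammar in which 'time', 'flies' and 'like' each belong to more than one word class, and it returns one tree per available reading. The grammar below is my own sketch, again using NLTK:

    import nltk

    grammar = nltk.CFG.fromstring("""
        S  -> NP VP | VP
        NP -> N | N N | Det N
        VP -> V NP | V PP | V NP PP
        PP -> P NP
        N -> 'time' | 'flies' | 'arrow'
        V -> 'time' | 'flies' | 'like'
        P -> 'like'
        Det -> 'an'
    """)
    parser = nltk.ChartParser(grammar)

    for tree in parser.parse('time flies like an arrow'.split()):
        print(tree)  # three parses: 'time', 'flies' or 'like' as the verb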

When one of the leaves of a parse tree (the words) can be understood in multiple ways, you have an example of word sense ambiguity. The meaning of the word 'cards' in 'she has cards in her pocket' is ambiguous. We know it's a noun, but does it mean 'credit cards' or 'playing cards'?

When it's unclear what a pronoun refers back to, that's a case of referential ambiguity. In 'John told Jake that he has to come to the party', we know that 'he' is a pronoun, but does it refer to John or to Jake?

Conclusion & Other Models of Grammar

Alright, it's time to conclude this rough survey of parse trees. The topic of grammars is deep, contentious and central to questions about the structure of individual languages, the human brain's ability to acquire language, computer processing of language and artificial intelligence. There are many linguistic models of grammar, including other rule-based approaches (e.g. Government & Binding Theory), statistical approaches based on large amounts of natural language data (e.g. Statistical Parsing), and approaches that see language as the outcome of competing, ranked, violable constraints instead of inviolable rules (e.g. Optimality Theory). Those are all topics for another day. It's been fun to work through a simplified model with you, and thanks for learning with me.