Earley parser

In computer science, the Earley parser is an algorithm for parsing strings that belong to a given context-free language, though (depending on the variant) it may suffer problems with certain nullable grammars.[1] The algorithm, named after its inventor, Jay Earley, is a chart parser that uses dynamic programming; it is mainly used for parsing in computational linguistics. It was first introduced in his dissertation[2] in 1968 (and later appeared in an abbreviated, more legible, form in a journal[3]).

Earley parsers are appealing because they can parse all context-free languages, unlike LR parsers and LL parsers, which are more typically used in compilers but which can only handle restricted classes of languages. The Earley parser executes in cubic time, O(n³), in the general case, where n is the length of the parsed string, in quadratic time, O(n²), for unambiguous grammars,[4] and in linear time for all LR(k) grammars. It performs particularly well when the rules are written left-recursively.

Earley recogniser

The following algorithm describes the Earley recogniser. The recogniser can be easily modified to create a parse tree as it recognises, and in that way can be turned into a parser.

The algorithm

In the following descriptions, α, β, and γ represent any string of terminals/nonterminals (including the empty string), X and Y represent single nonterminals, and a represents a terminal symbol.

Earley's algorithm is a top-down dynamic programming algorithm. In the following, we use Earley's dot notation: given a production X → αβ, the notation X → α • β represents a condition in which α has already been parsed and β is expected.

Input position 0 is the position prior to input. Input position n is the position after accepting the nth token. (Informally, input positions can be thought of as locations at token boundaries.) For every input position, the parser generates a state set. Each state is a tuple (X → α • β, i), consisting of

  • the production currently being matched (X → α β)
  • our current position in that production (represented by the dot)
  • the position i in the input at which the matching of this production began: the origin position

(Earley's original algorithm included a look-ahead in the state; later research showed this to have little practical effect on the parsing efficiency, and it has subsequently been dropped from most implementations.)
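
For concreteness, such a state might be represented as a small record type. The following Python sketch is illustrative only (the names State, head, body, dot, and origin are not from any particular implementation); it is made hashable so that duplicate states can be detected, which the algorithm relies on.

from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    """One Earley item: head → body, with the dot after `dot` symbols,
    begun at input position `origin`."""
    head: str     # left-hand side of the production, e.g. "S"
    body: tuple   # right-hand side symbols, e.g. ("S", "+", "M")
    dot: int      # number of symbols of `body` already matched
    origin: int   # input position where matching of this production began

    def finished(self):
        # True for a state of the form (X → γ •, i)
        return self.dot == len(self.body)

    def next_symbol(self):
        # The symbol immediately after the dot, or None if finished
        return None if self.finished() else self.body[self.dot]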

The state set at input position k is called S(k). The parser is seeded with S(0) consisting of only the top-level rule. The parser then repeatedly executes three operations: prediction, scanning, and completion.

  • Prediction: For every state in S(k) of the form (X → α • Y β, j) (where j is the origin position as above), add (Y → • γ, k) to S(k) for every production in the grammar with Y on the left-hand side (Y → γ).
  • Scanning: If a is the next symbol in the input stream, for every state in S(k) of the form (X → α • a β, j), add (X → α a • β, j) to S(k+1).
  • Completion: For every state in S(k) of the form (X → γ •, j), find states in S(j) of the form (Y → α • X β, i) and add (Y → α X • β, i) to S(k).

It is important to note that duplicate states are not added to the state set, only new ones. These three operations are repeated until no new states can be added to the set. The set is generally implemented as a queue of states to process, with the operation to be performed depending on what kind of state it is.
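
Such a deduplicating queue can be sketched, for instance, as an append-only list paired with a set for membership tests (the name OrderedSet is illustrative):

class OrderedSet:
    """Append-only collection that preserves insertion order and
    silently ignores duplicates, so states are processed exactly once."""
    def __init__(self):
        self.items = []    # states in the order they were added
        self.seen = set()  # fast duplicate check

    def add(self, state):
        if state not in self.seen:
            self.seen.add(state)
            self.items.append(state)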

The algorithm accepts if (X → γ •, 0) ends up in S(n), where (X → γ) is the top-level rule and n is the input length; otherwise it rejects.

Pseudocode

Adapted from Speech and Language Processing[5] by Daniel Jurafsky and James H. Martin:

DECLARE ARRAY S;

function INIT(words)
    S ← CREATE-ARRAY(LENGTH(words) + 1)
    for k ← 0 to LENGTH(words) do
        S[k] ← EMPTY-ORDERED-SET

function EARLEY-PARSE(words, grammar)
    INIT(words)
    ADD-TO-SET((γ → •S, 0), S[0])
    for k ← 0 to LENGTH(words) do
        for each state in S[k] do  // S[k] can expand during this loop
            if not FINISHED(state) then
                if NEXT-ELEMENT-OF(state) is a nonterminal then
                    PREDICTOR(state, k, grammar)         // non-terminal
                else do
                    SCANNER(state, k, words)             // terminal
            else do
                COMPLETER(state, k)
        end
    end
    return S  // the completed chart of state sets

procedure PREDICTOR((A → α•Bβ, j), k, grammar)
    for each (B → γ) in GRAMMAR-RULES-FOR(B, grammar) do
        ADD-TO-SET((B → •γ, k), S[k])
    end

procedure SCANNER((A → α•aβ, j), k, words)
    if a ∈ PARTS-OF-SPEECH(words[k]) then
        ADD-TO-SET((A → αa•β, j), S[k+1])
    end

procedure COMPLETER((B → γ•, x), k)
    for each (A → α•Bβ, j) in S[x] do
        ADD-TO-SET((A → αB•β, j), S[k])
    end
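
The pseudocode above translates fairly directly into runnable Python. The sketch below reuses the State and OrderedSet classes from earlier; it assumes a grammar given as a mapping from each nonterminal to a list of right-hand sides, and it matches terminals by string equality rather than by part-of-speech lookup (Jurafsky and Martin's version consults a lexicon instead):

GAMMA = "γ"  # dummy top-level symbol, as in the pseudocode

def earley_parse(words, grammar, start="S"):
    # A symbol is a nonterminal iff it has an entry in `grammar`.
    S = [OrderedSet() for _ in range(len(words) + 1)]
    S[0].add(State(GAMMA, (start,), 0, 0))
    for k in range(len(words) + 1):
        i = 0
        while i < len(S[k].items):          # S[k] can grow during this loop
            state = S[k].items[i]
            i += 1
            if not state.finished():
                if state.next_symbol() in grammar:
                    predictor(state, k, grammar, S)   # nonterminal
                elif k < len(words):
                    scanner(state, k, words, S)       # terminal
            else:
                completer(state, k, S)
    return S  # the chart

def predictor(state, k, grammar, S):
    for body in grammar[state.next_symbol()]:
        S[k].add(State(state.next_symbol(), tuple(body), 0, k))

def scanner(state, k, words, S):
    if state.next_symbol() == words[k]:
        S[k + 1].add(State(state.head, state.body, state.dot + 1, state.origin))

def completer(state, k, S):
    # Snapshotting S[origin] here exhibits the classic difficulty with
    # nullable grammars mentioned in the introduction: when origin == k,
    # states added later in the same pass are missed.
    for old in list(S[state.origin].items):
        if not old.finished() and old.next_symbol() == state.head:
            S[k].add(State(old.head, old.body, old.dot + 1, old.origin))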

Example

Consider the following simple grammar for arithmetic expressions:

<P> ::= <S>      # the start rule
<S> ::= <S> "+" <M> | <M>
<M> ::= <M> "*" <T> | <T>
<T> ::= "1" | "2" | "3" | "4"

With the input:

2 + 3 * 4

This is the sequence of state sets:

(no.)  Production      Origin  Comment

S(0): • 2 + 3 * 4
 (1)   P → • S         0       start rule
 (2)   S → • S + M     0       predict from (1)
 (3)   S → • M         0       predict from (1)
 (4)   M → • M * T     0       predict from (3)
 (5)   M → • T         0       predict from (3)
 (6)   T → • number    0       predict from (5)

S(1): 2 • + 3 * 4
 (1)   T → number •    0       scan from S(0)(6)
 (2)   M → T •         0       complete from (1) and S(0)(5)
 (3)   M → M • * T     0       complete from (2) and S(0)(4)
 (4)   S → M •         0       complete from (2) and S(0)(3)
 (5)   S → S • + M     0       complete from (4) and S(0)(2)
 (6)   P → S •         0       complete from (4) and S(0)(1)

S(2): 2 + • 3 * 4
 (1)   S → S + • M     0       scan from S(1)(5)
 (2)   M → • M * T     2       predict from (1)
 (3)   M → • T         2       predict from (1)
 (4)   T → • number    2       predict from (3)

S(3): 2 + 3 • * 4
 (1)   T → number •    2       scan from S(2)(4)
 (2)   M → T •         2       complete from (1) and S(2)(3)
 (3)   M → M • * T     2       complete from (2) and S(2)(2)
 (4)   S → S + M •     0       complete from (2) and S(2)(1)
 (5)   S → S • + M     0       complete from (4) and S(0)(2)
 (6)   P → S •         0       complete from (4) and S(0)(1)

S(4): 2 + 3 * • 4
 (1)   M → M * • T     2       scan from S(3)(3)
 (2)   T → • number    4       predict from (1)

S(5): 2 + 3 * 4 •
 (1)   T → number •    4       scan from S(4)(2)
 (2)   M → M * T •     2       complete from (1) and S(4)(1)
 (3)   M → M • * T     2       complete from (2) and S(2)(2)
 (4)   S → S + M •     0       complete from (2) and S(2)(1)
 (5)   S → S • + M     0       complete from (4) and S(0)(2)
 (6)   P → S •         0       complete from (4) and S(0)(1)

The state (P → S •, 0) represents a completed parse. This state also appears in S(3) and S(1), since "2" and "2 + 3" are themselves complete sentences of the grammar.
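
Under the encoding assumed by the earlier Python sketch, this grammar and input can be fed to the recogniser as follows (the digit alternatives of T play the role of "number" in the trace above):

grammar = {
    "P": [("S",)],
    "S": [("S", "+", "M"), ("M",)],
    "M": [("M", "*", "T"), ("T",)],
    "T": [("1",), ("2",), ("3",), ("4",)],
}

chart = earley_parse("2 + 3 * 4".split(), grammar, start="P")

# The input is accepted iff (γ → P •, 0) appears in the last state set.
accepted = any(s.head == GAMMA and s.finished() and s.origin == 0
               for s in chart[-1].items)
print(accepted)  # True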

Constructing the parse forest

Earley's dissertation[6] briefly describes an algorithm for constructing parse trees by adding a set of pointers from each non-terminal in an Earley item back to the items that caused it to be recognized. But Tomita noticed[7] that this does not take into account the relations between symbols, so if we consider the grammar S → SS | b and the string bbb, it only notes that each S can match one or two b's, and thus produces spurious derivations for bb and bbbb as well as the two correct derivations for bbb.

Another method[8] is to build the parse forest as you go, augmenting each Earley item with a pointer to a shared packed parse forest (SPPF) node labelled with a triple (s, i, j) where s is a symbol or an LR(0) item (production rule with dot), and i and j give the section of the input string derived by this node. A node's contents are either a pair of child pointers giving a single derivation, or a list of "packed" nodes each containing a pair of pointers and representing one derivation. SPPF nodes are unique (there is only one with a given label), but may contain more than one derivation for ambiguous parses. So even if an operation does not add an Earley item (because it already exists), it may still add a derivation to the item's parse forest.

  • Predicted items have a null SPPF pointer.
  • The scanner creates an SPPF node representing the terminal it is scanning.
  • Then when the scanner or completer advances an item, it adds a derivation whose children are the node from the item whose dot was advanced and the node for the new symbol that was advanced over (the terminal or the completed item).

Note also that SPPF nodes are never labelled with a completed LR(0) item: instead they are labelled with the symbol that is produced, so that all derivations are combined under one node regardless of which alternative production they come from.
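
The labelling and packing scheme (though not Scott's full construction) might be sketched as follows; the names SPPFNode and SPPFForest are illustrative:

class SPPFNode:
    """Node labelled (s, i, j): symbol or dotted rule s deriving
    the input from position i to position j."""
    def __init__(self, label):
        self.label = label      # the triple (s, i, j)
        self.families = []      # each entry is one derivation: a tuple of
                                # child nodes; more than one entry means the
                                # derivations are "packed" under this node

    def add_family(self, children):
        if children not in self.families:  # a genuinely new derivation
            self.families.append(children)

class SPPFForest:
    """Guarantees node uniqueness: exactly one node per label."""
    def __init__(self):
        self.nodes = {}

    def node(self, s, i, j):
        return self.nodes.setdefault((s, i, j), SPPFNode((s, i, j)))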

Citations

  1. Kegler, Jeffrey. "What is the Marpa algorithm?". Retrieved 20 August 2013.
  2. Earley, Jay (1968). An Efficient Context-Free Parsing Algorithm (PDF). Carnegie-Mellon Dissertation.
  3. Earley, Jay (1970), "An efficient context-free parsing algorithm" (PDF), Communications of the ACM, 13 (2): 94–102, doi:10.1145/362007.362035
  4. John E. Hopcroft and Jeffrey D. Ullman (1979). Introduction to Automata Theory, Languages, and Computation. Reading/MA: Addison-Wesley. ISBN 0-201-02988-X. p.145
  5. Jurafsky, D.; Martin, J. H. (2009). Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Pearson Prentice Hall. ISBN 9780131873216.
  6. Earley, Jay (1968). An Efficient Context-Free Parsing Algorithm (PDF). Carnegie-Mellon Dissertation. p. 106.
  7. Tomita, Masaru (April 17, 2013). Efficient Parsing for Natural Language: A Fast Algorithm for Practical Systems. Springer Science and Business Media. p. 74. ISBN 1475718853. Retrieved 16 September 2015.
  8. Scott, Elizabeth (April 1, 2008). "SPPF-Style Parsing From Earley Recognizers". Electronic Notes in Theoretical Computer Science. 203 (2): 53–67. doi:10.1016/j.entcs.2008.03.044. Retrieved 16 September 2015.

Other reference materials

  • Aycock, John; Horspool, R. Nigel (2002). "Practical Earley Parsing". The Computer Journal. 45 (6): 620–630. doi:10.1093/comjnl/45.6.620.
  • Leo, Joop M. I. M. (1991), "A general context-free parsing algorithm running in linear time on every LR(k) grammar without using lookahead", Theoretical Computer Science, 82 (1): 165–176, doi:10.1016/0304-3975(91)90180-A, MR 1112117
  • Tomita, Masaru (1984). "LR parsers for natural languages" (PDF). COLING. 10th International Conference on Computational Linguistics. pp. 354–357.

Implementations

Java

  • PEN – a Java library that implements the Earley algorithm
  • Pep – a Java library that implements the Earley algorithm and provides charts and parse trees as parsing artifacts
  • digitalheir/java-probabilistic-earley-parser - a Java library that implements the probabilistic Earley algorithm, which is useful to determine the most likely parse tree from an ambiguous sentence

C#

  • coonsta/earley - An Earley parser in C#
  • patrickhuber/pliant - An Earley parser that integrates the improvements adopted by Marpa and demonstrates Elizabeth Scott's tree building algorithm.
  • ellisonch/CFGLib - Probabilistic Context Free Grammar (PCFG) Library for C# (Earley + SPPF, CYK)

OCaml

  • Simple Earley - An implementation of a simple Earley-like parsing algorithm, with documentation.

Perl

  • Marpa::R2 – a Perl module. Marpa is an Earley parser that incorporates the improvements made by Joop Leo and by Aycock and Horspool.
  • Parse::Earley – a Perl module implementing Jay Earley's original algorithm

Python

  • Lark – an object-oriented, procedural implementation of an Earley parser in <200 lines of code
  • NLTK – a Python toolkit with an Earley parser
  • Spark – an object-oriented little language framework for Python implementing an Earley parser
  • spark_parser – updated and packaged version of the Spark parser above, which runs in both Python 3 and Python 2
  • earley3.py – a stand-alone implementation of the algorithm in less than 150 lines of code, including generation of the parsing-forest and samples
  • tjr_python_earley_parser - a minimal Earley parser in Python
