start with the tag is ignored, and the tag is stripped from the rest. So
`--tag Foo`
means that the three needed sections must be `Foo: header`, `Foo: code`,
-and `Foo: grammar`.
+and `Foo: grammar`. The tag `calc` is used to extract the same calculator
+from this file.
[mdcode]: mdcode.html
[scanner]: scanner.html
s->precedence = g->prec_levels;
s->assoc = assoc;
found += 1;
+ t = token_next(ts);
}
if (found == 0)
err = "No symbols given on precedence line";
return code;
}
-Now we have all the bit we need to parse a full production.
+Now we have all the bits we need to parse a full production.
###### includes
#include <memory.h>
struct symbol **body;
int body_size;
struct text code;
+ int code_line;
###### symbol fields
int first_production;
tk = token_next(state);
}
if (tk.num == TK_open) {
+ p.code_line = tk.line;
p.code = collect_code(state, tk);
if (p.code.txt == NULL) {
err = "code fragment not closed properly";
p->head->first_production = g->production_count;
array_add(&g->productions, &g->production_count, p);
-Now we are ready to read in the grammar.
+Now we are ready to read in the grammar. We ignore comments
+and strings so that the marks which introduce them can be
+used as terminals in the grammar. We don't ignore numbers
+even though we don't allow them as that causes the scanner
+to produce errors that the parser is better positioned to handle.
###### grammar_read
static struct grammar *grammar_read(struct code_node *code)
}
We will often want to form the union of two symsets. When we do, we
-will often want to know if anything changed (as they might mean there
+will often want to know if anything changed (as that might mean there
is more work to do). So `symset_union` reports whether anything was
added to the first set. We use a slow quadratic approach as these
-sets don't really get very big. If profiles shows this to be a problem is
+sets don't really get very big. If profiling shows this to be a problem it
can be optimised later.
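As a self-contained illustration of the idea (a union that reports whether it changed anything), here is a minimal sketch over plain arrays. The names and types here are invented for the example and are not parsergen's real `symset` code:

```c
/* Sketch only: merge `b` into `a` (capacity assumed sufficient) and
 * report whether anything was added.  Like the real symset_union,
 * this is deliberately quadratic - the sets stay small. */
static int union_report(unsigned short *a, int *acnt,
                        const unsigned short *b, int bcnt)
{
	int changed = 0, i, j;

	for (i = 0; i < bcnt; i++) {
		for (j = 0; j < *acnt; j++)
			if (a[j] == b[i])
				break;
		if (j == *acnt) {
			/* not present: append and note the change */
			a[(*acnt)++] = b[i];
			changed = 1;
		}
	}
	return changed;
}
```

A caller doing fixed-point iteration, as `set_nullable` does, simply loops until every union call returns 0.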
static int symset_union(struct symset *a, struct symset *b)
To know which symbols are line-like, we first need to know which
symbols start with a NEWLINE token. Any symbol which is followed by a
NEWLINE, or anything that starts with a NEWLINE, is deemed to be a line-like symbol.
-Certainly when trying to parse one of these we must take not of NEWLINEs.
+Certainly when trying to parse one of these we must take note of NEWLINEs.
Clearly the `TK_newline` token can start with a NEWLINE. Any symbol
which is the head of a production that contains a starts-with-NEWLINE
this attribute of symbols, and compute it in a repetitive manner
similar to `set_nullable`.
-Once we have that, we can determine which symbols are `line_like` be
+Once we have that, we can determine which symbols are `line_like` by
seeing which are followed by a `can_eol` symbol in any production.
###### symbol fields
`set_nullable`. This makes use of the fact that `symset_union`
reports if any change happens.
-The core of this which finds the "first" of part of a production body
+The core of this, which finds the "first" of part of a production body,
will be reused for computing the "follow" sets, so we split it out
into a separate function.
both a success flag and a pointer to where in the list an insert
should happen, so we don't need to search a second time.
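The search-once idea can be sketched in isolation. The helper below is hypothetical (parsergen's real `symset_find` works on its own sorted `symset` arrays), but it shows the shape: return a found flag, and report through a pointer where an insert should go so no second search is needed:

```c
/* Sketch only: binary search a sorted array.  On success return 1
 * with *pos set to the match; on failure return 0 with *pos set to
 * the index where an insert would keep the array sorted. */
static int find_pos(const unsigned short *syms, int cnt,
                    unsigned short key, int *pos)
{
	int lo = 0, hi = cnt;		/* search the range [lo, hi) */

	while (lo < hi) {
		int mid = (lo + hi) / 2;
		if (syms[mid] == key) {
			*pos = mid;
			return 1;
		}
		if (syms[mid] < key)
			lo = mid + 1;
		else
			hi = mid;
	}
	*pos = lo;			/* where an insert belongs */
	return 0;
}
```

On a miss, the caller can shuffle the tail up and store the new symbol at `*pos` without searching again.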
+FIXME: document min_prefix
+
###### declarations
struct itemset {
struct itemset *next;
short state;
struct symset items;
struct symset go_to;
+ enum assoc assoc;
+ unsigned short precedence;
char completed;
char starts_line;
int min_prefix;
them to a data structure, or freeing them.
static int add_itemset(struct grammar *g, struct symset ss,
+ enum assoc assoc, unsigned short precedence,
enum grammar_type type)
{
struct itemset **where, *is;
is->state = g->states;
g->states += 1;
is->items = ss;
+ is->assoc = assoc;
+ is->precedence = precedence;
is->next = *where;
is->go_to = INIT_DATASET;
*where = is;
If any of these symbols are flagged as starting a line, then this
state must be a `starts_line` state so now is a good time to record that.
-NOTE: precedence handling should happen here - I haven't written this yet
-though.
+When itemsets are created we assign a precedence to the itemset from
+the complete item, if there is one. We ignore the possibility of
+there being two and don't (currently) handle precedence in such
+grammars. When completing an itemset we ignore any item where DOT is
+followed by a terminal with a precedence lower (numerically higher)
+than that for the itemset. Unless the terminal has right
+associativity, we also ignore items where the terminal has the same
+precedence. The result is that unwanted items are still in the
+itemset, but the terminal doesn't get into the go to set, so the item
+is ineffective.
###### complete itemset
for (i = 0; i < is->items.cnt; i++) {
if (bs == pr->body_size)
continue;
s = pr->body[bs];
+ if (s->precedence && is->precedence &&
+ is->precedence < s->precedence)
+ /* This terminal has a low precedence and
+ * shouldn't be shifted
+ */
+ continue;
+ if (s->precedence && is->precedence &&
+ is->precedence == s->precedence && s->assoc != Right)
+ /* This terminal has a matching precedence and is
+ * not Right-associative, so we mustn't shift it.
+ */
+ continue;
if (symset_find(&done, s->num) < 0) {
symset_add(&done, s->num, 0);
if (s->line_like)
int j;
unsigned short state;
struct symbol *sym = g->symtab[done.syms[i]];
+ enum assoc assoc = Non;
+ unsigned short precedence = 0;
struct symset newitemset = INIT_SYMSET;
if (type >= LALR)
newitemset = INIT_DATASET;
if (type >= LALR)
la = is->items.data[j];
pos = symset_find(&newitemset, pr->head->num);
+ if (bp + 1 == pr->body_size &&
+ pr->precedence > 0 &&
+ (precedence == 0 ||
+ pr->precedence < precedence)) {
+ // new itemset is reducible and has a precedence.
+ precedence = pr->precedence;
+ assoc = pr->assoc;
+ }
if (pos < 0)
symset_add(&newitemset, item_num(p, bp+1), la);
else if (type >= LALR) {
}
}
}
- state = add_itemset(g, newitemset, type);
+ state = add_itemset(g, newitemset, assoc, precedence, type);
if (symset_find(&is->go_to, done.syms[i]) < 0)
symset_add(&is->go_to, done.syms[i], state);
}
-All that is left is to crate the initial itemset from production zero, and
+All that is left is to create the initial itemset from production zero, and
with `TK_eof` as the LA set.
###### functions
}
// production 0, offset 0 (with no data)
symset_add(&first, item_num(0, 0), la);
- add_itemset(g, first, type);
+ add_itemset(g, first, Non, 0, type);
for (again = 0, is = g->items;
is;
is = is->next ?: again ? (again = 0, g->items) : NULL) {
if (dot == pr->body_size)
printf(" .");
printf(" [%d]", p);
- if (pr->precedence)
+ if (pr->precedence && dot == pr->body_size)
printf(" (%d%s)", pr->precedence,
assoc_names[pr->assoc]);
+ if (dot < pr->body_size &&
+ pr->body[dot]->precedence) {
+ struct symbol *s = pr->body[dot];
+ printf(" [%d%s]", s->precedence,
+ assoc_names[s->assoc]);
+ }
printf("\n");
}
for (s = 0; s < g->states; s++) {
int j;
struct itemset *is = g->statetab[s];
- printf(" Itemset %d:%s min prefix=%d\n",
+ printf(" Itemset %d:%s min prefix=%d",
s, is->starts_line?" (startsline)":"", is->min_prefix);
+ if (is->precedence)
+ printf(" %d%s", is->precedence, assoc_names[is->assoc]);
+ printf("\n");
for (j = 0; j < is->items.cnt; j++) {
report_item(g, is->items.syms[j]);
if (is->items.data != NO_DATA)
LR0 conflicts occur in any state which has both a reducible item and
a shiftable item, or two reducible items.
-LR05 conflicts only occurs if two possibly reductions exist,
+LR05 conflicts only occur if two possible reductions exist,
as shifts always over-ride reductions.
###### conflict functions
short reduce_prod;
short reduce_size;
short reduce_sym;
- short shift_sym;
short starts_line;
short min_prefix;
};
for (i = 0; i < g->states; i++) {
struct itemset *is = g->statetab[i];
int j, prod = -1, prod_len;
- int shift_sym = -1;
- int shift_len = 0, shift_remain = 0;
+
for (j = 0; j < is->items.cnt; j++) {
int itm = is->items.syms[j];
int p = item_prod(itm);
int bp = item_index(itm);
struct production *pr = g->productions[p];
- if (bp < pr->body_size) {
- if (shift_sym < 0 ||
- (shift_len == bp && shift_remain > pr->body_size - bp)) {
- shift_sym = pr->body[bp]->num;
- shift_len = bp;
- shift_remain = pr->body_size - bp;
- }
+ if (bp < pr->body_size)
continue;
- }
/* This is what we reduce */
if (prod < 0 || prod_len < pr->body_size) {
prod = p;
}
if (prod >= 0)
- fprintf(f, "\t[%d] = { %d, goto_%d, %d, %d, %d, 0, %d, %d },\n",
+ fprintf(f, "\t[%d] = { %d, goto_%d, %d, %d, %d, %d, %d },\n",
i, is->go_to.cnt, i, prod,
g->productions[prod]->body_size,
g->productions[prod]->head->num,
is->starts_line, is->min_prefix);
else
- fprintf(f, "\t[%d] = { %d, goto_%d, -1, -1, -1, %d, %d, %d },\n",
- i, is->go_to.cnt, i, shift_sym,
+ fprintf(f, "\t[%d] = { %d, goto_%d, -1, -1, -1, %d, %d },\n",
+ i, is->go_to.cnt, i,
is->starts_line, is->min_prefix);
}
fprintf(f, "};\n\n");
The code fragment requires translation when written out. Any `$N` needs to
be converted to a reference either to that buffer (if `$0`) or to the
structure returned by a previous reduction. These pointers need to be cast
-to the appropriate type for each access. All this is handling in
+to the appropriate type for each access. All this is handled in
`gen_code`.
`gen_code` also allows symbol references to contain a '`<`' as in '`$<2`'.
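To make the `$N` translation concrete, here is a hypothetical sketch of just the substitution step. The `body[]` naming and the single fixed result type are assumptions for illustration; the real `gen_code` knows each symbol's own struct type and also handles `$0` (the result buffer) and `$<N` specially:

```c
#include <stdio.h>
#include <ctype.h>
#include <string.h>

/* Sketch only: rewrite $1..$9 in `code` as casts through a reduce
 * stack `body[]`, writing the result to `out` (assumed big enough).
 * $N names the N-th body symbol, so it indexes body[N-1]. */
static void subst(const char *code, char *out, const char *type)
{
	while (*code) {
		if (code[0] == '$' && isdigit((unsigned char)code[1]) &&
		    code[1] != '0') {
			int n = code[1] - '0';
			out += sprintf(out, "(*(struct %s*)body[%d])",
			               type, n - 1);
			code += 2;
		} else
			*out++ = *code++;
	}
	*out = '\0';
}
```

So a fragment like `$1 + $2` would come out as two casts through `body[0]` and `body[1]`, each with the appropriate struct type.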
struct production *p = g->productions[i];
fprintf(f, "\tcase %d:\n", i);
- if (p->code.txt)
+ if (p->code.txt) {
+ fprintf(f, "#line %d \"%s\"\n", p->code_line, file);
gen_code(p, f, g);
+ }
if (p->head->struct_name.txt)
fprintf(f, "\t\tret_size = sizeof(struct %.*s%s);\n",
For this, the grammar author is required to define a `free_XX` function for
each structure that is used by a non-terminal. `do_free` will call whichever
is appropriate given a symbol number, and will call `free` (as is
-appropriate for tokens on any terminal symbol.
+appropriate for tokens) on any terminal symbol.
###### functions
To be able to run `mdcode` and `scanner` on the grammar we need to
memory-map it.
-One we have extracted the code (with `mdcode`) we expect to find three
+Once we have extracted the code (with `mdcode`) we expect to find three
sections: header, code, and grammar. Anything else that is not
excluded by the `--tag` option is an error.
rv |= 1;
}
-If a headers section is defined, we write it out and include a declaration
+If a "headers" section is defined, we write it out and include a declaration
for the `parse_XX` function so it can be used from separate code.
if (rv == 0 && hdr && outfile) {
freeing function. The symbol leads us to the right free function through
`do_free`.
-The `indents` count tracks the line indents with in the symbol or
+The `indents` count tracks the line indents within the symbol or
immediately following it. These are used to allow indent information to
guide parsing and error recovery.
`TK_in` tokens are easy: we simply update indent count in the top stack frame to
record how many indents there are following the previous token.
-`TK_out` tokens must either be canceled against an indent count
+`TK_out` tokens must be canceled against an indent count
within the stack. If we can reduce some symbols that were all parsed since
the most recent indent, then we do that first. If the minimum prefix
-of the current state then extents back before the most recent indent,
+of the current state then extends back before the most recent indent,
that indent can be cancelled. If the minimum prefix is shorter then
the indent is premature and we must start error handling, which
currently doesn't work at all.
parser_trace_action(trace, "Reduce");
continue;
}
- if (tk->num == TK_out) {
- // Indent problem - synthesise tokens to get us
- // out of here.
- fprintf(stderr, "Synthesize %d to handle indent problem\n", states[tos->state].shift_sym);
- shift(&p, states[tos->state].shift_sym,
- 0, 1, tok_copy(*tk), states);
- // FIXME need to report this error somehow
- parser_trace_action(trace, "Synthesize");
- continue;
- }
/* Error. We walk up the stack until we
* find a state which will accept TK_error.
* We then shift in TK_error and see what state
break;
tos = &p.stack[p.tos-1];
tos->indents += indents;
+ exit(1);
}
free(tk);
pop(&p, p.tos, NULL, do_free);
As `scanner` provides a number parsing function using `libgmp`, it is not
much work to perform arbitrary rational number calculations.
-This calculator takes one expression, or an equality test per line. The
+This calculator takes one expression, or an equality test, per line. The
results are printed and if any equality test fails, the program exits with
an error.
# calc: grammar
+ $LEFT * /
+ $LEFT + -
+
Session -> Session Line
| Line
| ERROR NEWLINE ${ printf("Skipped a bad line\n"); }$
$number
- Expression -> Expression + Term ${ mpq_init($0.val); mpq_add($0.val, $1.val, $3.val); }$
- | Expression - Term ${ mpq_init($0.val); mpq_sub($0.val, $1.val, $3.val); }$
- | Term ${ mpq_init($0.val); mpq_set($0.val, $1.val); }$
-
- Term -> Term * Factor ${ mpq_init($0.val); mpq_mul($0.val, $1.val, $3.val); }$
- | Term / Factor ${ mpq_init($0.val); mpq_div($0.val, $1.val, $3.val); }$
- | Factor ${ mpq_init($0.val); mpq_set($0.val, $1.val); }$
-
- Factor -> NUMBER ${ if (number_parse($0.val, $0.tail, $1.txt) == 0) mpq_init($0.val); }$
+ Expression -> Expression + Expression ${ mpq_init($0.val); mpq_add($0.val, $1.val, $3.val); }$
+ | Expression - Expression ${ mpq_init($0.val); mpq_sub($0.val, $1.val, $3.val); }$
+ | Expression * Expression ${ mpq_init($0.val); mpq_mul($0.val, $1.val, $3.val); }$
+ | Expression / Expression ${ mpq_init($0.val); mpq_div($0.val, $1.val, $3.val); }$
+ | NUMBER ${ if (number_parse($0.val, $0.tail, $1.txt) == 0) mpq_init($0.val); }$
| ( Expression ) ${ mpq_init($0.val); mpq_set($0.val, $2.val); }$