We never need 2 blank lines in a row.
Signed-off-by: NeilBrown <neil@brown.name>
exit(0);
}
-
# indent: grammar
~~~~~~
libmdcode.o : libmdcode.c mdcode.h
$(CC) $(CFLAGS) -c libmdcode.c
-
### File: md2c.c
#include <unistd.h>
return c;
}
-
static char *take_code(char *pos, char *end, char *marker,
struct psection **table, struct text section,
int *line_nop)
struct section *code_extract(char *pos, char *end, code_err_fn error);
-
## Using the library
Now that we can extract code from a document and link it all together,
we write out a file for each appropriate code section, provided there
was no error. And
we are done.
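As a concrete illustration, a minimal client might look something like this.
It assumes the `code_extract()` declaration above, that `struct section`
carries its name in `section` (a `struct text`) and a `next` pointer as in
`mdcode.h`, and that the error callback takes a single message string;
`report_err` and the missing error handling are purely illustrative.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include "mdcode.h"

    /* illustrative error callback: just report the message */
    static void report_err(char *msg)
    {
        fprintf(stderr, "extract: %s\n", msg);
    }

    int main(int argc, char *argv[])
    {
        struct stat st;
        int fd;
        char *doc;
        struct section *s;

        if (argc != 2)
            return 1;
        fd = open(argv[1], O_RDONLY);
        if (fd < 0 || fstat(fd, &st) != 0)
            return 1;
        doc = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        /* list the name of every linked code section */
        for (s = code_extract(doc, doc + st.st_size, report_err);
             s; s = s->next)
            printf("%.*s\n", s->section.len, s->section.txt);
        return 0;
    }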
-
##### client includes
#include <fcntl.h>
###### test list
oceani_tests += "valvar"
-
###### test: valvar
program:
.tmp.code:3:12: error: unhandled parse error: :
oceani: no program found.
-
## Test erroneous command line args
To improve coverage, we want to test correct handling of strange command
line arguments.
./coverage_oceani $${1+"$$@"} > /dev/null 2>&1 ;\
done || true
-
###### test list
oceani_special_tests += "cmd"
oceani_special_tests += "cmd,-zyx"
in scope. It is permanently out of scope now and can be removed from
the "in scope" stack.
-
###### variable fields
int depth, min_depth;
enum { OutScope, PendingScope, CondScope, InScope } scope;
simple value is required, `interp_exec()` will dereference `lval` to
get the value.
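The idea can be captured in a tiny hypothetical helper, assuming
`struct lrval` holds a value copy in `rval` and an optional pointer to
storage in `lval`:

    /* Hypothetical: callers that only need a simple value take the
     * storage pointed to by lval when it is set, otherwise the copy
     * already in rval. */
    static struct value value_of(struct lrval lr)
    {
        return lr.lval ? *lr.lval : lr.rval;
    }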
-
###### core functions
struct lrval {
rv.str = text_join(left.str, right.str);
break;
-
###### value functions
static struct text text_join(struct text a, struct text b)
var_block_close(config2context(config), CloseElse);
}$
-
$*exec
// These scopes are closed in CondSuffix
ForPart -> for OpenScope SimpleStatements ${
c->prog = $<1;
} }$
-
$*binode
Program -> program OpenScope Varlist Block OptNL ${
$0 = new(binode);
`parsergen` program built from the C code in this file can extract
that grammar directly from this file and process it.
-
###### File: parsergen.c
#include <unistd.h>
#include <stdlib.h>
return sl->ss;
}
-
### Setting `nullable`
We set `nullable` on the head symbol for any production for which all
the body symbols (if any) are themselves nullable.
so is word-like. If it can derive a NEWLINE, then we consider it to
be like a line.
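Both `nullable` and `line_like` are naturally computed as fixed points:
keep re-scanning the productions until a pass makes no change. Here is
a minimal sketch of the `nullable` pass; the `struct grammar` and
`struct production` field names are assumptions for illustration only.

    static void set_nullable(struct grammar *g)
    {
        int changed = 1;

        while (changed) {
            int p;

            changed = 0;
            for (p = 0; p < g->production_count; p++) {
                struct production *pr = g->productions[p];
                int s;

                if (pr->head->nullable)
                    continue;
                for (s = 0; s < pr->body_size; s++)
                    if (!pr->body[s]->nullable)
                        break;
                if (s == pr->body_size) {
                    /* every body symbol (possibly none) can derive
                     * the empty string, so the head can too */
                    pr->head->nullable = 1;
                    changed = 1;
                }
            }
        }
    }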
-
Clearly the `TK_newline` token can derive a NEWLINE. Any symbol which
is the head of a production that contains a line_like symbol is also a
line-like symbol. We use a new field `line_like` to record this.
Then the go to sets:
-
static void report_goto(struct grammar *g, struct symset gt)
{
int i;
return cnt;
}
-
## Generating the parser
The exported part of the parser is the `parse_XX` function, where the name
short min_prefix;
};
-
###### functions
static void gen_goto(FILE *f, struct grammar *g)
scanner.c libscanner.c \
libmdcode.o libnumber.o libstring.o -licuuc -lgmp
-
## Basic tests
Some simple tests... maybe all tests are simple.
###### token_next init
int ignored = state->conf->ignored;
-
The different tokens are numbers, words, marks, strings, comments,
newlines, EOF, and indents, each of which is examined in detail below.
continue;
}
-
###### delayed tokens
if (state->check_indent || state->delayed_lines) {
###### token types
TK_eof,
-
###### white space
if (ch == WEOF) {
if (state->col) {
tok->txt.len = 0;
}
-
Tokens may not cross into the next `code_node`, and some tokens can
include the newline at the end of a `code_node`, so we must be able to
easily check if we have reached the end. Equally we need to know if
tok.txt += d;
tok.len -= d;
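A hypothetical helper for that check could look like the following,
assuming the scanner state keeps the current `code_node` and an offset
into its text (field names are illustrative, not the scanner's actual
internals):

    static int at_end_of_node(struct token_state *state)
    {
        /* true once every character of the current node's text has
         * been consumed, or there is no node at all */
        return state->node == NULL ||
               state->offset >= state->node->code.len;
    }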
-
Now that we have the mantissa and the exponent we can multiply them
together, also allowing for the number of digits after the decimal
mark.
If all goes well we check for the possible trailing letters and
return. Return value is 1 for success and 0 for failure.
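The combination step amounts to computing
`mantissa * 10^(exponent - places)`, where `places` counts the digits
seen after the decimal mark. A small sketch using GMP, with hypothetical
names (the real `number_parse()` below folds this into its own flow):

    #include <gmp.h>

    /* num must already have been initialised with mpq_init() */
    static void apply_exponent(mpq_t num, const mpz_t mantissa,
                               long exponent, long places)
    {
        long e = exponent - places;
        mpz_t pow10;
        mpq_t scale;

        mpz_init(pow10);
        mpq_init(scale);
        mpz_ui_pow_ui(pow10, 10, (unsigned long)(e < 0 ? -e : e));
        mpq_set_z(scale, pow10);
        mpq_set_z(num, mantissa);
        if (e < 0)
            /* digits after the mark shrink the value */
            mpq_div(num, num, scale);
        else
            mpq_mul(num, num, scale);
        mpq_clear(scale);
        mpz_clear(pow10);
    }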
-
###### number functions
int number_parse(mpq_t num, char tail[3], struct text tok)
{
libstring.o : libstring.c
$(CC) $(CFLAGS) -c libstring.c
-
## Testing
As "untested code is buggy code" we need a program to easily test