Index: libs/spirit/doc/karma/numeric.qbk
===================================================================
--- libs/spirit/doc/karma/numeric.qbk	(Revision 85789)
+++ libs/spirit/doc/karma/numeric.qbk	(Arbeitskopie)
@@ -1043,7 +1043,7 @@
 ]
 [tip The easiest way to implement a proper real number formatting policy is
-     to derive a new type from the the type `real_policies<>` while overriding
+     to derive a new type from the type `real_policies<>` while overriding
      the aspects of the formatting which need to be changed.]
Index: libs/spirit/doc/qi/numeric.qbk
===================================================================
--- libs/spirit/doc/qi/numeric.qbk	(Revision 85789)
+++ libs/spirit/doc/qi/numeric.qbk	(Arbeitskopie)
@@ -813,7 +813,7 @@
 [heading `RealPolicies` Specializations]
 The easiest way to implement a proper real parsing policy is to derive a
-new type from the the type `real_policies` while overriding the aspects
+new type from the type `real_policies` while overriding the aspects
 of the parsing which need to be changed. For example, here's the
 implementation of the predefined `strict_real_policies`:
@@ -1026,7 +1026,7 @@
 [heading Boolean `Policies` Specializations]
 The easiest way to implement a proper boolean parsing policy is to derive a
-new type from the the type `bool_policies` while overriding the aspects
+new type from the type `bool_policies` while overriding the aspects
 of the parsing which need to be changed. For example, here's the
 implementation of a boolean parsing policy interpreting the string `"eurt"` (i.e.
 "true" spelled backwards) as `false`:
Index: libs/spirit/example/lex/example1.cpp
===================================================================
--- libs/spirit/example/lex/example1.cpp	(Revision 85789)
+++ libs/spirit/example/lex/example1.cpp	(Arbeitskopie)
@@ -106,7 +106,7 @@
     iterator_type iter = lex.begin(it, str.end());
     iterator_type end = lex.end();
-    // Parsing is done based on the the token stream, not the character
+    // Parsing is done based on the token stream, not the character
     // stream read from the input.
     // Note, how we use the token_def defined above as the skip parser. It must
     // be explicitly wrapped inside a state directive, switching the lexer
Index: libs/spirit/example/lex/example2.cpp
===================================================================
--- libs/spirit/example/lex/example2.cpp	(Revision 85789)
+++ libs/spirit/example/lex/example2.cpp	(Arbeitskopie)
@@ -143,7 +143,7 @@
     iterator_type iter = tokens.begin(it, str.end());
     iterator_type end = tokens.end();
-    // Parsing is done based on the the token stream, not the character
+    // Parsing is done based on the token stream, not the character
     // stream read from the input.
     bool r = qi::parse(iter, end, calc);
Index: libs/spirit/example/lex/example3.cpp
===================================================================
--- libs/spirit/example/lex/example3.cpp	(Revision 85789)
+++ libs/spirit/example/lex/example3.cpp	(Arbeitskopie)
@@ -127,7 +127,7 @@
     iterator_type iter = tokens.begin(it, str.end());
     iterator_type end = tokens.end();
-    // Parsing is done based on the the token stream, not the character
+    // Parsing is done based on the token stream, not the character
     // stream read from the input.
     // Note how we use the lexer defined above as the skip parser.
     bool r = qi::phrase_parse(iter, end, calc, qi::in_state("WS")[tokens.self]);
Index: libs/spirit/example/lex/example4.cpp
===================================================================
--- libs/spirit/example/lex/example4.cpp	(Revision 85789)
+++ libs/spirit/example/lex/example4.cpp	(Arbeitskopie)
@@ -202,7 +202,7 @@
     iterator_type iter = tokens.begin(it, str.end());
     iterator_type end = tokens.end();
-    // Parsing is done based on the the token stream, not the character
+    // Parsing is done based on the token stream, not the character
     // stream read from the input.
     // Note how we use the lexer defined above as the skip parser. It must
     // be explicitly wrapped inside a state directive, switching the lexer
Index: libs/spirit/example/lex/example5.cpp
===================================================================
--- libs/spirit/example/lex/example5.cpp	(Revision 85789)
+++ libs/spirit/example/lex/example5.cpp	(Arbeitskopie)
@@ -247,7 +247,7 @@
     iterator_type iter = tokens.begin(it, str.end());
     iterator_type end = tokens.end();
-    // Parsing is done based on the the token stream, not the character
+    // Parsing is done based on the token stream, not the character
     // stream read from the input.
     // Note how we use the lexer defined above as the skip parser. It must
     // be explicitly wrapped inside a state directive, switching the lexer
Index: libs/spirit/example/lex/example6.cpp
===================================================================
--- libs/spirit/example/lex/example6.cpp	(Revision 85789)
+++ libs/spirit/example/lex/example6.cpp	(Arbeitskopie)
@@ -223,7 +223,7 @@
     iterator_type iter = tokens.begin(it, str.end());
    iterator_type end = tokens.end();
-    // Parsing is done based on the the token stream, not the character
+    // Parsing is done based on the token stream, not the character
     // stream read from the input.
     // Note how we use the lexer defined above as the skip parser. It must
     // be explicitly wrapped inside a state directive, switching the lexer
Index: libs/spirit/example/lex/lexer_debug_support.cpp
===================================================================
--- libs/spirit/example/lex/lexer_debug_support.cpp	(Revision 85789)
+++ libs/spirit/example/lex/lexer_debug_support.cpp	(Arbeitskopie)
@@ -84,7 +84,7 @@
     language_tokens tokenizer;          // Our lexer
     language_grammar g (tokenizer);     // Our parser
-    // Parsing is done based on the the token stream, not the character
+    // Parsing is done based on the token stream, not the character
     // stream read from the input.
     std::string str ("float f = 3.4\nint i = 6\n");
     base_iterator_type first = str.begin();
Index: libs/spirit/example/lex/print_number_tokenids.cpp
===================================================================
--- libs/spirit/example/lex/print_number_tokenids.cpp	(Revision 85789)
+++ libs/spirit/example/lex/print_number_tokenids.cpp	(Arbeitskopie)
@@ -94,7 +94,7 @@
     print_numbers_tokenids print_tokens;          // Our lexer
     print_numbers_grammar print;                  // Our parser
-    // Parsing is done based on the the token stream, not the character
+    // Parsing is done based on the token stream, not the character
     // stream read from the input.
     std::string str (read_from_file(1 == argc ? "print_numbers.input" : argv[1]));
     base_iterator_type first = str.begin();
Index: libs/spirit/example/lex/print_numbers.cpp
===================================================================
--- libs/spirit/example/lex/print_numbers.cpp	(Revision 85789)
+++ libs/spirit/example/lex/print_numbers.cpp	(Arbeitskopie)
@@ -91,7 +91,7 @@
     print_numbers_tokens print_tokens;            // Our lexer
     print_numbers_grammar print;                  // Our parser
-    // Parsing is done based on the the token stream, not the character
+    // Parsing is done based on the token stream, not the character
     // stream read from the input.
     std::string str (read_from_file(1 == argc ? "print_numbers.input" : argv[1]));
     base_iterator_type first = str.begin();
Index: libs/spirit/example/lex/static_lexer/word_count_static.cpp
===================================================================
--- libs/spirit/example/lex/static_lexer/word_count_static.cpp	(Revision 85789)
+++ libs/spirit/example/lex/static_lexer/word_count_static.cpp	(Arbeitskopie)
@@ -103,7 +103,7 @@
     char const* first = str.c_str();
     char const* last = &first[str.size()];
-    // Parsing is done based on the the token stream, not the character stream.
+    // Parsing is done based on the token stream, not the character stream.
     bool r = lex::tokenize_and_parse(first, last, word_count, g);
     if (r) {    // success
Index: libs/spirit/example/lex/strip_comments.cpp
===================================================================
--- libs/spirit/example/lex/strip_comments.cpp	(Revision 85789)
+++ libs/spirit/example/lex/strip_comments.cpp	(Arbeitskopie)
@@ -135,7 +135,7 @@
     strip_comments_tokens strip_comments;         // Our lexer
     strip_comments_grammar g (strip_comments);    // Our parser
-    // Parsing is done based on the the token stream, not the character
+    // Parsing is done based on the token stream, not the character
     // stream read from the input.
     std::string str (read_from_file(1 == argc ? "strip_comments.input" : argv[1]));
     base_iterator_type first = str.begin();
Index: libs/spirit/example/lex/strip_comments.input
===================================================================
--- libs/spirit/example/lex/strip_comments.input	(Revision 85789)
+++ libs/spirit/example/lex/strip_comments.input	(Arbeitskopie)
@@ -134,7 +134,7 @@
     strip_comments_tokens strip_comments;         // Our lexer
     strip_comments_grammar g (strip_comments);    // Our grammar
-    // Parsing is done based on the the token stream, not the character
+    // Parsing is done based on the token stream, not the character
     // stream read from the input.
     std::string str (read_from_file(1 == argc ? "strip_comments.input" : argv[1]));
     base_iterator_type first = str.begin();
Index: libs/spirit/example/lex/word_count.cpp
===================================================================
--- libs/spirit/example/lex/word_count.cpp	(Revision 85789)
+++ libs/spirit/example/lex/word_count.cpp	(Arbeitskopie)
@@ -146,7 +146,7 @@
     char const* first = str.c_str();
     char const* last = &first[str.size()];
-/*< Parsing is done based on the the token stream, not the character
+/*< Parsing is done based on the token stream, not the character
     stream read from the input. The function `tokenize_and_parse()` wraps
     the passed iterator range `[first, last)` by the lexical analyzer and
     uses its exposed iterators to parse the toke stream.
Index: libs/spirit/repository/doc/qi/keywords.qbk
===================================================================
--- libs/spirit/repository/doc/qi/keywords.qbk	(Revision 85789)
+++ libs/spirit/repository/doc/qi/keywords.qbk	(Arbeitskopie)
@@ -13,7 +13,7 @@
 The keyword list operator, `kwd("k1")[a] / kwd("k2")[b]`, works tightly with the
 kwd, ikwd, dkwd and idkwd directives to effeciently match keyword lists. As long as
 one of the keywords specified through the kwd, ikwd, dkwd or idkwd directive
-matches, the keyword will be immediatly followed by the the keyword's associated subject parser.
+matches, the keyword will be immediately followed by the keyword's associated subject parser.
 The parser will continue parsing input as long as the one of the keywords
 and it's associated parser succeed.
 Writing : (kwd("k1")[a] / kwd("k2")[b] / ... )