Ticket #9135: spirit-typo.patch

File spirit-typo.patch, 11.1 KB (added by mlang@…, 9 years ago)

The patch.

  • libs/spirit/doc/karma/numeric.qbk

     
    @@ -1043,7 +1043,7 @@
     ]
     
     [tip  The easiest way to implement a proper real number formatting policy is
    -      to derive a new type from the the type `real_policies<>` while overriding
    +      to derive a new type from the type `real_policies<>` while overriding
           the aspects of the formatting which need to be changed.]
     
     
  • libs/spirit/doc/qi/numeric.qbk

     
    @@ -813,7 +813,7 @@
     [heading `RealPolicies` Specializations]
     
     The easiest way to implement a proper real parsing policy is to derive a
    -new type from the the type `real_policies` while overriding the aspects
    +new type from the type `real_policies` while overriding the aspects
     of the parsing which need to be changed. For example, here's the
     implementation of the predefined `strict_real_policies`:
     
     
    @@ -1026,7 +1026,7 @@
     [heading Boolean `Policies` Specializations]
     
     The easiest way to implement a proper boolean parsing policy is to derive a
    -new type from the the type `bool_policies` while overriding the aspects
    +new type from the type `bool_policies` while overriding the aspects
     of the parsing which need to be changed. For example, here's the
     implementation of a boolean parsing policy interpreting the string `"eurt"`
     (i.e. "true" spelled backwards) as `false`:
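    The qi/numeric.qbk hunks above cut off just before the implementations they
    announce. For context, the "derive and override" pattern the documentation
    describes can be sketched as follows. Note this is a self-contained mock, not
    Boost.Spirit's actual `real_policies<>` interface: the `expect_dot` member and
    the toy `accepts()` parser are illustrative stand-ins showing how a derived
    policy class overrides one static knob of its base.

    ```cpp
    #include <cassert>
    #include <string>

    // Simplified stand-in for a Spirit-style policy class: static members
    // customize parser behavior. (Mock interface, for illustration only.)
    template <typename T>
    struct real_policies
    {
        static bool const expect_dot = false;   // "3" is accepted as a real
    };

    // The pattern the docs describe: derive a new type from the policy base
    // and override only the aspect that needs to change.
    template <typename T>
    struct strict_real_policies : real_policies<T>
    {
        static bool const expect_dot = true;    // "3" rejected, "3.0" accepted
    };

    // A toy "parser" that consults the policy at compile time.
    template <typename Policies>
    bool accepts(std::string const& s)
    {
        bool const has_dot = s.find('.') != std::string::npos;
        return !Policies::expect_dot || has_dot;
    }

    int main()
    {
        assert(accepts<real_policies<double>>("3"));          // lax policy
        assert(!accepts<strict_real_policies<double>>("3"));  // strict: dot required
        assert(accepts<strict_real_policies<double>>("3.0"));
        return 0;
    }
    ```

    The `bool_policies` case in the second hunk follows the same shape: a derived
    policy overrides one parsing hook while inheriting everything else.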
  • libs/spirit/example/lex/example1.cpp

     
    @@ -106,7 +106,7 @@
         iterator_type iter = lex.begin(it, str.end());
         iterator_type end = lex.end();
     
    -    // Parsing is done based on the the token stream, not the character
    +    // Parsing is done based on the token stream, not the character
         // stream read from the input.
         // Note, how we use the token_def defined above as the skip parser. It must
         // be explicitly wrapped inside a state directive, switching the lexer
  • libs/spirit/example/lex/example2.cpp

     
    @@ -143,7 +143,7 @@
         iterator_type iter = tokens.begin(it, str.end());
         iterator_type end = tokens.end();
     
    -    // Parsing is done based on the the token stream, not the character
    +    // Parsing is done based on the token stream, not the character
         // stream read from the input.
         bool r = qi::parse(iter, end, calc);
     
  • libs/spirit/example/lex/example3.cpp

     
    @@ -127,7 +127,7 @@
         iterator_type iter = tokens.begin(it, str.end());
         iterator_type end = tokens.end();
     
    -    // Parsing is done based on the the token stream, not the character
    +    // Parsing is done based on the token stream, not the character
         // stream read from the input.
         // Note how we use the lexer defined above as the skip parser.
         bool r = qi::phrase_parse(iter, end, calc, qi::in_state("WS")[tokens.self]);
  • libs/spirit/example/lex/example4.cpp

     
    @@ -202,7 +202,7 @@
         iterator_type iter = tokens.begin(it, str.end());
         iterator_type end = tokens.end();
     
    -    // Parsing is done based on the the token stream, not the character
    +    // Parsing is done based on the token stream, not the character
         // stream read from the input.
         // Note how we use the lexer defined above as the skip parser. It must
         // be explicitly wrapped inside a state directive, switching the lexer
  • libs/spirit/example/lex/example5.cpp

     
    @@ -247,7 +247,7 @@
         iterator_type iter = tokens.begin(it, str.end());
         iterator_type end = tokens.end();
     
    -    // Parsing is done based on the the token stream, not the character
    +    // Parsing is done based on the token stream, not the character
         // stream read from the input.
         // Note how we use the lexer defined above as the skip parser. It must
         // be explicitly wrapped inside a state directive, switching the lexer
  • libs/spirit/example/lex/example6.cpp

     
    @@ -223,7 +223,7 @@
         iterator_type iter = tokens.begin(it, str.end());
         iterator_type end = tokens.end();
     
    -    // Parsing is done based on the the token stream, not the character
    +    // Parsing is done based on the token stream, not the character
         // stream read from the input.
         // Note how we use the lexer defined above as the skip parser. It must
         // be explicitly wrapped inside a state directive, switching the lexer
  • libs/spirit/example/lex/lexer_debug_support.cpp

     
    @@ -84,7 +84,7 @@
         language_tokens<lexer_type> tokenizer;           // Our lexer
         language_grammar<iterator_type> g (tokenizer);   // Our parser
     
    -    // Parsing is done based on the the token stream, not the character
    +    // Parsing is done based on the token stream, not the character
         // stream read from the input.
         std::string str ("float f = 3.4\nint i = 6\n");
         base_iterator_type first = str.begin();
  • libs/spirit/example/lex/print_number_tokenids.cpp

     
    @@ -94,7 +94,7 @@
         print_numbers_tokenids<lexer_type> print_tokens;  // Our lexer
         print_numbers_grammar<iterator_type> print;       // Our parser
     
    -    // Parsing is done based on the the token stream, not the character
    +    // Parsing is done based on the token stream, not the character
         // stream read from the input.
         std::string str (read_from_file(1 == argc ? "print_numbers.input" : argv[1]));
         base_iterator_type first = str.begin();
  • libs/spirit/example/lex/print_numbers.cpp

     
    @@ -91,7 +91,7 @@
         print_numbers_tokens<lexer_type> print_tokens;    // Our lexer
         print_numbers_grammar<iterator_type> print;       // Our parser
     
    -    // Parsing is done based on the the token stream, not the character
    +    // Parsing is done based on the token stream, not the character
         // stream read from the input.
         std::string str (read_from_file(1 == argc ? "print_numbers.input" : argv[1]));
         base_iterator_type first = str.begin();
  • libs/spirit/example/lex/static_lexer/word_count_static.cpp

     
    @@ -103,7 +103,7 @@
         char const* first = str.c_str();
         char const* last = &first[str.size()];
     
    -    // Parsing is done based on the the token stream, not the character stream.
    +    // Parsing is done based on the token stream, not the character stream.
         bool r = lex::tokenize_and_parse(first, last, word_count, g);
     
         if (r) {    // success
  • libs/spirit/example/lex/strip_comments.cpp

     
    @@ -135,7 +135,7 @@
         strip_comments_tokens<lexer_type> strip_comments;           // Our lexer
         strip_comments_grammar<iterator_type> g (strip_comments);   // Our parser
     
    -    // Parsing is done based on the the token stream, not the character
    +    // Parsing is done based on the token stream, not the character
         // stream read from the input.
         std::string str (read_from_file(1 == argc ? "strip_comments.input" : argv[1]));
         base_iterator_type first = str.begin();
  • libs/spirit/example/lex/strip_comments.input

     
    @@ -134,7 +134,7 @@
         strip_comments_tokens<lexer_type> strip_comments;           // Our lexer
         strip_comments_grammar<iterator_type> g (strip_comments);   // Our grammar
     
    -    // Parsing is done based on the the token stream, not the character
    +    // Parsing is done based on the token stream, not the character
         // stream read from the input.
         std::string str (read_from_file(1 == argc ? "strip_comments.input" : argv[1]));
         base_iterator_type first = str.begin();
  • libs/spirit/example/lex/word_count.cpp

     
    @@ -146,7 +146,7 @@
         char const* first = str.c_str();
         char const* last = &first[str.size()];
     
    -/*<  Parsing is done based on the the token stream, not the character
    +/*<  Parsing is done based on the token stream, not the character
         stream read from the input. The function `tokenize_and_parse()` wraps
         the passed iterator range `[first, last)` by the lexical analyzer and
         uses its exposed iterators to parse the toke stream.
  • libs/spirit/repository/doc/qi/keywords.qbk

     
    @@ -13,7 +13,7 @@
     
     The keyword list operator, `kwd("k1")[a] / kwd("k2")[b]`,  works tightly with the kwd, ikwd, dkwd and idkwd directives
     to effeciently match keyword lists. As long as one of the keywords specified through the kwd, ikwd, dkwd or idkwd directive
    -matches, the keyword will be immediatly followed by the the keyword's associated subject parser.
    +matches, the keyword will be immediatly followed by the keyword's associated subject parser.
     The parser will continue parsing input as long as the one of the keywords and it's associated parser succeed.
     Writing :
     (kwd("k1")[a] / kwd("k2")[b] / ... )