Ticket #9135: spirit-typo.patch
File spirit-typo.patch, 11.1 KB (added 9 years ago)
libs/spirit/doc/karma/numeric.qbk
@@ -1043,7 +1043,7 @@
 ]
 
 [tip The easiest way to implement a proper real number formatting policy is
-to derive a new type from the t he type `real_policies<>` while overriding
+to derive a new type from the type `real_policies<>` while overriding
 the aspects of the formatting which need to be changed.]
 
 
libs/spirit/doc/qi/numeric.qbk
@@ -813,7 +813,7 @@
 [heading `RealPolicies` Specializations]
 
 The easiest way to implement a proper real parsing policy is to derive a
-new type from the t he type `real_policies` while overriding the aspects
+new type from the type `real_policies` while overriding the aspects
 of the parsing which need to be changed. For example, here's the
 implementation of the predefined `strict_real_policies`:
 
@@ -1026,7 +1026,7 @@
 [heading Boolean `Policies` Specializations]
 
 The easiest way to implement a proper boolean parsing policy is to derive a
-new type from the t he type `bool_policies` while overriding the aspects
+new type from the type `bool_policies` while overriding the aspects
 of the parsing which need to be changed. For example, here's the
 implementation of a boolean parsing policy interpreting the string `"eurt"`
 (i.e. "true" spelled backwards) as `false`:
libs/spirit/example/lex/example1.cpp
@@ -106,7 +106,7 @@
     iterator_type iter = lex.begin(it, str.end());
     iterator_type end = lex.end();
 
-    // Parsing is done based on the t he token stream, not the character
+    // Parsing is done based on the token stream, not the character
     // stream read from the input.
     // Note, how we use the token_def defined above as the skip parser. It must
     // be explicitly wrapped inside a state directive, switching the lexer
libs/spirit/example/lex/example2.cpp
@@ -143,7 +143,7 @@
     iterator_type iter = tokens.begin(it, str.end());
     iterator_type end = tokens.end();
 
-    // Parsing is done based on the t he token stream, not the character
+    // Parsing is done based on the token stream, not the character
     // stream read from the input.
     bool r = qi::parse(iter, end, calc);
 
libs/spirit/example/lex/example3.cpp
@@ -127,7 +127,7 @@
     iterator_type iter = tokens.begin(it, str.end());
     iterator_type end = tokens.end();
 
-    // Parsing is done based on the t he token stream, not the character
+    // Parsing is done based on the token stream, not the character
     // stream read from the input.
     // Note how we use the lexer defined above as the skip parser.
     bool r = qi::phrase_parse(iter, end, calc, qi::in_state("WS")[tokens.self]);
libs/spirit/example/lex/example4.cpp
@@ -202,7 +202,7 @@
     iterator_type iter = tokens.begin(it, str.end());
     iterator_type end = tokens.end();
 
-    // Parsing is done based on the t he token stream, not the character
+    // Parsing is done based on the token stream, not the character
     // stream read from the input.
     // Note how we use the lexer defined above as the skip parser. It must
     // be explicitly wrapped inside a state directive, switching the lexer
libs/spirit/example/lex/example5.cpp
@@ -247,7 +247,7 @@
     iterator_type iter = tokens.begin(it, str.end());
     iterator_type end = tokens.end();
 
-    // Parsing is done based on the t he token stream, not the character
+    // Parsing is done based on the token stream, not the character
     // stream read from the input.
     // Note how we use the lexer defined above as the skip parser. It must
     // be explicitly wrapped inside a state directive, switching the lexer
libs/spirit/example/lex/example6.cpp
@@ -223,7 +223,7 @@
     iterator_type iter = tokens.begin(it, str.end());
     iterator_type end = tokens.end();
 
-    // Parsing is done based on the t he token stream, not the character
+    // Parsing is done based on the token stream, not the character
     // stream read from the input.
     // Note how we use the lexer defined above as the skip parser. It must
     // be explicitly wrapped inside a state directive, switching the lexer
libs/spirit/example/lex/lexer_debug_support.cpp
@@ -84,7 +84,7 @@
     language_tokens<lexer_type> tokenizer;          // Our lexer
     language_grammar<iterator_type> g (tokenizer);  // Our parser
 
-    // Parsing is done based on the t he token stream, not the character
+    // Parsing is done based on the token stream, not the character
     // stream read from the input.
     std::string str ("float f = 3.4\nint i = 6\n");
     base_iterator_type first = str.begin();
libs/spirit/example/lex/print_number_tokenids.cpp
@@ -94,7 +94,7 @@
     print_numbers_tokenids<lexer_type> print_tokens;  // Our lexer
     print_numbers_grammar<iterator_type> print;       // Our parser
 
-    // Parsing is done based on the t he token stream, not the character
+    // Parsing is done based on the token stream, not the character
     // stream read from the input.
     std::string str (read_from_file(1 == argc ? "print_numbers.input" : argv[1]));
     base_iterator_type first = str.begin();
libs/spirit/example/lex/print_numbers.cpp
@@ -91,7 +91,7 @@
     print_numbers_tokens<lexer_type> print_tokens;  // Our lexer
     print_numbers_grammar<iterator_type> print;     // Our parser
 
-    // Parsing is done based on the t he token stream, not the character
+    // Parsing is done based on the token stream, not the character
     // stream read from the input.
     std::string str (read_from_file(1 == argc ? "print_numbers.input" : argv[1]));
     base_iterator_type first = str.begin();
libs/spirit/example/lex/static_lexer/word_count_static.cpp
@@ -103,7 +103,7 @@
     char const* first = str.c_str();
     char const* last = &first[str.size()];
 
-    // Parsing is done based on the t he token stream, not the character stream.
+    // Parsing is done based on the token stream, not the character stream.
     bool r = lex::tokenize_and_parse(first, last, word_count, g);
 
     if (r) {    // success
libs/spirit/example/lex/strip_comments.cpp
@@ -135,7 +135,7 @@
     strip_comments_tokens<lexer_type> strip_comments;           // Our lexer
     strip_comments_grammar<iterator_type> g (strip_comments);   // Our parser
 
-    // Parsing is done based on the t he token stream, not the character
+    // Parsing is done based on the token stream, not the character
     // stream read from the input.
     std::string str (read_from_file(1 == argc ? "strip_comments.input" : argv[1]));
     base_iterator_type first = str.begin();
libs/spirit/example/lex/strip_comments.input
@@ -134,7 +134,7 @@
     strip_comments_tokens<lexer_type> strip_comments;           // Our lexer
     strip_comments_grammar<iterator_type> g (strip_comments);   // Our grammar
 
-    // Parsing is done based on the t he token stream, not the character
+    // Parsing is done based on the token stream, not the character
     // stream read from the input.
     std::string str (read_from_file(1 == argc ? "strip_comments.input" : argv[1]));
     base_iterator_type first = str.begin();
libs/spirit/example/lex/word_count.cpp
@@ -146,7 +146,7 @@
     char const* first = str.c_str();
     char const* last = &first[str.size()];
 
-    /*< Parsing is done based on the t he token stream, not the character
+    /*< Parsing is done based on the token stream, not the character
         stream read from the input. The function `tokenize_and_parse()` wraps
         the passed iterator range `[first, last)` by the lexical analyzer and
         uses its exposed iterators to parse the toke stream.
libs/spirit/repository/doc/qi/keywords.qbk
@@ -13,7 +13,7 @@
 
 The keyword list operator, `kwd("k1")[a] / kwd("k2")[b]`, works tightly with the kwd, ikwd, dkwd and idkwd directives
 to effeciently match keyword lists. As long as one of the keywords specified through the kwd, ikwd, dkwd or idkwd directive
-matches, the keyword will be immediatly followed by the thekeyword's associated subject parser.
+matches, the keyword will be immediatly followed by the keyword's associated subject parser.
 The parser will continue parsing input as long as the one of the keywords and it's associated parser succeed.
 Writing :
 (kwd("k1")[a] / kwd("k2")[b] / ... )