#7090 closed Feature Requests (wontfix)
Provide an option to parse only preprocessor directives
Reported by: | | Owned by: | Hartmut Kaiser
---|---|---|---
Milestone: | To Be Determined | Component: | wave
Version: | Boost 1.49.0 | Severity: | Optimization
Keywords: | | Cc: |
Description
I use boost::wave to preprocess shader source files. I have a macro system that allows me to execute certain things if the code path the macro is in "survives" the preprocessing step. So far, so good.
My concern is the speed. One source file of about 100 lines takes about 5 ms to parse. From what I read in the docs, boost::wave uses a cpp-lexer to parse _all_ code tokens. But for my application this is not necessary: what I need is _not_ a list of all tokens (that I could assemble into a result string), I want only the result string. Is there a way to turn this off? I have looked through all the samples and tests but did not find a way to parse only preprocessor directives and ignore the rest of the file.
I have also read on this bug tracker that #define BOOST_WAVE_SUPPORT_THREADING 0 is supposed to make it much faster. However, this is not an option for me because I can't ship the Boost libraries; I expect the users to have them installed already.
Change History (11)
comment:1 by , 10 years ago
comment:2 by , 10 years ago (follow-up: comment:4)
If this is not possible, I would appreciate a suggestion for an alternative library that is faster, or some tips on how to make my own (is it possible to reuse code from boost::wave for that?). What I need is only the preprocessor; I want the actual code tokens passed through unchanged, so it should be possible to make a much faster library.
comment:3 by , 10 years ago
Replying to scrawl123@…:
My concern is the speed. One source file of about 100 lines takes about 5 ms to parse. From what I read in the docs, boost::wave uses a cpp-lexer to parse _all_ code tokens. But for my application this is not necessary: what I need is _not_ a list of all tokens (that I could assemble into a result string), I want only the result string. Is there a way to turn this off? I have looked through all the samples and tests but did not find a way to parse only preprocessor directives and ignore the rest of the file.
That's by design, so there is no way to turn it off.
I have also read on this bug tracker that #define BOOST_WAVE_SUPPORT_THREADING 0 is supposed to make it much faster. However, this is not an option for me because I can't ship the Boost libraries; I expect the users to have them installed already.
The Wave libraries shipped with Boost are generated from small cpp files which essentially just explicitly instantiate a couple of templates (see $BOOST_ROOT/libs/wave/src). You could easily add those 5 or 6 cpp files to your project and be independent of the Boost.Wave binaries a user might have installed.
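A minimal sketch of what that could look like with g++ (the file names below are taken from a typical Boost source tree's libs/wave/src directory and the exact set may differ between Boost releases, so treat them as an assumption to verify against your own tree):

```shell
# Compile the Wave template-instantiation sources directly into your
# project instead of linking against the prebuilt Boost.Wave binary.
# Assumes $BOOST_ROOT points at an unpacked Boost source tree.
g++ -c -I"$BOOST_ROOT" \
    "$BOOST_ROOT"/libs/wave/src/instantiate_cpp_exprgrammar.cpp \
    "$BOOST_ROOT"/libs/wave/src/instantiate_cpp_grammar.cpp \
    "$BOOST_ROOT"/libs/wave/src/instantiate_cpp_literalgrs.cpp \
    "$BOOST_ROOT"/libs/wave/src/instantiate_defined_grammar.cpp \
    "$BOOST_ROOT"/libs/wave/src/instantiate_re2c_lexer.cpp \
    "$BOOST_ROOT"/libs/wave/src/instantiate_re2c_lexer_str.cpp
```

The resulting object files then link into your application like any other translation unit.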
comment:4 by , 10 years ago
Resolution: | → wontfix
---|---
Status: | new → closed
Replying to scrawl123@…:
If this is not possible, I would appreciate a suggestion for an alternative library that is faster, or some tips on how to make my own (is it possible to reuse code from boost::wave for that?). What I need is only the preprocessor; I want the actual code tokens passed through unchanged, so it should be possible to make a much faster library.
I'm not aware of any library solution you could reuse, but there are definitely some open source preprocessors around you could have a look at. As a first step I'd disable threading in Wave as outlined above and see how fast it gets after that change.
comment:5 by , 10 years ago (follow-up: comment:6)
Thanks for your answer, I will try disabling threading tomorrow.
I'm not aware of any library solution you could reuse, but there are definitely some open source preprocessors around you could have a look at.
Can you give me some hints? The only other one I've found is libcpp (used by gcc), but it's written in C and I can't find any examples of how to use it; reading the header doesn't explain anything either.
comment:6 by , 10 years ago
Replying to scrawl <scrawl123@…>:
I'm not aware of any library solution you could reuse, but there are definitely some open source preprocessors around you could have a look at.
Can you give me some hints? The only other one I've found is libcpp (used by gcc), but it's written in C and I can't find any examples of how to use it; reading the header doesn't explain anything either.
Well, there is clang, which has a separate preprocessor compiler pass, and there is mcpp (http://mcpp.sourceforge.net/). Other than that, there is not much I know of. However, you might look at the various open-source C/C++ compilers available.
comment:7 by , 10 years ago (follow-up: comment:8)
mcpp is 10-15x faster than boost::wave, even though I had to write my source string to a file first, because mcpp apparently accepts only file input, not a string from memory.
Here is a quick guide; maybe it will help someone:
- Build mcpp-2.7.2/src as a static library with -DMCPP_LIB
- #include "../mcpp/mcpp_lib.h"
- Add code along the following lines; in this example my method gets passed a std::vector<std::string> of definitions and a std::string include path:

      mcpp_use_mem_buffers(1);  // keep preprocessor output in memory

      // Build an argv-style argument list:
      //   mcpp <file> -I <includePath> -D <definition> ...
      std::vector<std::string> arg_strings;
      arg_strings.push_back("mcpp");
      arg_strings.push_back("/tmp/test.shader");  // file you want to process
      arg_strings.push_back("-I");
      arg_strings.push_back(includePath);
      for (std::vector<std::string>::iterator it = definitions.begin();
           it != definitions.end(); ++it)
      {
          arg_strings.push_back("-D");
          arg_strings.push_back(*it);
      }

      // mcpp_lib_main() expects a mutable char* array
      std::vector<char*> args;
      for (std::size_t i = 0; i < arg_strings.size(); ++i)
          args.push_back(const_cast<char*>(arg_strings[i].c_str()));

      mcpp_lib_main(static_cast<int>(args.size()), &args[0]);
      char* result = mcpp_get_mem_buffer(OUT);
comment:8 by , 10 years ago
Replying to scrawl <scrawl123@…>:
mcpp is 10-15x faster than boost::wave, even though I had to write my source string to a file first, because mcpp apparently accepts only file input, not a string from memory.
FWIW, using Wave with threading disabled should make the times for both libraries comparable (with Wave probably remaining slower by a factor of 2, but that's because Wave does much more work in the background). The reason for this dramatic difference is that Wave uses Spirit V1, which is utterly inefficient when threading is enabled.
comment:9 by , 10 years ago (follow-up: comment:10)
Discovered an issue with mcpp: the #line directives are not GLSL compliant. Then I looked at glcpp from Mesa, but it doesn't emit #line directives at all. I don't feel like hacking on a horrible bulk of C code, so I guess I'm back to using boost::wave.
From what I can tell, in order to disable threading, I should add the files libs/wave/*.cpp to my project, and modify the includes to use my own wave_config.hpp instead of the one from the boost sources? Is that correct?
comment:10 by , 10 years ago
Replying to scrawl <scrawl123@…>:
Discovered an issue with mcpp: the #line directives are not GLSL compliant. Then I looked at glcpp from Mesa, but it doesn't emit #line directives at all. I don't feel like hacking on a horrible bulk of C code, so I guess I'm back to using boost::wave.
Cool.
From what I can tell, in order to disable threading, I should add the files libs/wave/*.cpp to my project,
Yes.
and modify the includes to use my own wave_config.hpp instead of the one from the boost sources? Is that correct?
There is no need to modify the sources. If you look at wave_config.hpp
you'll see that all configuration constants (such as threading) are handled as
    #if !defined(BOOST_WAVE_SUPPORT_THREADING)
    #if defined(BOOST_HAS_THREADS)
    #define BOOST_WAVE_SUPPORT_THREADING 1
    #else
    #define BOOST_WAVE_SUPPORT_THREADING 0
    #endif
    #endif
Thus all you have to do is pass -DBOOST_WAVE_SUPPORT_THREADING=0 on the command line while compiling.
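A sketch of such an invocation, assuming a g++ toolchain (the source file name is a placeholder; adapt the flag to whatever build system you use):

```shell
# Force Boost.Wave's threading support off at compile time; this must be
# applied to every translation unit that includes Wave headers, including
# the libs/wave/src instantiation files added to the project.
g++ -DBOOST_WAVE_SUPPORT_THREADING=0 -I"$BOOST_ROOT" -c my_preprocessor.cpp
```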
comment:11 by , 10 years ago
Thank you, it worked, and integrating the custom_line_directives sample with my code was surprisingly easy :)
5 ms does not sound like much, but if I do that for each shader permutation (there are usually hundreds of permutations for one uber-shader), it adds up to a loading time of a few seconds. This is ridiculous, considering that the _actual_ compiling on the GPU takes only about 0.05 ms per shader (as opposed to 5 ms for preprocessing).