Opened 15 years ago

Last modified 15 years ago

#1560 reopened Feature Requests

Performance testing with boost::test

Reported by: John Pavel <jrp@…> Owned by: Gennadiy Rozental
Milestone: To Be Determined Component: test
Version: Boost Development Trunk Severity: Optimization
Keywords: Cc:

Description

I have been trying to use boost::test to do some performance testing.

With the following macros:

// The basic idea of "test" is to call the function to be timed many 
// times (say up to 1000), throw away the slowest 10% of those times, 
// and average the rest. Why? <shrug> shoot-from-the-hip "theory" and 
// experience that its return is fairly consistent to 2, maybe 3 
// digits. The test function has a "time out" feature where it quits if 
// the accumulated measured time grows beyond 1 second (which may not be 
// appropriate for all tests). But it is easy to get bored when you 
// unexpectedly find yourself waiting 30 minutes for a timing result, so 
// that's why it's there. OTOH, I put in a minimum repetition of at 
// least 10, no matter how long the measured time is, so you get at 
// least some accuracy (tweak knobs as you like). Note that the 
// accumulation/averaging happens in double, even though the data and 
// answer are float (just paranoia really). Weakness: If you're timing 
// something that is quicker than the minimum resolution of your timer, 
// this doesn't work. But otherwise, this is better than the 
// traditional loop inside the timer as it throws away those results 
// that happen to get interrupted by your email checker running. :-) 

#include <algorithm>
#include <numeric>
#include <vector>
#include <boost/timer.hpp>

template <class F> 
double 
time_tests(F f) // f is a function that returns its execution time
{ 
	std::vector<double> t; 

	// Store time of 10 executions of f
	unsigned int i;
	for (i = 0; i < 10; ++i) 
		t.push_back(f()); 

	double total_time = std::accumulate(t.begin(), t.end(), 0.0);

	// Keep running until at least 1s of results are available, or 1000 executions
	while (i < 1000 && total_time < 1.0) 
	{ 
		t.push_back(f()); 
		total_time += t.back(); 
		++i; 
	} 
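	// Throw away the slowest 10% of the measurements and average the rest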
	std::sort(t.begin(), t.end()); 
	t.resize(t.size() * 9 / 10); 
	return std::accumulate(t.begin(), t.end(), 0.0) / t.size(); 
} 


#define TIMED_TEST_CASE( test_name ) \
	double \
	time_test_##test_name() \
	{ \
		boost::timer t; \
		{ \
			test_name##_impl();  \
		} \
		return t.elapsed(); \
	} 

#define TIMED_AUTO_TEST_CASE( test_name ) \
	void \
	test_name##_impl(); \
	\
	TIMED_TEST_CASE( test_name ) \
	\
	BOOST_AUTO_TEST_CASE( test_name ) \
	{ \
		double execution_time = time_test_##test_name(); \
		boost::unit_test::unit_test_log.set_threshold_level( boost::unit_test::log_messages ); \
		BOOST_TEST_MESSAGE(BOOST_TEST_STRINGIZE( test_name ).trim( "\"" ) << " execution time: " << execution_time << "s"); \
		BOOST_CHECK( true ); \
	} \
	\
	inline void test_name##_impl()

I can define tests such as

// Boost.Test
#define BOOST_TEST_MODULE allTests
#include <boost/test/unit_test.hpp>

#include "time_test.h"

#include <vector>


typedef float BaseT;

 
BOOST_AUTO_TEST_SUITE(vectors);

TIMED_AUTO_TEST_CASE( vector_test )
{
	unsigned int const v1_dim = 6;
	unsigned int const v2_dim = 4;
	unsigned int const v3_dim = 65535;

	std::vector<BaseT> v1(v1_dim, 1.0);
	std::vector< std::vector<BaseT> > v2(v2_dim, v1);
	std::vector< std::vector< std::vector<BaseT> > > v3(v3_dim, v2);
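	// Construction of these nested vectors (and their destruction at the
	// end of the test body) is what gets timed.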

	
}

TIMED_AUTO_TEST_CASE( test2 )
{
    BOOST_CHECK( true );
}

TIMED_AUTO_TEST_CASE( test3 )
{
	for (int i=0; i<10000; i++)
	{
		BOOST_CHECK( true );
	}
}


BOOST_AUTO_TEST_SUITE_END();

This works, but it is not particularly elegant. Is there a better solution (e.g., one involving a class that inherits from one of the existing ones)?
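For instance, something fixture-based along the following lines (a sketch only, untested; the timing fixture is my own invention, not part of Boost.Test):

#include <boost/test/unit_test.hpp>
#include <boost/timer.hpp>
#include <vector>

// Times the body of a test case: the constructor runs before the test
// body and the destructor after it, so elapsed() spans the whole body,
// including destruction of the body's locals.
struct timing_fixture
{
	~timing_fixture()
	{
		BOOST_TEST_MESSAGE( "execution time: " << t.elapsed() << "s" );
	}
	boost::timer t;
};

BOOST_FIXTURE_TEST_CASE( vector_test_timed, timing_fixture )
{
	std::vector<float> v( 65535, 1.0f );
	BOOST_CHECK( !v.empty() );
}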

Change History (4)

comment:1 by Gennadiy Rozental, 15 years ago

Resolution: wontfix
Status: new → closed
Type: Bugs → Feature Requests

These macros have a number of issues:

  1. The problem domain is unclear. What exactly are you trying to test?
  2. A number of parameters are used, and none of them is configurable. That is unacceptable in a generic solution.
  3. boost::timer is not the best device for measuring performance.

I do plan to introduce some performance testing tools, but these macros don't look like the right way to go.

comment:2 by jrp at dial dot pipex dot com, 15 years ago

Resolution: wontfix
Status: closed → reopened

These macros have a number of issues:

  1. The problem domain is unclear. What exactly are you trying to test?

I am just trying to do some array manipulation using a range of different packages.

  2. A number of parameters are used, and none of them is configurable. That is unacceptable in a generic solution.

That is also a problem for me, as I would like to try different array lengths.

  3. boost::timer is not the best device for measuring performance.

Yes.

I do plan to introduce some performance testing tools, but these macros don't look like the right way to go.

I agree. What I need is something that:

  • times the running of a test case (that is already there, if the right logging is enabled, although the output is messy)
  • can distinguish between fixture setup time and the time taken to run the test itself
  • can be set to count the number of times that a test must be run before a specified period elapses (see the sketch after this list)
  • gives an indication of whether the test takes a linear amount of time with respect to the test parameter (e.g., the length of the array)
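For the third point, I imagine a helper along these lines (a sketch only; the name and the use of boost::timer are mine):

#include <boost/timer.hpp>

// Run f repeatedly until at least `budget` seconds have elapsed and
// report how many executions fitted into that period.
template <class F>
unsigned runs_within( F f, double budget )
{
	boost::timer t;
	unsigned n = 0;
	while ( t.elapsed() < budget )
	{
		f();
		++n;
	}
	return n;
}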

comment:3 by Gennadiy Rozental, 15 years ago

What you are looking for is not a timed test case, but a specialized performance tester.

Something like:

template<typename Profiler, typename Func>
typename Profiler::interval_type time_invocation( Func f );

Additional parameters might include:

  • unsigned num_times_to_repeat
  • Profiler::interval_type total_test_time
  • unsigned num_of_attempts_to_ignore
  • double bottom_percent_to_ignore
  • double top_percent_to_ignore
  • unsigned minimal_number_of_attempts

I would use a named-parameters interface to specify these.
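Usage might then look something like this (illustrative only; none of these names exist yet):

my_profiler::interval_type t = time_invocation<my_profiler>( f,
	num_times_to_repeat        = 1000,
	total_test_time            = 1.0,
	top_percent_to_ignore      = 10.0,
	minimal_number_of_attempts = 10 );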

To accommodate the last need I would use something like:

template<typename Profiler, typename Func, typename ParamIter, typename ExpectedComplexityFunc>
typename Profiler::interval_type test_complexity( Func f, ParamIter first_param, ParamIter last_param, ExpectedComplexityFunc ec );

This should test that the performance numbers match the predicate when collected over a set of parameters.
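For example (hypothetical usage; fill_vector and linear_complexity are placeholders):

std::size_t sizes[] = { 1000, 2000, 4000, 8000 };

// Verify that fill_vector's measured times grow no faster than linearly
// as its parameter ranges over `sizes`.
test_complexity( &fill_vector, sizes, sizes + 4, linear_complexity() );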

comment:4 by jrp at dial dot pipex dot com, 15 years ago

Thanks. Yes. It's a matter of resources. I wanted to spend my time testing different approaches and algorithms rather than building a complete performance-testing framework, particularly as I also want to test correctness.

This library seems to do a good deal of the job very well, but some enhancements along the lines described above -- particularly in reporting results -- would help.

The Musser, Derge, and Saini STL book (Chapter 19) provides a class for timing generic algorithms that has many of the features I describe above. I don't know how easy it would be to build something like it into Boost.Test.

BTW, it would also be useful to have macros for a looser form of closeness testing, to avoid failures like:

./perftest.cpp(773): error in "convolve_r2c_vector_in_place": difference between x[j]/nSamples{2.76119927e-008} and yout(j){0} exceeds 1%
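BOOST_CHECK_CLOSE applies a relative (percentage) tolerance, which can never be satisfied when one operand is exactly zero. For values expected to be near zero, an absolute tolerance such as BOOST_CHECK_SMALL is closer to what I mean (the tolerance below is illustrative only):

// Relative tolerance: fails whenever yout(j) == 0, however tiny x[j]/nSamples is.
BOOST_CHECK_CLOSE( x[j] / nSamples, yout(j), 1.0 );

// Absolute tolerance: passes as long as the difference stays below the threshold.
BOOST_CHECK_SMALL( x[j] / nSamples - yout(j), 1e-6f );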
