Opened 8 years ago
Last modified 8 years ago
#10850 new Patches
gcc x86 implementation of "atomic_exchange_and_add" triggers Intel's "uninitialized variable" runtime check
Reported by: | | Owned by: | Peter Dimov
---|---|---|---
Milestone: | To Be Determined | Component: | smart_ptr
Version: | Boost Development Trunk | Severity: | Problem
Keywords: | | Cc: |
Description
The current implementation of atomic_exchange_and_add for gcc x86 (http://svn.boost.org/svn/boost/trunk/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp) is as follows:
inline int atomic_exchange_and_add( int * pw, int dv )
{
    // int r = *pw;
    // *pw += dv;
    // return r;

    int r;

    __asm__ __volatile__
    (
        "lock\n\t"
        "xadd %1, %0":
        "=m"( *pw ), "=r"( r ): // outputs (%0, %1)
        "m"( *pw ), "1"( dv ):  // inputs (%2, %3 == %1)
        "memory", "cc"          // clobbers
    );

    return r;
}
This pattern unfortunately triggers the "uninitialized variable" runtime check of the Intel C++ compiler. Since r is actually superfluous, a simple patch fixes the problem:
inline int atomic_exchange_and_add( int * pw, int dv )
{
    // int r = *pw;
    // *pw += dv;
    // return r;

    __asm__ __volatile__
    (
        "lock\n\t"
        "xadd %1, %0":
        "+m"( *pw ), "+r"( dv ): // input/output (%0, %1)
        :
        "memory", "cc" // clobbers
    );

    return dv;
}

My colleague, who wrote the patch, checked that the generated assembler code is almost identical to the original one; "almost" in the sense that the removal of r changes some offsets.
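As a quick sanity check (a minimal sketch, not from the ticket, assuming the patched function above is in scope), both versions must preserve the fetch-then-add contract spelled out in the commented-out reference code: return the old value of *pw and leave *pw incremented by dv.

#include <cassert>

// Single-threaded smoke test of the fetch-and-add contract
// (hypothetical, not part of the ticket).
int main()
{
    int counter = 5;
    int old = atomic_exchange_and_add( &counter, 3 );
    assert( old == 5 );     // returns the value *pw held before the add
    assert( counter == 8 ); // *pw now holds old + dv
    return 0;
}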
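As an aside (not part of the submitted patch): on compilers that provide the GCC __sync builtins, the same semantics are available without any inline asm, which also sidesteps the checker. A hedged sketch:

// Hypothetical alternative, not the submitted patch: __sync_fetch_and_add
// atomically adds dv to *pw and returns the old value, matching the
// contract of the asm versions above.
inline int atomic_exchange_and_add( int * pw, int dv )
{
    return __sync_fetch_and_add( pw, dv );
}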