Opened 4 years ago

Closed 4 years ago

#13606 closed Bugs (fixed)

sinc_pi incorrect taylor_0_bound

Reported by: minorlogic@… Owned by: John Maddock
Milestone: To Be Determined Component: math
Version: Boost 1.63.0 Severity: Optimization
Keywords: sinc_pi taylor_0_bound Cc: minorlogic@…

Description

Between taylor_0_bound and taylor_2_bound, all "result" values are equal to 1.0, because x2 can only exceed epsilon when x is greater than taylor_2_bound.

So bounds should be recalculated and properly applied.
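The rounding behaviour behind this report can be checked directly. The sketch below (a standalone demonstration, not code from the ticket or from Boost.Math; the helper name is illustrative) tests whether the x*x/6 Taylor term is large enough to change 1.0 under round-to-nearest:

```cpp
#include <cmath>
#include <limits>

// Demonstration helper (hypothetical name, not Boost.Math code):
// returns true if subtracting x*x/6 from 1.0 actually changes the value.
inline bool taylor_term_visible(double x)
{
    return 1.0 - x * x / 6.0 != 1.0;
}
```

For x well below sqrt(epsilon) the term vanishes in the rounding, while for x a few times sqrt(epsilon) it survives, which is why the thresholds matter.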

Change History (5)

comment:1 by John Maddock, 4 years ago

Resolution: fixed
Status: new → closed

You're quite right: fixed in https://github.com/boostorg/math/commit/658945d50822adae83d77083a559e785ce18609c along with better tests.

comment:2 by anonymous, 4 years ago

Resolution: fixed
Status: closed → reopened

The bounds are still incorrect. They should look like:

  • Range0, for abs(x) in ( 0 , pow(eps, 1/2) ): result == 1.0, because the next Taylor term x2/6 is LESS than epsilon, so 1.0 - x2/6 == 1.0.

For example (for double), if abs(x) == 1.0e-10, then x2 == 1.0e-20, and 1.0e-20/6 is less than double epsilon, so 1.0 - 1.0e-20/6 == 1.0.

  • Range1, for abs(x) in ( pow(eps, 1/2) , pow(eps, 1/4) ): result == 1.0 - x2/6, because the next Taylor term x2*x2/120 is LESS than epsilon, even with Horner's method; the significant bits of x2*x2 fall outside the epsilon range.

  • Range2, for abs(x) in ( pow(eps, 1/4) , pow(eps, 1/6) ): result == 1.0 + x2 * (-1.0 + x2 / 20.0) / 6.0, because the next Taylor term x6/7! is LESS than epsilon, even with Horner's method.

NOTE: We can use the approximate bounds above, ignoring the Taylor series coefficients. Calculating exact bounds is not trivial and would require exact analytical precision bounds for all the operations involved.
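The three ranges above can be sketched as a complete function. This is a hypothetical illustration of the commenter's proposed bounds (the function name and the use of pow at run time are illustrative assumptions, not the actual Boost.Math implementation, which precomputes the thresholds):

```cpp
#include <cmath>
#include <limits>

// Sketch of the corrected range logic (illustrative, not Boost.Math code).
double sinc_sketch(double x)
{
    const double eps = std::numeric_limits<double>::epsilon();
    const double ax  = std::abs(x);
    const double x2  = x * x;

    if (ax < std::pow(eps, 0.5))        // Range0: x2/6 below epsilon
        return 1.0;
    if (ax < std::pow(eps, 0.25))       // Range1: x4/120 below epsilon
        return 1.0 - x2 / 6.0;
    if (ax < std::pow(eps, 1.0 / 6.0))  // Range2: x6/7! below epsilon
        return 1.0 + x2 * (-1.0 + x2 / 20.0) / 6.0;
    return std::sin(x) / x;             // outside the Taylor ranges
}
```

In a real implementation the three thresholds would be computed once (e.g. as constants per floating-point type) rather than calling pow on every evaluation.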

comment:3 by anonymous, 4 years ago

Apologies: you're quite correct that the powers of epsilon are all wrong. I confess, looking at this, that it may be over-optimised in any case, given that evaluation of sin(x) is usually pretty fast. I'll come back to this shortly when I'm less rushed.

comment:4 by minorlogic@…, 4 years ago

  1. This should only optimize precision. sin(x)/x is pretty fast and quite precise (it uses hardware instructions on many platforms); sin(x)/x only needs to check x != 0. For floating-point values close to zero, sin(x) is exactly equal to x, and dividing any floating-point number by itself must yield 1.0. So sin(x)/x with x != 0 is quite precise and safe.
  2. Within the range ( pow(eps, 1/2) , pow(eps, 1/6) ), the Taylor expansion provides a small precision improvement. It is small compared to direct sin(x)/x, but can provide several bits of precision (the errors differ by about 5%-10% on most platforms).

It is not easy to predict the precision loss from sin(x)/x. Different platforms and compilers provide different sin(x) precision, and its error grows after the division by x. Using the Taylor approximation, we can provide an exact solution with a known upper bound on the error (relative to the floating-point epsilon).

On the other hand, if we use sin(x) on most of the "sinc" range anyway, we could use it on the whole range, and the final error would then depend on the sin(x) implementation.

  3. To improve precision, the ranges and errors should be carefully verified and tested (against sin(x)/x) on the ranges where "sinc" is approximated.
  4. For a fast and safe implementation, I propose using only one branch and range check, ( 0 , pow(eps, 1/4) ), with the expansion 1.0 - x2/6:

  if (abs(x) < eps_root_four) {
      return 1.0 - x2/6;
  }
  return sin(x)/x;
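The single-branch proposal above can be written out as a self-contained function. This is a sketch of the commenter's suggestion only (the name sinc_proposed and the spelled-out eps_root_four constant are assumptions for illustration, not Boost's actual code):

```cpp
#include <cmath>
#include <limits>

// Sketch of the proposed single-branch sinc (illustrative names).
double sinc_proposed(double x)
{
    // Threshold below which the two-term Taylor expansion is
    // indistinguishable from the true value at double precision.
    static const double eps_root_four =
        std::pow(std::numeric_limits<double>::epsilon(), 0.25);

    if (std::abs(x) < eps_root_four)
        return 1.0 - x * x / 6.0;  // Taylor fallback; also handles x == 0
    return std::sin(x) / x;        // rely on the platform sin elsewhere
}
```

The Taylor branch removes the x == 0 special case entirely, which is the main simplification over the three-range version.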

comment:5 by John Maddock, 4 years ago

Resolution: fixed
Status: reopened → closed
Note: See TracTickets for help on using tickets.