Opened 7 years ago

Closed 4 years ago

#12066 closed Library Submissions (fixed)

Accuracy of modified Bessel functions

Reported by: Pavel Holoborodko <pavel@…>
Owned by: John Maddock
Milestone: To Be Determined
Component: math
Version: Boost 1.61.0
Severity: Optimization
Keywords:
Cc: pbristow@…

Description

This is not a bug report but a proposal for an enhancement.

Boost currently uses rational minimax approximations to compute the modified Bessel functions in double precision. In short, I propose replacing them with more accurate (and faster) approximations.

I happen to work in the field of high-accuracy computations, and accuracy assessment of various software libraries is part of my job.

A few months ago I tested the accuracy of the modified Bessel functions as computed by different software libraries, including Boost.

The relative accuracy of every library was evaluated, in terms of machine epsilon, against "true" values computed in quadruple precision. Reports with details and plots can be found via the links (Trac doesn't allow multiple links, so only one is given here; the others can easily be found):

I0

The main conclusion is:

The rational approximations currently used in Boost were built by minimizing only the theoretical approximation error, ignoring the effects of finite-precision floating-point arithmetic. This makes them sub-optimal (e.g. Boost delivers the lowest accuracy for I0 with arguments in [0, 15]).

To fix this, I searched for approximations which take real-life rounding and cancellation effects into account; the proposed approximations are included in the reports.

In most cases the new methods deliver the lowest relative error, even compared to Chebyshev expansions, and have a simple structure (polynomials). I think it would be a big plus for Boost to use the updated coefficients and deliver the highest accuracy among the libraries.
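
For illustration, here is a minimal sketch of what such a "simple structure (polynomials)" evaluation looks like. The coefficients below are just the first Taylor terms of I0 in t = x^2, used as placeholders; they are not the proposed fitted values, which have more terms and are chosen to minimise the error actually observed in double precision.

{{{#!cpp
#include <array>

// Illustrative sketch only: a polynomial in t = x*x evaluated by Horner's
// rule. The coefficients are the first Taylor terms of I0 (1, 1/4, 1/64,
// 1/2304), standing in for a real fitted approximation.
inline double i0_small_arg_sketch(double x)
{
    static const std::array<double, 4> c = { 1.0, 0.25, 0.015625, 1.0 / 2304.0 };
    const double t = x * x;
    double p = c.back();
    for (int i = static_cast<int>(c.size()) - 2; i >= 0; --i)
        p = p * t + c[i];
    return p;
}
}}}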

My questions are:

  1. Is this of any interest to the Boost::math maintainers?
  2. If yes, I would be happy to help with updating the routines for I0, I1, K0 and K1. (Fork -> pull request would be enough, right?)

P.S. GSL has accuracy issues of its own (K0, K1), so it is probably not a good idea to evaluate accuracy against it (I saw tables in the Boost documentation with accuracy checks against GSL).

Change History (9)

comment:1 by Pavel Holoborodko <pavel@…>, 7 years ago

Link for I1: I1

comment:2 by Pavel Holoborodko <pavel@…>, 7 years ago

Report for K0: K0

comment:3 by Pavel Holoborodko <pavel@…>, 7 years ago

Report for K1: K1

comment:4 by John Maddock, 7 years ago

Cc: pbristow@… added

Yes we're obviously interested, and yes a PR would be fine.

We already have 35-digit test data and the tools to measure both performance and accuracy, so a before and after shoot-out is easy enough. I do like your scatter plots for presenting this kind of information BTW.

We don't use GSL to compute test values BTW - all the other libraries I've tested tend to have some serious issues over some parts of their domain. Rather we do comparative tests of both our lib and various others against our high-precision reference data.

If you have a list of test cases that generate the outlier results, that would be useful too (obviously I could go hunting for them, but I'm hoping you can see them in your data right away). I suspect the difference between our results (and this is probably a weakness in our data) is that since we test at float precision as well as double and long double, and also insist on binary-exact input values, none of our input test values have more than 23-bit precision.
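
As a minimal sketch (my own example, not the actual test harness), "binary-exact input values" of this kind can be obtained by rounding each double argument through float, so the identical input is exactly representable in the float, double and long double overloads:

{{{#!cpp
#include <iostream>

// Round a double argument through float so that the value is exactly
// representable at float precision (24-bit significand, 23 stored bits).
double make_float_exact(double x)
{
    return static_cast<double>(static_cast<float>(x));
}

int main()
{
    std::cout.precision(17);
    std::cout << make_float_exact(0.1234567890123456789) << '\n';
}
}}}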

comment:5 by Pavel Holoborodko <pavel@…>, 7 years ago

@"...none of our input test values have more than 23-bit precision."

This probably affects the accuracy tests. I use 50K uniformly distributed test points (Mersenne-Twister) generated in full double precision, then compute the function value to 34 digits of accuracy (the "true" value) and compare it with the double-precision counterpart.
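
A rough sketch of that procedure, assuming Boost.Multiprecision's cpp_bin_float_quad for the ~34-digit reference (the seed, sample count and test interval below are placeholders of my own choosing):

{{{#!cpp
#include <boost/math/special_functions/bessel.hpp>
#include <boost/multiprecision/cpp_bin_float.hpp>
#include <algorithm>
#include <cmath>
#include <iostream>
#include <limits>
#include <random>

// Sketch of the measurement loop: uniform double arguments from a
// Mersenne-Twister, a quad-precision (~34 digit) reference value, and the
// relative error reported in units of machine epsilon.
int main()
{
    using quad = boost::multiprecision::cpp_bin_float_quad;

    std::mt19937_64 gen(42);                                // placeholder seed
    std::uniform_real_distribution<double> dist(0.0, 15.0); // placeholder range

    double max_err = 0.0;
    for (int i = 0; i < 50000; ++i)
    {
        const double x   = dist(gen);
        const double got = boost::math::cyl_bessel_i(0, x);       // I0 in double
        const quad   ref = boost::math::cyl_bessel_i(0, quad(x)); // "true" value
        const double err = std::abs(static_cast<double>((quad(got) - ref) / ref));
        max_err = std::max(max_err, err / std::numeric_limits<double>::epsilon());
    }
    std::cout << "max relative error: " << max_err << " eps\n";
}
}}}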

I think the use of extended precision is vital for measuring the actual error of float/double precision routines, especially at magnitudes comparable to machine epsilon.

I will try to prepare a PR in the near future (anyone more experienced with Boost is very welcome to do this instead, as the coefficients are publicly available).

comment:6 by John Maddock, 7 years ago

> I think the use of extended precision is vital for measuring the actual error of float/double precision routines, especially at magnitudes comparable to machine epsilon.

Yes of course.

> I will try to prepare a PR in the near future (anyone more experienced with Boost is very welcome to do this instead, as the coefficients are publicly available).

I'm tied up at present, but yes, I saw that your coefficients were available.

Ultimately we would need at least 80-bit precision as an option as well, so the general form of the new approximations is as useful as the coefficient values (we have the code needed to optimise/generate new coefficients; figuring out the form of the approximation is the hard part). It might be interesting to see if they'll stretch to 128-bit precision as well.

comment:7 by Pavel Holoborodko <pavel@…>, 7 years ago

So far I have derived 128-bit coefficients for I0 - the same form of approximation as I used for double can be reused (it is a bit more accurate near x = 0 than the one originally proposed by Blair).

From preliminary tests, I1 needs a different approximation from the one I used for double and from Blair's. K0/K1 are on the way as well.

I will share everything as soon as I finish - it takes time to run the search for several different forms of approximation in extended precision while doing other work on the same computer...

comment:8 by Pavel Holoborodko <pavel@…>, 7 years ago

Also, quadruple precision requires different split points (e.g. for I0 the double-precision split point is 7.75, but for quadruple it is 15.5).
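
For illustration only (this is not Boost's actual selection logic), the split point could be keyed off the working precision, e.g.:

{{{#!cpp
#include <limits>

// Sketch: the split between the small-argument and large-argument
// approximations of I0 depends on the working precision. 7.75 is the
// double-precision value quoted above and 15.5 the quadruple-precision one;
// real code would tie this to whichever approximation set is compiled in.
template <class T>
T i0_split_point()
{
    return std::numeric_limits<T>::digits > 64 ? T(15.5) : T(7.75);
}
}}}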

comment:9 by John Maddock, 4 years ago

Resolution: fixed
Status: new → closed