Numerical Recipes the Art of Scientific Computing Subroutine Codes

Why not use Numerical Recipes?

We have found Numerical Recipes to be generally unreliable.

In broad outline, the reason is that Numerical Recipes values simplicity above other virtues that may often be more important. Complex problems frequently have complex solutions, or require complex processes to arrive at any solution at all. This is not a new insight: H. L. Mencken (1880-1956) wrote

... there is always a well-known solution to every human problem -- neat, plausible, and wrong.
Prejudices, Second Series (1920), or equivalently
For every complex problem, there is a solution that is simple, neat, and wrong.
These lessons are, however, too frequently forgotten, and appear to have been forgotten in the planning and execution of Numerical Recipes. More specific reasons not to use Numerical Recipes are outlined below. There are excellent alternatives.

Paraphrased from a previously published article

Charles Lawson, Numerical Recipes: A boon for scientists and engineers, or not?, JPL ICIS Newsletter 9, 1 (January 1991)

"There is a series of books and associated software with the proper noun Numerical Recipes in the titles that provide descriptions of numerical algorithms and associated programs in pop programming languages...

"The good news is that this serial gives exceptionally broad coverage of computational topics that arise in scientific and engineering computing at a very reasonable price.... The bad news is that the quality and reliability of the mathematical exposition and the codes it contains are spotty. It is not safe, we have institute, to take discussions in the book as authoritative or to use the codes with confidence in the validity of the results.

"The authors are identified on the book jacket as `leading scientists' and [nosotros] have no reason to think that they are non. However, in that location is no claim that they have special competence in numerical analysis or mathematical software. At to the lowest degree in the parts of the book that [we] accept studied closely, they do not demonstrate any such competence.

"Published reviews of the book[s] have fallen into 2 classes: Testimonials and reviews by scientists [including Kenneth Wilson, Nobel Laureate] and engineers tend to extol the broad scope and convenience of the products, without seriously evaluating the quality, while reviews past numerical analysts are very critical of the quality of the discussions and the codes....

"Two reviews by numerical analysts are:

  1. Lawrence F. Shampine, Review of `Numerical Recipes, The Art of Scientific Computing', The American Mathematical Monthly 94, 9 (Nov 87) 889-892.
  2. Richard J. Hanson, Cooking with `Numerical Recipes' on a PC, SIAM (Society for Industrial and Applied Mathematics) News 28, 3 (May 90) 18.
"Professor Shampine is a specialist in the numerical solution of ordinary differential equations (ODEs). He gives specific criticisms of chapter 15, which deals with ODEs, and says, in summary:
`This chapter describes numerical methods for ODE's from the viewpoint of 1970. If the authors had consulted an expert in the subject or read one of the good survey articles available, I think they would have assessed the methods differently and presented more modern versions of the methods.'
[Since Shampine wrote this, the authors of NR have consulted a worker active in the field. Unfortunately, a great many other experts in the field consider the advice they got to be very poor indeed -- extrapolation methods are nearly always substantially inferior to Runge-Kutta, Taylor's series, or multistep methods.]

"He likewise remarks that adaptive methods for numerical quadrature issues are non treated in NR although they are much in favor by numerical analysts.

"Dr. Hanson is a sometime editor of the algorithms department of the Clan for Computing Machinery Transactions on Mathematical Software (ACM TOMS). He ran tests of the nonlinear to the lowest degree-squares codes from NR and made comparisons with published results of better known codes LMDIR from MINPACK and NL2SOL.... He plant the NR codes sometimes required up to 20 times as many iterations equally the comparison codes. He noted that the control of the Levenberg-Marquardt damping parameter was not sufficiently sophisticated, permitting overflow or underflow of to occur... the algorithm in NR is a very bare-bones implementation of the ideas presented in the referenced 1963 newspaper by Marquardt. Many significant enhancements of that idea have been given in the intervening 27 years. [We] would expect the codes LMDIR, NL2SOL, and their successors to be much more EFFICIENT AND RELIABLE [editor's emphasis].

"[Our] present attention to the NR products was initiated by calls for consultation .... Two involved the topics mentioned above.... Other calls led u.s. to scrutinize Sections 6.6, Spherical Harmonics and fourteen.six, Robust Interpretation

"...The discussion, algorithms, and code given in department six.half dozen is internally consistent and the choices of the recursions to employ in computing the associated Legendre functions are ones recommended by specialists in the topic as beingness stable. No alert is given, nonetheless, regarding the fact that in that location are a number of alternative conventions in use regarding signs and normalization factors.... [If i naively combined results from NR codes with results from other sources] one would probably obtain incorrect results.

"In reading the section on robust estimation, [we were] skeptical of Figure 14.six.1(b) that shows a `robust straight-line fit' looking substantially better than a `least-squares fit'....

"To bank check [our] doubts nigh this figure, [we] enlarged it and traced the points and the `fitted' lines onto graph paper to obtain data with which [to] experiment....

"We computed a least squares fit.... The detail `robust' method illustrated by figure 14.half-dozen.1(b) is not identified. Still, since the simply method for which NR attempts to give lawmaking in this expanse is L1 fitting, [we] computed an L1 fit to the data as an example of a `robust' fit.... [We] used a subroutine CL1, that was published in the algorithms department of ACM TOMS in 1980, to obtain an L1 fit in which [we] could have confidence. [We] also applied the NR code MEDFIT to the information and obtained a fit that agreed with the CL1 fit to about three decimal places.

..."Equally expected, the least-squares fit is not every bit far from the visual trend as in figure 14.6.one(b) and the L1 fit is non every bit close.... It appears that the lines labelled `fits' in the NR figure 14.vi.1(b) are not the result of any computed plumbing equipment at all, only are just suggestive lines drawn past the authors to buttress their enthusiasm for `robust' fitting. An uncritical reader would probably incorrectly assume that figure 14.6.1(b) illustrates the performance of actual algorithms.

"The objective function in an L1 plumbing fixtures problem is non differentiable at parameter values that cause the fitted line to interpolate one or more data points. The authors point some awareness of this fact simply non of all its consequences for a solution algorithm. Typically, the solution to this problem will interpolate two or more data points, and in the authors' algorithm, it would exist common for trial fits in the course of execution of the algorithm to interpolate at least one data betoken. ... suffice it to note that information technology is like shooting fish in a barrel to produce information sets for which the MEDFIT/ROFUNC code will fail.

"Ane information gear up which causes looping is [x = 1, two, 3; y = i, 1, 1]. Some other which causes looping in a unlike part of the code is [10 = 2, 3, iv; y = 1, 3, 2]. A data assault which the code terminates, but with a significantly wrong result is [ten = 3, 4, 5, 6, 7; y = 1, 3, 2, 4, 3]. Because of the faulty theoretical foundation, at that place is no reason to believe whatever detail result obtained past this code is correct, although by adventure information technology will sometimes get a correct outcome....

"Conclusions

"The authors of Numerical Recipes were not specialists in numerical assay or mathematical software prior to publication of this book and its software, and this deficiency shows WHENEVER WE TAKE A Close LOOK AT A TOPIC in the volume [editor'southward accent]. The authors take attempted to cover a very extensive range of topics. They have basically found `some' way to approach each topic rather than finding one of the best contemporary ways. In some cases they were patently non enlightened of standard theory and algorithms, and consequently devised approaches of their own. The MEDFIT code of section xiv.6 is a particularly unfortunate instance of this latter situation.

"I should independently check the validity of any information or codes obtained from `Numerical Recipes'...."

Editor's remarks

I have independently checked the codes for Bessel functions and modified Bessel functions of the first kind and orders zero and one (J0, J1, I0, I1). The approximations given in NR are those to be found in the National Bureau of Standards Handbook of Special Functions, Applied Mathematics Series 55, which were published by Cecil Hastings in 1959. Although the approximations are accurate, they are not very precise: don't trust them beyond 6 digits. Coding them in "double precision" won't help. Much work has been done in the approximation of special functions in the last 32 years.

We haven't investigated the quality of every one of the NR algorithms and codes, nor the exposition in every chapter of NR (we have more productive things to do). But having sampled randomly (based on calls for consultation) in four areas, and found ALL FOUR faulty, we have very little confidence in the rest.

Received in the mail, in response to USENET postings:

(All but one of the following were sent to me before May 1993. The authors of Numerical Recipes have updated many routines -- and not updated others. So some of the problems noted here may already have been corrected. The authors maintain a collection of patches and bug reports.)

(27 Nov 1991)
"You can add the section on PDE'southward to the listing of `bad'. The discussion of relaxation solvers for elliptic PDE's starts off OK (in about 1950, but that is OK for a naive user if he is non in a hurry) but so fails to mention little details like boundary conditions! Their code has the implicit assumption that all elliptic problems take homogeneous Dirichlet boundary conditions!

"And so they have their little coding quirks, similar accessing their arrays the wrong way and putting unnecessary IF and MOD statements inside of inner loops....

"On the other hand, I did acquire something from their discussion of the Conjugate Gradient technique for solving systems of linear equations. I did non like their implementation, merely the discussion was OK."

And:

(27 Nov 1991)
"Your posts in sci.physics about Numerical Recipes friction match my experience. I've constitute that Numerical Recipes provide just enough information for a person to get himself into trouble, because afterward reading NR, one thinks that one understands what's going on. The i saving grace of NR is that information technology usually provides references; afterward one has been burned enough times, one learns to go straight to the references :-).

"Example: Department 9.5 claims that Laguerre's method, used for finding zeros of a polynomial, converges from whatever starting betoken. Co-ordinate to Ralston and Rabinowitz, yet, this is only true if all the roots of the polynomial are real. For example, Laguerre's method runs into difficulty for the polynomial f(ten) = 10^n + 1 if the initial guess is 0, considering f'(0) = f''(0) = 0."

And:

(27 Nov 1991)
"As an aside, I have simply received a preprint from Press describing what looks like affiliate 18 for NR -- about the discrete wavelet transform. Now, I can tell you that this stuff is wrong, every bit the results which are in his figures are not reconstructable using his routines. Don't know why notwithstanding, just information technology just doesn't work. If anyone out there is using these routines -- toss them. If y'all accept stock-still these routines or have other detached wavelet transform routines, I want to know about it. Thanks."

And:

(27 Nov 1991)
" ... And so let me offering my personal caveat: SVDCMP does non always piece of work. I found one case where the consequence is just wrong (fortunately it is piece of cake to cheque, but one doesn't usually do so). I translated the NR Fortran to C, and besides tried the NR C code. Both incorrect in the same way. I tried IMSL and Linpack in Fortran, and tried translating Linpack to C; all three produced correct answers...."

And:

(22 Apr 1992)
"The NR-recommended random number generators RAN1 and RAN2 should non exist used for any serious application. If y'all utilise the top bit of RAN1 to create a discrete random walk (plus or minus 1 with equal probability) of length 10,000, the variance volition be effectually 1500, far below the desired value of 10,000.

"Both are low-modulus generators with a shuffling buffer, in i case with the bottom bits twiddled with another low-modulus generator. The moduli are but too low for serious work, and the resulting generators fifty-fifty out too well."

And:

(23 Apr 1992)
"... It seems that everyone I talk to has a unlike function of the volume that they don't like (The office I hate most is the section on imitation annealing and the travelling salesman'south trouble- there are far meliorate approaches to the problem.)"

And:

(1 Mar 1993)
"Yes, I was another numerical infant in the forest, told the NR was the ultimate word (plain by professors and colleagues who had never used information technology!) and then I spent months trying to effigy out why their QL decomposition routine didn't work. I idea it was me...."

And:

(1 Mar 1993)
"May exist your `collected horror stories' will back up my bad experience with the FFT code: Please send these stories to me !"

And:

(26 Feb 1993)
"I recently encountered a bug in Numerical Recipes in C (dunno what edition). Look at the code for "ksprob()". When the routine fails to converge afterwards so many iterations, they return 0.0. But a quick glance at the formula reveals that the sum will not converge for modest (and they must exist small indeed), hence the correct value to return is ane.0, not 0.0.

"Additionally, it would be nice to circumspection users that this formula is only an asymptotic approximation to the true function (which nobody, apparently, has figured out even so), and that the method is horribly unstable for small ."

And:

(30 Jun 1992)
"My experience with NR is brief - I used HEAPSORT and it crashed when I attempted to sort a vector of length ane. Like shooting fish in a barrel to fix
- just add IF(N.LE.1) RETURN
- merely indicates that the code is not as skillful as a reliable subroutine library, more than just a pedagogic text."

And:

(13 May 1996)
"Your web pages on NR are a valuable public service. My own horror story involves the Marquardt routine, which I've institute to exist totally unreliable in giving the right answer. It works for the extremely simple example given in the book, but when asked to fit a simple growth function, it gave a wrong answer somewhere near the starting values. A colleague and I wrote to Press et al. with these findings, only received a letter maxim, substantially, that we must take fabricated a mistake. I can assure you that our results were checked against several standard statistical packages (SAS and Systat), as well every bit against an older Fortran program, and NR gives the wrong answer with no alert or other indication of a mishap.

"I could add that the NR handling of the polytope (or "simplex") algorithm AMOEBA has a major flaw that I am aware of. NR does acknowledge the trend of this algorithm to end at local minima. They recommend restarting information technology to reduce this tendency, just the driver routine in the "NR Instance Book" (1988 version) omits this aspect. I take come across several applications that routinely requite the incorrect reply because the authors missed the explanation tucked at the end of the NR text or relied on the NR driver routine."

And:

(24 October 1996)
I've been doing an integration with the IDL routine QROMO, which is an open Romberg algorithm based on the Numerical Recipes routine of the same name. I called it with eps=1.e-7 to ensure smooth behavior. However, as I gradually changed the parameters affecting the integrand, I found the result jumped abruptly by a factor of 1.002. In other words, the routine underestimated the error by a factor of 2e4.

I've always liked Romberg integration just because it is often so fast. But I've found similar problems on occasion with the Numerical Recipes Fortran routine. The problem seemed to be that the routine will occasionally get a lucky guess as to the answer, and return prematurely. In the IDL routine I tried setting K=8 instead of K=5 and so far have not had a problem (haven't tested much yet though!) I've also tried to fix this in the past by requiring 2 successive good guesses. But does anyone have another suggestion?
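
[Editor's illustration: the correspondent's "two successive good guesses" idea can be expressed as a convergence test that must pass on two consecutive refinements. The sketch below applies it to a plain closed-interval Romberg integrator; it is not the NR QROMB/QROMO code and, being a closed rule, is not a drop-in replacement for QROMO.]

    #include <stdio.h>
    #include <math.h>

    #define MAXK 20

    /* Closed Romberg integration of f over [a,b]; returns only after the
     * extrapolated estimate has agreed with the previous row to within eps
     * (relative) on two successive refinements. */
    static double romberg2(double (*f)(double), double a, double b, double eps)
    {
        double R[MAXK][MAXK];
        int good = 0;                           /* consecutive agreements */

        R[0][0] = 0.5 * (b - a) * (f(a) + f(b));
        for (int k = 1; k < MAXK; k++) {
            long n = 1L << k;                   /* 2^k trapezoid panels */
            double h = (b - a) / n, s = 0.0;
            for (long i = 1; i < n; i += 2)     /* new midpoints only */
                s += f(a + i * h);
            R[k][0] = 0.5 * R[k - 1][0] + h * s;

            double p4 = 4.0;
            for (int m = 1; m <= k; m++) {      /* Richardson extrapolation */
                R[k][m] = R[k][m - 1] +
                          (R[k][m - 1] - R[k - 1][m - 1]) / (p4 - 1.0);
                p4 *= 4.0;
            }
            if (fabs(R[k][k] - R[k - 1][k - 1]) <= eps * fabs(R[k][k]))
                good++;
            else
                good = 0;
            if (good >= 2)
                return R[k][k];
        }
        return R[MAXK - 1][MAXK - 1];           /* never converged twice */
    }

    int main(void)
    {
        const double PI = 3.14159265358979323846;
        printf("integral of sin on [0,pi] = %.12f (exact value 2)\n",
               romberg2(sin, 0.0, PI, 1e-10));
        return 0;
    }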

And:

(7 May 1997)
.... I'm currently playing with some of the Numerical Recipes Fourier transform code. I used the TWOFFT routine to transform two sets of data, but was puzzled by the fact that the zero frequency term was identical for both sets, for no obvious reason. I then ran the FOUR1 routine for just one of the sets, and although the non-zero frequency terms appear to be the same (over the limited set I sampled), the zero frequency term is definitely different....

.... in the case of the Fourier transform stuff, it might be wise for someone else to confirm my result. I'm sufficiently inexperienced with FFTs to not be certain if the effect I saw was an artifact of my usage and/or understanding of how it's supposed to work.

On those few occasions when I used Numerical Recipes as a starting point for code that I incorporated into my own library, I performed extensive testing to make sure there weren't any ways to cause crashes (like division by zero), and invariably I'd find holes that needed to be patched. I've also found more efficient coding alternatives. Numerical Recipes resorts to some floating point calculations in one of the sorting routines that I found a simple integer alternative for (at least their floating point stuff wasn't inside a loop). I've also got some experience with their simplex implementation (AMOEBA), and discovered it could get trapped inside the routine and fail to converge. For instance, if the chi-square hypersurface is sufficiently complex, then when the simplex is shrunk, it's possible that one of the vertices will find itself at a *higher* chi-square value than it was before the simplex was shrunk! Code that locks up in an internal loop is unacceptable.

And:

(4 June 1997)
The Numerical Recipes FFT is a good example of a routine that has clearly been designed for maximum simplicity and clarity at the expense of suitability for practical work. Not only is it limited to powers of two (which is particularly unfortunate in the case of multidimensional transforms), but it is also very slow. A comparison of many FFT implementations shows that the NR code is much worse than other available software. This would be fine if NR made it clear that their code is a pedagogical demonstration only, but it does not -- no mention is given (in my edition of NR) of the better FFT algorithms and implementation strategies that exist.

And:

(13 January 1999)
I sent this report to bugs@nr.com on July 14, 1998, which they didn't acknowledge. The "benchfft" web page that I referred to in my e-mail no longer exists. Text of message follows:

This is a paragraph of a letter I sent to the maintainers of the fftw web site, discussing the accuracy results at http://theory.lcs.mit.edu/~benchfft/results/accuracy.html The bottom line is that the Numerical Recipes code for FFTs is not as accurate as the best codes available, and four1 and realft (or drealft) may not be suitable for use as the basis for a fast bignum arithmetic package, which is, by coincidence, an example given in Section 20.6 of NR.

[Quoting another usenet posting:] You mention that Mayer (simple), NAPACK, Nielson, and Singleton "use unstable iterative generators for trigonometric functions". I do not dispute this, but unstable is in the eye of the beholder. The Numerical Recipes C code uses a generator for an FFT of $N$ points that has error of size $\sqrt{N}$ units in the last place at the end of the array, i.e., it loses $\log_2\sqrt{N}$ bits in the calculation of the trigonometric function, and hence in the FFT. A well-designed FFT will have a generator that loses only $\log_2\sqrt{\log_2 N}$ bits in the last place in the generator. This small difference, which is noticeable when calculating the large FFTs needed for bignum arithmetic, may make the Numerical Recipes routine unusable for this application, especially because the real-to-complex wrapper, realft, uses the same generator, and doubles the size of the problem (on input, and on output). (I have the accuracy output of benchfft for sizes up to 4 mega-whatevers, if you'd like to look at it---the average relative error for nrc_four1 is 100 times as large as the average relative error for the best routines with 4 meg points.)
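
[Editor's illustration: the growth of error in an iterated trigonometric generator is easy to observe. The sketch below builds the twiddle factors for a 2^20-point transform with a generic angle-addition recurrence (not necessarily the exact recurrence any particular FFT uses) and measures the worst drift against direct calls to cos() and sin(); the point is only that the error of such iterated generators grows with N.]

    #include <stdio.h>
    #include <math.h>
    #include <float.h>

    int main(void)
    {
        const long N = 1L << 20;                  /* 2^20 twiddle factors */
        const double theta = 2.0 * 3.14159265358979323846 / (double)N;
        const double cd = cos(theta), sd = sin(theta);

        double c = 1.0, s = 0.0, maxerr = 0.0;
        for (long k = 1; k < N; k++) {
            double cn = c * cd - s * sd;          /* angle-addition recurrence */
            double sn = s * cd + c * sd;
            c = cn;
            s = sn;
            double err = hypot(c - cos(k * theta), s - sin(k * theta));
            if (err > maxerr) maxerr = err;
        }
        printf("worst drift over %ld steps: %.3e (roughly %.1f bits lost)\n",
               N, maxerr, log2(maxerr / DBL_EPSILON));
        return 0;
    }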

And:

(9 March 1999)
Thanks for your informative page about possible pitfalls with Numerical Recipes. I've just had a similar experience. After reading your page, I decided nevertheless to be lazy and copy one of the NR routines for sorting, thinking to myself "it's insertion sort, how could they get it wrong? I could code it in 15 minutes, or copy it from Numerical Recipes in five." Using the online source of the second edition, I promptly went to the insertion sort (pg. 330), copied it into my editor, fired up gcc, and got strange output. As it turns out, there's a typo, and the comparison in the main for loop should be "j < n", not "j <= n", else you read off the end of the array. What's bothersome is that this is 10 lines of code. You'd think they could check it -- and this is the 2nd edition! And a sorting routine to boot, about as elementary as they get. I guess one always pays for laziness. I'll be writing my own quicksort. Another correspondent reports (6 May 1999), concerning this remark:
Since this particular post actually gave a specific line that was in error for the second edition, I opened my book to take a look. I believe the poster did not realize that NR specified a unit-offset array pointer. Thus, j <= n does NOT run off the end of the array.
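
[Editor's note: whichever offset convention is in force, the loop bound has to match it. With ordinary zero-offset C arrays, as in the minimal sketch below (written from scratch, not copied from NR), the outer bound must be j < n, since arr[n] is one past the end.]

    #include <stdio.h>

    /* Straight insertion sort of a zero-offset array of length n. */
    static void insertion_sort(double *arr, int n)
    {
        for (int j = 1; j < n; j++) {        /* j <= n would read past the end */
            double key = arr[j];
            int i = j - 1;
            while (i >= 0 && arr[i] > key) { /* shift larger elements right */
                arr[i + 1] = arr[i];
                i--;
            }
            arr[i + 1] = key;
        }
    }

    int main(void)
    {
        double a[] = {3, 1, 4, 1, 5, 9, 2, 6};
        insertion_sort(a, 8);
        for (int k = 0; k < 8; k++)
            printf("%g ", a[k]);
        printf("\n");
        return 0;
    }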

This criticism was offered on (2 Sept 1999):

I have just recently seen your web page re NR. I was thoroughly disappointed with the approach you have taken, both in terms of completeness and fairness. In this respect I urge you to consider:
1) Many of the errors/bugs/complaints you have listed are dated 1991. This is 1999, almost 2000; are you sure that these complaints are still relevant? Does your web page deserve updating?
2) You do not seem to have posted too many letters indicating users with "good" experiences. Have you made an effort to collect any such? Do you consider the NR response a sufficient case for this?
3) You do not seem to have posted too many letters indicating shortcomings of commercial (and very expensive) packages. For example, IMSL has had, and still does have, a large number of shortcomings. Considering that IMSL is 200 TIMES MORE EXPENSIVE (and that's just for a PC; the ratio for a CRAY etc. is considerably greater), uses dated formats/methods, does not have very good manuals, etc., some justifiable points can be raised in favour of NR on at least a "value basis".
4) Some of the complaints arise from users who by their own admission do not know too much. Are they expecting to become math experts simply by the purchase of a very inexpensive software package? Do they believe that buying an expensive package would have prevented them from "blowing themselves up"? I expect that letters of that type do not belong on your web site when considering the merits of such a package, and serve only to distort the matter.
5) In relation to 4) above, would you consider indicating on your site that even some simple numerical tasks require a certain basic mathematical knowledge, and that these, as well as the more complex ones, are not for the uninitiated: no more than a book on medicine would prepare someone to become a surgeon.
6) If you are going to "poo-poo" such a package, would you consider doing a little work to determine a "benchmark", and indicate which packages are "good" (and possibly why)? For example, the public domain libs such as LAPACK etc. are in many ways much, much better than commercial packages, but this type of info is also missing from your web site. Or, at the very least, if you are going to "poo-poo" one package you should "poo-poo" all packages (as certainly there is no such thing as a perfect package).
There are various other lesser points, but I hope that this collection provides the direction of my thoughts. For the record, I obtained my PhD in applied math as pertaining to numerical methods for PDEs; I have been using and developing numerical methods for over 20 years on a wide variety of machines and libs including NAG, IMSL, MINPACK, LINPACK, LAPACK, NR, etc., as well as on a considerable number of my own projects.

And:

(28 September 1999)
* Complex Hermitian matrix diagonalisation: I ran a C program which generated a complex matrix and then called the diagonalisation subroutines from Numerical Recipes and from mathc90. The catch in Numerical Recipes is that the diagonalisation method required for a complex matrix A + iB is to convert it to the real matrix:
         A   B
        -B   A
which is twice the size of the original complex matrix. It is this real matrix that gets diagonalised. However, the results seem to have a lower accuracy than the ones obtained from mathc90. The example given in the user guide to mathc90 was less accurately solved by Numerical Recipes. Seconds required for solving matrices of:
    dimension      Numerical Recipes        mathc90
    ---------      -----------------        -------
                    MacG3 | Cray-J90       G3 | Cray-J90
     90 x  90        1 s  |    7 s         1 s |    1 s
    180 x 180       17 s  |   57 s         3 s |    7 s
    270 x 270      132 s  |  191 s        12 s |   22 s
    360 x 360      500 s  |  432 s        43 s |   53 s

    time with NR / time with mathc90:  6-11 on the G3, 7-9 on the Cray
The program was exactly the same on both machines and compiled with the optimization option -O3. It was run interactively on the Cray under conditions of normal use (i.e. several users logged in and with batch jobs running in the background). [Editor's note: The software in the package mathc90 is based on EISPACK. See Netlib.]
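
[Editor's illustration: the doubling described above can be sketched as follows. Given a Hermitian matrix H = A + iB, with A symmetric and B antisymmetric, the 2n-by-2n real matrix with blocks A, B on top and -B, A on the bottom is symmetric and has the same eigenvalues as H, each repeated twice. The code below merely builds that matrix in dense row-major storage; it is an illustration of the transformation, not the NR or mathc90 code, and the unexploited structure is part of why this route costs more than a native complex solver.]

    #include <stdio.h>
    #include <stdlib.h>

    /* a, b: n*n row-major real and imaginary parts of H = A + iB.
     * Returns a newly allocated (2n)*(2n) row-major real matrix
     *     [  A   B ]
     *     [ -B   A ]  */
    static double *hermitian_to_real(const double *a, const double *b, int n)
    {
        int m = 2 * n;
        double *r = malloc((size_t)m * m * sizeof *r);
        if (!r) return NULL;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                r[i * m + j]             =  a[i * n + j];   /* top-left:     A */
                r[i * m + (j + n)]       =  b[i * n + j];   /* top-right:    B */
                r[(i + n) * m + j]       = -b[i * n + j];   /* bottom-left: -B */
                r[(i + n) * m + (j + n)] =  a[i * n + j];   /* bottom-right: A */
            }
        return r;
    }

    int main(void)
    {
        /* H = [[2, 1-i], [1+i, 3]]:  A = [[2,1],[1,3]],  B = [[0,-1],[1,0]] */
        double a[4] = {2, 1, 1, 3};
        double b[4] = {0, -1, 1, 0};
        double *r = hermitian_to_real(a, b, 2);
        if (!r) return 1;
        for (int i = 0; i < 4; i++, printf("\n"))
            for (int j = 0; j < 4; j++)
                printf("%6.1f", r[i * 4 + j]);
        free(r);
        return 0;
    }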

I don't know if the senders want to be publicly identified. If you're interested in contacting them, send me e-mail, and I'll ask them to contact you.


Our experience, and that of many others, is that it is best to get numerical software from reliable sources. The easiest and cheapest is Netlib, which includes the collected algorithms from ACM Transactions on Mathematical Software (which have all been refereed), and a great many other algorithms that have withstood the scrutiny of the peers of the authors, but in ways different from the formal journal refereeing process. The editor of this page has collected links to several other sources.
Compiled by W. Van Snyder

And for a response to some of the above, see the NR response.


Source: https://www.uwyo.edu/buerkle/misc/wnotnr.html
