freebsd-src/lib/msun/src/s_cbrtf.c

/* s_cbrtf.c -- float version of s_cbrt.c.
* Conversion to float by Ian Lance Taylor, Cygnus Support, ian@cygnus.com.
* Debugged and optimized by Bruce D. Evans.
*/
/*
* ====================================================
* Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved.
*
* Developed at SunPro, a Sun Microsystems, Inc. business.
* Permission to use, copy, modify, and distribute this
* software is freely granted, provided that this notice
* is preserved.
* ====================================================
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "math.h"
#include "math_private.h"
/* cbrtf(x)
* Return cube root of x
*/
static const unsigned
        B1 = 709958130, /* B1 = (127-127.0/3-0.03306235651)*2**23 */
        B2 = 642849266; /* B2 = (127-127.0/3-24/3-0.03306235651)*2**23 */
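/*
 * Explanatory sketch of the bit hack: dividing the raw IEEE-754 bit pattern
 * of a positive float by 3 roughly divides its exponent by 3, so hx/3+B1
 * gives about 5 correct bits of cbrt(x); B1 restores the exponent bias
 * (127 - 127/3, less a small tuning constant).  Subnormals are first scaled
 * by 2**24, and B2 = B1 - (24/3)*2**23 removes the resulting factor of
 * cbrt(2**24) = 2**8 from the exponent.
 */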
float
cbrtf(float x)
{
        double r,T;
        float t;
        int32_t hx;
        u_int32_t sign;
        u_int32_t high;

        GET_FLOAT_WORD(hx,x);
        sign=hx&0x80000000;             /* sign= sign(x) */
        hx  ^=sign;
        if(hx>=0x7f800000) return(x+x); /* cbrt(NaN,INF) is itself */
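        /* (the x+x form also quiets a signaling NaN before it is returned) */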
    /* rough cbrt to 5 bits */
        if(hx<0x00800000) {             /* zero or subnormal? */
            if(hx==0)
                return(x);              /* cbrt(+-0) is itself */
            SET_FLOAT_WORD(t,0x4b800000); /* set t= 2**24 */
            t*=x;
            GET_FLOAT_WORD(high,t);
            SET_FLOAT_WORD(t,sign|((high&0x7fffffff)/3+B2));
        } else
            SET_FLOAT_WORD(t,sign|(hx/3+B1));
    /*
     * First step Newton iteration (solving t*t-x/t == 0) to 16 bits.  In
     * double precision so that its terms can be arranged for efficiency
     * without causing overflow or underflow.
     */
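    /*
     * Derivation sketch: with r = t*t*t, one Newton step for
     * g(t) = t*t - x/t is
     *
     *     t_new = t - g(t)/g'(t) = t*(2*x + r)/(x + 2*r),
     *
     * which is the (x+x+r)/(x+r+r) grouping used below; each such step
     * roughly triples the number of correct bits.
     */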
        T=t;
        r=T*T*T;
        T=T*((double)x+x+r)/(x+r+r);
    /*
     * Second step Newton iteration to 47 bits.  In double precision for
     * efficiency and accuracy.
     */
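    /*
     * Starting from ~16 correct bits, this step leaves ~47 correct bits in
     * the double-precision T, comfortably more than the roughly 2*24 needed
     * for the final rounding to a 24-bit float to be correct.
     */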
        r=T*T*T;
        T=T*((double)x+x+r)/(x+r+r);
        /* rounding to 24 bits is perfect in round-to-nearest mode */
        return(T);
}
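/*
 * A minimal standalone sketch of how cbrtf() might be exercised, assuming a
 * hosted environment; TEST_CBRTF_STANDALONE is a hypothetical guard macro
 * and is not part of the original source.  cbrt() is used only as a
 * reference for eyeballing the results.
 */
#ifdef TEST_CBRTF_STANDALONE
#include <stdio.h>

int
main(void)
{
        /* Sample inputs: exact cubes, a negative value, and a subnormal. */
        float in[] = { 8.0F, 27.0F, -0.125F, 1e-40F };
        unsigned i;

        for (i = 0; i < sizeof(in) / sizeof(in[0]); i++)
                printf("cbrtf(%g) = %.9g\t(cbrt: %.17g)\n",
                    (double)in[i], (double)cbrtf(in[i]), cbrt((double)in[i]));
        return (0);
}
#endif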