### why -2.22044604925031e-16?

by Bruce Yoshiwara -

I found that I have a WeBWorK problem with the following lines

    $a2 = non_zero_random(-50,50,1);
    $a2 = $a2/10;
    $r2 = non_zero_random(-5,0,0.1); # negative is more interesting with inequality
    $b2 = $a2 - $r2;

and $d2 was displayed as

-2.22044604925031e-16

I had obviously decided to get step sizes of 0.1 for $a2 by dividing consecutive integers by 10, but why does the definition of $r2 work sometimes and not other times? That is, why does WeBWorK sometimes hit a value that is very close to, but not equal to, zero?

In reply to Bruce Yoshiwara
Thursday, 3 September 2009, 6:57 AM

### Re: why -2.22044604925031e-16?

by Jason Aubrey -
Hi Bruce,

Can you post the problem code and the seed value that is generating this value? Also, do you mean that $b2 has that value or $r2?

Jason


In reply to Jason Aubrey
Thursday, 3 September 2009, 2:52 PM

### Re: why -2.22044604925031e-16?

by Bruce Yoshiwara -
The weird value of $b2 occurred with seed 4642:

    $b1 = random(2,5,1);
    $r1 = random(1,9,1);
    $a1 = $b1 + $r1;
    $c1 = random(1,9,1);
    $ans1 = random(1,9,1);
    $temp = $r1 + $c1;
    $d1 = $temp * $ans1;

    $a2 = non_zero_random(-50,50,1);
    $a2 = $a2/10;
    $r2 = non_zero_random(-5,0,0.1); # negative is more interesting with inequality
    $b2 = $a2 - $r2;
    $c2 = random(-5,5,0.1);
    $ans2 = non_zero_random(-5,5,0.1);
    $d2 = $r2 * $ans2 + $c2;

    ######################################
    # Main text

    BEGIN_TEXT
    Solve each equation.
    $BR
    a. \[ $a1 y - $b1 y = $d1 - $c1 y \]
    $PAR
    \( y = \) \{ ans_rule(15) \}
    $PAR
    b. \[ $a2 w ? {$c2} = $b2 w ? {$d2} \]
    $PAR
    \( w = \) \{ ans_rule(15) \}
    $BR
    END_TEXT

    ########################

OK, just for fun I ran this again with $r2 inserted before the first equation, and it displayed as -1.2. So the -1.2 that was the value of $a2 (that is, -12/10) is different from the -1.2 that $r2 reached by stepping from -5 in steps of 0.1?
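The effect is reproducible outside WeBWorK. Here is a Python sketch (Python and Perl use the same IEEE-754 doubles; the expression -5 + 38*0.1 is an assumed way a 0.1-stepped value near -1.2 might be built, not something WeBWorK documents here):

```python
# Route 1: -12 divided by 10, as in $a2 = $a2/10
a2 = -12 / 10

# Route 2: stepping from -5 in steps of 0.1 (assumed form: -5 + k*0.1, k = 38)
r2 = -5 + 38 * 0.1

print(a2 == r2)  # False: the two values differ in the last bit
print(a2 - r2)   # a tiny nonzero residue on the order of 1e-16
```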


In reply to Bruce Yoshiwara
Thursday, 3 September 2009, 6:23 PM

### Re: why -2.22044604925031e-16?

by Davide Cervone -
Welcome to the wonderful world of floating-point arithmetic. You are correct that the two versions of -1.2 are slightly different. Such small errors are inherent in the computations performed by WeBWorK, which uses double-precision floating-point reals under the hood. These support about 16 significant digits, so this value, which is on the order of 10^-16, comes from the truncation and round-off errors in the computations that produced the two numbers in $a2 and $r2.
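The size of the stray value is itself a clue: it matches the double-precision machine epsilon, the gap between 1.0 and the next representable double. A quick check in Python, which uses the same IEEE-754 doubles as Perl:

```python
import sys

# Doubles carry 15-16 significant decimal digits:
print(sys.float_info.dig)      # 15 decimal digits are always round-trippable
print(sys.float_info.epsilon)  # 2.220446049250313e-16, the gap from 1.0 to the next double
```

Compare the epsilon to the -2.22044604925031e-16 in the problem: it is one such gap, printed with fewer digits.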


There is really no way to avoid these errors with this type of computation. For example, consider your 0.1, which is easily represented in decimal. But when converted to binary, it is a repeating "decimal" number, and since only a finite number of digits are stored, even a number like 0.1 cannot be represented exactly in binary. So computations that involve it may accumulate that error. Different computations will accumulate different errors, but as long as these stay in the least-significant digits, they are harmless. For instance, the routines that convert to decimal for printing usually round off the results so that the small errors don't show up.
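The exact value a double stores for 0.1 can be displayed with Python's Decimal (Python and Perl share the same IEEE-754 doubles):

```python
from decimal import Decimal

# The double closest to 0.1 -- not 0.1 itself:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# Each 0.1-sized step therefore carries a tiny error:
print(0.1 + 0.2 == 0.3)  # False
print(0.1 * 3)           # 0.30000000000000004
```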

Usually these errors do not cause problems, but if they move up from the least-significant digits into the more-significant digits, then they can become problematic. This can happen in several ways. One way is if a process is iterated many times and the errors accumulate and get bigger and bigger. That usually takes a long time, and doesn't usually affect WeBWorK computations.

The most important mechanism that moves these errors from the least-significant digits to the most-significant ones is called *subtractive cancellation*, and it occurs when two numbers that agree in their most significant digits are subtracted. For example, suppose you have computed pi in two ways and get 3.1415943 and 3.1415972, both fairly precise, each with 8 significant digits. The errors are in the least-significant digits, where they belong. If you end up subtracting the first from the second, you get .0000029, a number with only 2 significant digits, so a dramatic loss of precision has occurred. Moreover, the two digits you end up with are the "junk" digits of the originals, and so are completely unreliable.

This is the type of error that you are seeing here. There is really no way to avoid it, but you could add an extra check: if the absolute value of the result is very small, set it to 0 instead.
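Both the cancellation and the snap-to-zero check can be sketched in a few lines of Python (same IEEE-754 doubles as Perl; the expression -5 + 38*0.1 and the tolerance 1e-10 are illustrative assumptions, not WeBWorK internals):

```python
# Two 8-digit approximations of pi whose errors sit in the last digits:
p1 = 3.1415943
p2 = 3.1415972

diff = p2 - p1   # the leading digits cancel...
print(diff)      # ~2.9e-06: only the "junk" digits survive

# The suggested workaround: snap near-zero results to exactly 0.
EPS = 1e-10                  # tolerance is an arbitrary choice for illustration
b2 = -1.2 - (-5 + 38 * 0.1)  # a cancellation like $b2 = $a2 - $r2
if abs(b2) < EPS:
    b2 = 0
print(b2)  # 0
```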

Hope that clarifies things a bit.

Davide

In reply to Davide Cervone
Thursday, 3 September 2009, 7:45 PM

### Re: why -2.22044604925031e-16?

by Bruce Yoshiwara -
Thanks. I was afraid of that, but I'm relieved it's not just me being insane.