Added by: ~James, 2015 I 21

Hi Tomek,

Sorry, this is a really noob question, but I can't find the answer in the samples.

I am writing code for a finite-difference piece of software. I receive my values as doubles and there is nothing I can do about that. I sum the numbers several million times, so floating-point rounding errors are a big problem. Say I pass my numbers to my addition function, which looks like:

double highprecision(int numArgs, ...) {
    ttmath::Big<1,4> sum(0.0), num[10];  // note: room for at most 10 arguments
    double endsum;
    va_list args;

    va_start(args, numArgs);
    cout << endl << "High Precision Code" << endl;

    for (int i = 0; i < numArgs; i++) {
        num[i] = va_arg(args, double);
        sum += num[i];
        cout << "num[" << i << "] = " << num[i] << endl;
        cout << "high precision sum = " << sum << endl;
    }

    endsum = sum.ToDouble();
    cout << "normal precision sum = " << endsum << endl;

    va_end(args);
    return endsum;
}

If I call it from the main program with just two values, such as highprecision(2, 9560.0, 0.000001), then the output I get is:

num[0] = 9560.0

num[1] = 0.0000010000000000000000000634865345...

sum = 9560.0

high precision sum = 9560.0000010000000000000000000634865

normal precision sum = 9560.00000099 <- This is the loss of precision that I'm trying to avoid.

How can I prevent this from happening?

Thanks, James

Added by: tomek, 2015 I 21, Last modified: 2015 I 21

This is not possible when using binary floating-point numbers: http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems