Ohhhh, I think I have found the actual issue, and it might not be float-related at all.
Gimme a few minutes and I will get back with more details.
Edit: yup, not float-related; something fishy is going on with integer computations as soon as a call to clock() is involved.
Here's a small test program:
Code: Select all
#include <stdio.h>
#include <time.h>

void main()
{
    typedef unsigned long time_t;   /* local redefinition, shadows the library's */
    clock_t clockValue;
    time_t seconds, seconds_rounded, mod;
    int blob;

    blob = 100;
    printf("blob/100 : %d\n", blob/100);
    printf("blob/CLOCKS_PER_SEC : %d\n", blob/CLOCKS_PER_SEC);
    printf("\n");

    clockValue = 2;
    printf("clockValue : %d\n", clockValue);
    printf("clockValue*100 : %d\n", clockValue*100);
    printf("clockValue*100/100 : %d\n", clockValue*100/100);
    printf("\n");

    seconds = clockValue * 100;
    printf("seconds(=clockValue*100) : %d\n", seconds);
    printf("seconds/CLOCKS_PER_SEC : %d\n", seconds/CLOCKS_PER_SEC);
    printf("\n");

    clockValue = clock();
    printf("clockValue : %d\n", clockValue);
    printf("clockValue*100 : %d\n", clockValue*100);
    printf("clockValue*100/100 : %d\n", clockValue*100/100);
    printf("\n");

    blob = 300;
    printf("blob/100 : %d\n", blob/100);
    printf("blob/CLOCKS_PER_SEC : %d\n", blob/CLOCKS_PER_SEC);
    printf("\n");

    seconds = clockValue * 100;
    printf("seconds(=clockValue*100) : %d\n", seconds);
    printf("seconds/CLOCKS_PER_SEC : %d\n", seconds/CLOCKS_PER_SEC);
}
And here is its actual output:
Notice that, once clock() has been called, the computations involving the variable "seconds" are nonsensical.
However, the same computations involving the variable "blob" stay undisturbed, maybe because it is an "int" rather than an "unsigned long"?
Edit2: the "unsigned long" hypothesis is wrong, see below.
Here is a program which demonstrates more precisely that the problem comes from assigning the result of a call to clock() to a variable, and that it propagates to any other variable that value is assigned to:
Code: Select all
#include <stdio.h>
#include <time.h>

void main()
{
    //clock_t clockValue;
    unsigned long clockValue;
    unsigned long blob_l;

    clockValue = 2;
    blob_l = clockValue * 100;
    printf("blob_l : %d\n", blob_l);
    printf("blob_l/100 : %d\n", blob_l/100);
    printf("\n");

    clockValue = clock();
    printf("clockValue : %d\n", clockValue);
    printf("clockValue*100 : %d\n", clockValue*100);
    printf("clockValue*100/100 : %d\n", clockValue*100/100);
    printf("\n");

    blob_l = 2 * 100;
    printf("blob_l : %d\n", blob_l);
    printf("blob_l/100 : %d\n", blob_l/100);
    printf("\n");

    blob_l = clockValue * 100;
    printf("blob_l : %d\n", blob_l);
    printf("blob_l/100 : %d\n", blob_l/100);
    printf("\n");
}
And its nonsensical output:
As you can see, if blob_l is assigned a constant equal to clockValue * 100, the computations work just fine.
But if it is assigned clockValue*100 itself, with clockValue coming from clock(), then the computations on blob_l get messed up, even though blob_l's value is actually still correct.
I initially suspected that clock() was corrupting memory or something, but it modifies nothing besides the A and X registers, so I doubt that is the case.
My intuition is that the issue lies in how the compiler handles variable assignment. I have not (yet) looked at the generated assembly code, though.
Edit3: I just looked at the generated assembly code, and I cannot for the life of me figure out what is wrong. The code seems to assign the variables properly and call the proper mul and div functions, yet the result is wrong whenever the computation depends in some way on the result of clock()...
It is late and I am tired so I will stop there for now.