For performance reasons, I'm currently porting some C code to assembly and I've hit a problem I can't understand.
Perhaps someone can help me.
I'm using the OSDK; the code is written in C, and I use asm("...") blocks to locally replace C code with assembly.
I've got the following variables defined in zero page in a .s file:
Code:
_dda3StartValue .dsb 1
_dda3EndValue .dsb 1
_dda3NbVal .dsb 1
_dda3Increment .dsb 1
On the C side, they are declared as:
Code:
extern unsigned char dda3StartValue;
extern unsigned char dda3EndValue;
extern unsigned char dda3NbVal;
extern signed char dda3Increment;
The C code I want to replace is:
Code:
if (dda3EndValue > dda3StartValue) {
dda3NbVal = dda3EndValue-dda3StartValue;
dda3Increment = 1;
} else {
dda3NbVal = dda3StartValue-dda3EndValue;
dda3Increment = -1;
}
And here is my assembly translation:
Code:
.(
lda _dda3StartValue:
cmp _dda3EndValue:
bcs else:
lda _dda3EndValue: sec: sbc _dda3StartValue: sta _dda3NbVal:
lda #1: sta _dda3Increment:
jmp endif
else
lda _dda3StartValue: sec: sbc _dda3EndValue: sta _dda3NbVal:
lda #$FF: sta _dda3Increment:
endif
.)
What's even stranger is that I used the same translation of the same algorithm with a different set of variables (dda1 instead of dda3), and that one works fine.
Is it possible that the bcs branch crosses a page boundary and that this causes the issue? I really can't understand this phenomenon.
NB: I know there's a possible optimisation in the else branch by avoiding the re-read of _dda3StartValue, but that's not the topic here. I just need to understand what makes the behaviour differ between the C and the assembly.
Thanks