Image mapping : Peer review needed

The Oric video chip is not an easy beast to master, so any trick or method that helps achieve nice visual results is welcome. Don't hesitate to comment (nicely) on other people's tricks and pictures :)
jbperin
Flying Officer
Posts: 165
Joined: Wed Nov 06, 2019 11:00 am
Location: Valence, France

Re: Image mapping : Peer review needed

Post by jbperin »

ThomH wrote:
Mon Sep 21, 2020 10:21 pm
the classic serial implementation for affine texturing a polygon is just two adds per pixel. In pseudo-C:

Code: Select all

for(x = start ... end) {
    pixel[x][y] = pixel[texture_x][texture_y];
    texture_x += offset_x;
    texture_y += offset_y;
}
... with appropriate modifications to fixed point. You can set up offset_x/y at the start of each scan line either by tracking them along each edge and doing a divide, or by plane or barycentric calculations.
Thank you ThomH for reading and for your comment.

For now I've managed to put the calculation into the following form:

Code: Select all

for(x = start ... end) {
    pixel[x][y] = pixel[texture_x][texture_y];
    texture_x = K * (y-y0) - R * (x-x0);
    texture_y = S * (x-x0) - T * (y-y0);
}
    with (x0, y0) the screen coordinates of the texture's reference point,
    and K, R, S and T values computed with trigonometry.

Which is similar to what you describe because:
  • Along a given scanline, the terms (y-y0) don't vary, so they can be treated as constants and the calculation can be written like this:

    Code: Select all

        texture_x = C_1 - R * (x-x0);
        texture_y = S * (x-x0) - C_2;
    with C_1 = K * (y-y0) and C_2 = T * (y-y0)
  • As for the terms (x-x0), they only increase by one from one pixel to the next.
    So, considering the nth pixel in the scanline, its texture coordinates are:

    Code: Select all

        texture_x[n] = C_1 - R * (x-x0);
        texture_y[n] = S * (x-x0) - C_2;
    and, considering its immediate neighbour to the right:

    Code: Select all

        texture_x[n+1] = C_1 - R * ((x+1)-x0) = C_1 - R*(x-x0) -R = texture_x[n] - R
        texture_y[n+1] = S * ((x+1)-x0) - C_2 = S*(x-x0) - C_2 + S = texture_y[n] + S
    
So it leads to the algorithm you've given:

Code: Select all

    texture_x[n+1] = texture_x[n] - R
    texture_y[n+1] = texture_y[n] + S
The problem is that R and S are numbers like 2.6 or 3.4 (with a non-negligible fractional part) that I can't just round after a division.
I have R and S in the form Ra/Rb and Sa/Sb, with Rb < Ra and Sb < Sa.


Either I do the division and work with Q8.8 fixed-point values to keep precision and accuracy:

Code: Select all

R = Ra / Rb 
S = Sa / Sb
for(x = start ... end) {
    pixel[x][y] = pixel[texture_x][texture_y];
    texture_x -= R;
    texture_y += S;
}
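For reference, sketched in C, that first option would be something like this (texture_lookup() and put_pixel() are hypothetical stand-ins for the real texture/screen access, and the Q8.8 layout is my assumption):

Code: Select all

#include <stdint.h>

/* Q8.8 fixed point: high byte = integer part, low byte = fraction. */
#define FROM_Q8_8(q)  ((int8_t)((q) >> 8))

extern uint8_t texture_lookup(int8_t tx, int8_t ty);
extern void    put_pixel(int x, int y, uint8_t colour);

void draw_scanline_q88(int y, int x_start, int x_end,
                       int16_t tx, int16_t ty,   /* Q8.8 texture coords at x_start */
                       int16_t Ra, int16_t Rb,   /* R = Ra / Rb */
                       int16_t Sa, int16_t Sb)   /* S = Sa / Sb */
{
    /* One 16-bit division per axis at the start of the scanline... */
    int16_t R = (int16_t)((((int32_t)Ra) << 8) / Rb);   /* R in Q8.8 */
    int16_t S = (int16_t)((((int32_t)Sa) << 8) / Sb);   /* S in Q8.8 */

    for (int x = x_start; x < x_end; ++x) {
        put_pixel(x, y, texture_lookup(FROM_Q8_8(tx), FROM_Q8_8(ty)));
        /* ...then only two 16-bit additions per pixel. */
        tx -= R;
        ty += S;
    }
}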
Or I use an algorithm like the following one to keep the fractional representation:

Code: Select all

for(x = start ... end) {
    pixel[x][y] = pixel[texture_x][texture_y];

    texture_x += -Rb;
    ErrorX = Ra - texture_x;
    if (abs(ErrorX * 2) > Rb)
        texture_x -= Ra;

    texture_y += Sb;
    ErrorY = Sa - texture_y;
    if (abs(ErrorY * 2) > Sb)
        texture_y -= Sa;
}
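For comparison, the more usual Bresenham-style way of stepping by Ra/Rb and Sa/Sb with additions only would look like this in C. It is a sketch with the same hypothetical helpers, and not necessarily exactly equivalent to my pseudo-code above:

Code: Select all

#include <stdint.h>

extern uint8_t texture_lookup(int8_t tx, int8_t ty);
extern void    put_pixel(int x, int y, uint8_t colour);

/* Step texture_x by -Ra/Rb and texture_y by +Sa/Sb per pixel using only
   additions and comparisons (assumes Ra > Rb > 0 and Sa > Sb > 0 as above). */
void draw_scanline_frac(int y, int x_start, int x_end,
                        int8_t tx, int8_t ty,
                        uint8_t Ra, uint8_t Rb,
                        uint8_t Sa, uint8_t Sb)
{
    uint16_t err_x = Rb / 2;   /* start mid-way so the steps round rather than truncate */
    uint16_t err_y = Sb / 2;

    for (int x = x_start; x < x_end; ++x) {
        put_pixel(x, y, texture_lookup(tx, ty));

        err_x += Ra;               /* accumulate the numerator...             */
        while (err_x >= Rb) {      /* ...every Rb of error is one whole texel */
            tx -= 1;
            err_x -= Rb;
        }

        err_y += Sa;
        while (err_y >= Sb) {
            ty += 1;
            err_y -= Sb;
        }
    }
}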
In the first case, I have a 16-bit division (at the beginning of the scanline) followed by two 16-bit additions for each pixel.
In the second case, I have no division, but three or four 8-bit additions for each pixel.

So it's hard to figure out which is more efficient.

For now I have only prototyped in Python the formulae given at the beginning of this post (i.e. I haven't integrated the iterative formulation of the texture_x/texture_y calculation).

I use this simple texture file:
[attached image: the test texture, a letter "A" inside a circle]

When I do a projection onto a flat 3D surface (it's not a real perspective fill... it's just image stretching) using floating-point arithmetic, it gives the following result (I draw the triangles to show the shape onto which I map the image):
ImageMapFloat.JPG
And when I use 8-bit integer calculations (with lookup tables for the Euclidean norm, arctangent, sine and cosine), it gives:
ImageMapInteger.JPG
It doesn't look too dirty, but the last steps of the calculation (the ones that compute R = Ra/Rb and S = Sa/Sb) are still done with floating-point values and are not incremental. So it might get a bit dirtier when I deal with that.

What are plane or barycentric calculations?

sam
Officer Cadet
Posts: 54
Joined: Sun Jul 09, 2017 3:28 pm
Location: Brest (France)
Contact:

Re: Image mapping : Peer review needed

Post by sam »

It looks like something is wrong with the rendering. The circle surrounding the A is not convex when projected (look at the upper right quadrant: it points inward when projected). A correct projection should preserve the convexity of shapes and the "straightness" of segments (look at the right side of the letter A: it isn't straight after projection). You should try to project a (possibly rotated) checkerboard to clearly see the disturbing geometric aberrations this simplified projection creates.

And maybe this: in the original source the image is centered, but it doesn't look centered with respect to the bounding box when projected. It looks to me a bit shifted to the left, that is, closer to the viewer than it needs to be. One can also see this by comparing the space between the drawing and the left or right edge of the bounding box: they both look the same. However, since the right part of the projection is further away, the remaining space on that side should look "smaller" than the space on the left. I'm not sure I'm explaining this correctly... but something looks weird about the left/right space around the projected drawing and the bounding box.

my 2 cents.

sam.

jbperin
Flying Officer
Posts: 165
Joined: Wed Nov 06, 2019 11:00 am
Location: Valence, France

Re: Image mapping : Peer review needed

Post by jbperin »

sam wrote:
Tue Sep 22, 2020 6:15 pm
It looks like something is wrong.
You should try to project a (possibly rotated) checkerboard
sam.
Oops, indeed... :shock: :x :x
Failure.JPG
It seems the algorithm needs a fix... Thank you for your remark.

ThomH
Flying Officer
Posts: 222
Joined: Thu Oct 13, 2016 9:55 pm

Re: Image mapping : Peer review needed

Post by ThomH »

No, it's not perspective correct — it's an affine rendering. So exactly what you'd see on an original Playstation. As per discussion above, the problem is that texture (x, y) aren't linear in screen space; (x/z, y/z, 1/z) are but then you need at least one divide per pixel to get back to regular (x, y).
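
Roughly, a perspective-correct span then looks like this (floating point just to show the structure; the names and helpers are made up):

Code: Select all

#include <stdint.h>

extern uint8_t texture_lookup(int tx, int ty);
extern void    put_pixel(int x, int y, uint8_t colour);

/* Perspective-correct texturing of one scanline: u/z, v/z and 1/z are
   linear in screen x, so they are stepped with additions, but getting
   back to (u, v) costs a divide per pixel. */
void textured_span(int y, int x_start, int x_end,
                   float u_over_z, float v_over_z, float one_over_z,      /* values at x_start   */
                   float du_over_z, float dv_over_z, float done_over_z)   /* per-pixel increments */
{
    for (int x = x_start; x < x_end; ++x) {
        float z = 1.0f / one_over_z;            /* the unavoidable divide */
        put_pixel(x, y, texture_lookup((int)(u_over_z * z), (int)(v_over_z * z)));

        u_over_z   += du_over_z;
        v_over_z   += dv_over_z;
        one_over_z += done_over_z;
    }
}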

Old DOS titles tend to divide only every e.g. 16 pixels, and then linearly interpolate in between. That's very obvious in Descent, slightly less so in Quake. Later, more enlightened Playstation titles subdivide entire polygons. Subdivision was Sony's answer to just about everything on the PS1, including clipping, so there was direct library support for that.
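
Sketched with the same made-up helpers (SUBSPAN chosen arbitrarily), the divide-every-N-pixels compromise is something like:

Code: Select all

#include <stdint.h>

#define SUBSPAN 16   /* divide once per 16 pixels, interpolate in between */

extern uint8_t texture_lookup(int tx, int ty);
extern void    put_pixel(int x, int y, uint8_t colour);

void textured_span_subdivided(int y, int x_start, int x_end,
                              float u_over_z, float v_over_z, float one_over_z,
                              float du_over_z, float dv_over_z, float done_over_z)
{
    float u0 = u_over_z / one_over_z;
    float v0 = v_over_z / one_over_z;

    for (int x = x_start; x < x_end; x += SUBSPAN) {
        int len = (x_end - x < SUBSPAN) ? (x_end - x) : SUBSPAN;

        /* One perspective divide at the far end of this sub-span... */
        float uz1 = u_over_z + du_over_z * len;
        float vz1 = v_over_z + dv_over_z * len;
        float oz1 = one_over_z + done_over_z * len;
        float z1  = 1.0f / oz1;
        float u1  = uz1 * z1;
        float v1  = vz1 * z1;

        /* ...and plain affine stepping inside it (len is normally a power
           of two, so this divide becomes a shift in fixed point). */
        float du = (u1 - u0) / len;
        float dv = (v1 - v0) / len;
        float u = u0, v = v0;
        for (int i = 0; i < len; ++i) {
            put_pixel(x + i, y, texture_lookup((int)u, (int)v));
            u += du;
            v += dv;
        }

        u0 = u1;  v0 = v1;
        u_over_z = uz1;  v_over_z = vz1;  one_over_z = oz1;
    }
}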

Alternatively, go the Wolfenstein/Doom way and avoid divides entirely by just drawing walls in vertical strips and floors in horizontal. In both cases you're stepping along a line where z is constant. So, again, no divides. As long as you do the work at the edges correctly.
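
For the floor case that boils down to something like this sketch (made-up helpers again; the point is that z is constant along the row, so du and dv are constant and the affine stepping is exact):

Code: Select all

#include <stdint.h>

extern uint8_t floor_texture(int tx, int ty);
extern void    put_pixel(int x, int y, uint8_t colour);

/* Doom-style floor span: along one screen row the visible floor lies at a
   constant distance, so du/dv are fixed for the whole row and there are
   no divides in the loop. */
void floor_span(int y, int x_start, int x_end,
                float u, float v,      /* texture coords at x_start          */
                float du, float dv)    /* per-pixel steps, fixed for the row */
{
    for (int x = x_start; x < x_end; ++x) {
        put_pixel(x, y, floor_texture((int)u, (int)v));
        u += du;
        v += dv;
    }
}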

Dbug
Site Admin
Posts: 3461
Joined: Fri Jan 06, 2006 10:00 pm
Location: Oslo, Norway
Contact:

Re: Image mapping : Peer review needed

Post by Dbug »

What Thom wrote.

Basically, in the specific case of a labyrinth where the walls are always vertical, there are plenty of optimizations that can be done which solve the specific issue you have (and which would not work at all in a 6DOF* game like Descent). And if you choose to have the eye (camera) at the centre of the walls, you only need to rasterize one slope (top or bottom), which gives you the other one by symmetry, as well as the total height of the vertical segment, and you can use a precomputed table of vertical increments to move through your source texture :)

* 6 degrees of freedom
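
As a rough sketch of that idea (the screen height, texture height and helpers below are made-up assumptions, not actual Oric values):

Code: Select all

#include <stdint.h>

#define SCREEN_H  200            /* assumed screen height       */
#define TEX_H      32            /* assumed wall texture height */
#define MID       (SCREEN_H / 2)

extern uint8_t wall_texture(int tex_x, int tex_y);
extern void    put_pixel(int x, int y, uint8_t colour);

/* Q8.8 texture increment per screen pixel for every possible on-screen
   wall height, built once at startup, so no divides while drawing. */
static uint16_t v_step[SCREEN_H + 1];

void init_v_step(void)
{
    for (int h = 1; h <= SCREEN_H; ++h)
        v_step[h] = (uint16_t)((TEX_H << 8) / h);
}

/* One wall column: with the eye at mid-wall height the column is symmetric
   around MID, so its height fixes both ends, and the texture increment
   comes straight from the precomputed table. */
void wall_column(int x, int height, int tex_x)
{
    if (height < 1) return;
    if (height > SCREEN_H) height = SCREEN_H;

    int top       = MID - height / 2;
    uint16_t v    = 0;                 /* Q8.8 position in the texture */
    uint16_t step = v_step[height];

    for (int i = 0; i < height; ++i) {
        put_pixel(x, top + i, wall_texture(tex_x, v >> 8));
        v += step;
    }
}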
