Monday, June 21, 2021

Field Of View

Computer monitors come in different aspect ratios. From the old school 4:3 to the ridiculous 32:9.

In my 3D games, I have to choose a Field of View for the camera: a wide-angle or a narrow-angle view? The convention is to set a fixed vertical field of view, and then adopt the horizontal field of view that falls out of it, based on the image aspect ratio. This is what I have used for all my 3D games: set the vertical F.o.V.
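In other words, with a fixed vertical angle, the horizontal angle simply follows from the aspect ratio:

    tan( hfov / 2 ) = ( w / h ) · tan( vfov / 2 )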

Fixing the Vertical Field of View means that users on 4:3 monitors see less of the world, and users on ultra-wide screens see more of it. This could give an unfair advantage.

An additional problem is that ultra-wide aspect ratios lead to ultra-wide-angle views, which lead to a lot of distorted shapes in the corners of the screen.

But fixing the Horizontal Field of View instead would mean that users on wide screens see less of the world (at the top and bottom of the screen.) This made me wonder... what if I were to fix the angle for the diagonal field instead? Fixed for all users. And then let the Horizontal/Vertical angles fall out of this.

Definitely worth an experiment! So let's do some trigonometry!

Problem: given a fixed angle for the diagonal field of view... what are the corresponding Vertical and Horizontal fields of view?

Given the fixed diagonal field of view angle α, the image height h, and the image diagonal d, we need merely two equations to derive the vertical field of view angle β.
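Spelled out, with m for the distance from the eye to the projection plane, those two equations are:

    tan( α / 2 ) = ( d / 2 ) / m      →      m = d / ( 2·tan( α / 2 ) )
    tan( β / 2 ) = ( h / 2 ) / m      →      β = 2·atan( h / ( 2·m ) )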

And with that angle β we can set up our projection matrix the usual way.
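For reference, "the usual way" boils down to a gluPerspective-style projection built from the vertical angle and the aspect ratio. A sketch (column-major, as OpenGL expects; the function name and signature are mine):

// Build a standard perspective projection from a vertical field of view
// angle (radians), an aspect ratio (w/h) and near/far planes.
// Column-major layout, equivalent to what gluPerspective produces.
void perspective_from_vertical_fov( float fovy, float aspect, float znear, float zfar, float m[16] )
{
	const float f = 1.0f / tanf( fovy / 2 );
	for ( int i = 0; i < 16; ++i )
		m[i] = 0.0f;
	m[ 0] = f / aspect;
	m[ 5] = f;
	m[10] = ( zfar + znear ) / ( znear - zfar );
	m[11] = -1.0f;
	m[14] = ( 2 * zfar * znear ) / ( znear - zfar );
}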

When we do all this, we go from extremist views (with fixed vertical FoV) like these:

...to more moderate views (with fixed diagonal FoV) like the ones below. Note that the view angles change less than in the case above, between aspect ratios. The narrow aspect ratio does not show that much less on left/right, and for what it loses on the sides, it wins a little on top/bottom. I think this approach is a keeper!

For completeness, the code, which is trivial:
// Given a diagonal field of view angle 'a' (in radians) and the image
// dimensions w x h, return the corresponding vertical field of view angle.
float vertical_fov_from_diagonal_fov( float a, float w, float h )
{
        const float d = sqrtf(w*w + h*h);               // image diagonal.
        const float m = d / ( 2 * tanf( a / 2 ) );      // distance from eye to projection plane.

        const float b = 2 * atanf( h / (2*m) );         // vertical field of view angle.
        return b;
}
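As an illustration (numbers rounded), with a fixed 100° diagonal a 16:9 screen ends up with a much more moderate vertical angle than it would lose going to 32:9:

const float diag_fov = 100.0f * 3.14159265f / 180.0f;                  // fixed diagonal FoV of 100 degrees.
float fovy_169 = vertical_fov_from_diagonal_fov( diag_fov, 16.0f, 9.0f );   // roughly 61 degrees.
float fovy_329 = vertical_fov_from_diagonal_fov( diag_fov, 32.0f, 9.0f );   // roughly 36 degrees.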
  

Monday, June 14, 2021

Fire in OpenGL

(Just 120 particles; it looks better at 60Hz and without GIF artifacts.)

This is a quick 'n dirty approach for getting a fire visualization using OpenGL.

Fire blends somewhat with its background, so we will be using blending. But unlike regular transparency, fire works a little differently: it only adds light to your framebuffer, never removes it. For this, the blend mode GL_ONE, GL_ONE is perfect!
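Setting that up is only a couple of GL calls. A minimal sketch (state management around it is left out):

glEnable( GL_BLEND );
glBlendFunc( GL_ONE, GL_ONE );	// additive: fragments only add light to the framebuffer.
glDepthMask( GL_FALSE );	// typically you also stop writing depth for blended particles.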

Ok, now we know how to draw it, which leads to the question of what to draw. I find that the majority of approaches you find on the web go for textured billboards. But I think it is better to go with un-textured geometry instead. By adding some shape to the geometry that you draw, you can save on the number of particles you need to draw. I opted for a spiky kind of vortex, as shown below.

From the picture above you can see there is already an inherent counter-clockwise rotation in the model, which we should replicate in the animation. So as we draw each particle, we should keep rotating the billboard we draw for it. Of course, the billboard needs to be oriented to the camera as well! Which means we have to do a little math in the vertex shader.

#version 150

in mediump vec4 position;	// per vert: vertex position.

in mediump vec4 bdpos;		// per inst: bill board pos.
in mediump vec4 rgb;		// per inst: colour.
in mediump float scale;		// per inst: scale.
in mediump float angle;		// per inst: angle.

out lowp vec3 colour;		// to fragment shader: colour.

uniform highp mat4 modelcamviewprojmat;
uniform highp mat4 camtrf;

void main()
{
	vec4 camx   = camtrf[0];
	vec4 campos = camtrf[3];
	highp vec4 scaledpos;
	scaledpos.xyz = position.xyz * scale;
	scaledpos.w = 1.0;
	vec3 z = bdpos.xyz - campos.xyz;
	z = normalize(z);
	vec3 camy = camtrf[1].xyz;
	vec3 xx = cross( camy, z );
	vec3 yy = cross( xx, z );
	vec3 x = xx *  cos(angle) + yy * sin(angle);
	vec3 y = xx * -sin(angle) + yy * cos(angle);
	mat4 trf = mat4(1.0);
	trf[0].xyz = x;
	trf[1].xyz = y;
	trf[2].xyz = z;
	trf[3] = bdpos;
	highp vec4 tpos = trf * scaledpos;
	colour = rgb.rgb;
	gl_Position  = modelcamviewprojmat * tpos;
}
  

Let us break down that vertex shader. Like any vertex shader for 3D graphics, it transforms a vertex (position) using a modelviewprojection matrix; no surprises there.

All the other inputs for this shader are per-instance attributes, not per-vertex attributes. So you need to call glVertexAttribDivisor() for them, with value 1.

Every particle billboard has a position (bdpos,) a colour (rgb,) a size (scale,) and a rotation (angle.)
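As a rough sketch (the instance struct, buffer names and draw call below are assumptions for illustration, not the actual project code), the CPU-side setup could look like this:

/* Hypothetical per-instance layout; one entry per particle. offsetof comes from <stddef.h>. */
typedef struct { float bdpos[4]; float rgb[4]; float scale; float angle; } inst_t;

GLint loc_bdpos = glGetAttribLocation( program, "bdpos" );
GLint loc_rgb   = glGetAttribLocation( program, "rgb"   );
GLint loc_scale = glGetAttribLocation( program, "scale" );
GLint loc_angle = glGetAttribLocation( program, "angle" );

glBindBuffer( GL_ARRAY_BUFFER, instance_vbo );

glEnableVertexAttribArray( loc_bdpos );
glVertexAttribPointer( loc_bdpos, 4, GL_FLOAT, GL_FALSE, sizeof(inst_t), (void*) offsetof(inst_t, bdpos) );
glVertexAttribDivisor( loc_bdpos, 1 );		// advance once per instance, not per vertex.

glEnableVertexAttribArray( loc_rgb );
glVertexAttribPointer( loc_rgb, 4, GL_FLOAT, GL_FALSE, sizeof(inst_t), (void*) offsetof(inst_t, rgb) );
glVertexAttribDivisor( loc_rgb, 1 );

glEnableVertexAttribArray( loc_scale );
glVertexAttribPointer( loc_scale, 1, GL_FLOAT, GL_FALSE, sizeof(inst_t), (void*) offsetof(inst_t, scale) );
glVertexAttribDivisor( loc_scale, 1 );

glEnableVertexAttribArray( loc_angle );
glVertexAttribPointer( loc_angle, 1, GL_FLOAT, GL_FALSE, sizeof(inst_t), (void*) offsetof(inst_t, angle) );
glVertexAttribDivisor( loc_angle, 1 );

/* One draw call renders every particle as an instance of the vortex mesh. */
glDrawArraysInstanced( GL_TRIANGLES, 0, num_mesh_verts, num_particles );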

The scale we just apply to the model's vertex position before we transform the vertex to clip-space. And the colour is passed on to the fragment shader as-is.

Note that we don't just pass the modelviewprojection matrix, but also a second matrix: the camera transformation (camtrf). This is the camera-to-world transform, i.e. the view matrix before it was inverted. We need it so that we can reconstruct a proper orientation for our billboard. We create a model transform for the billboard with its Z axis along the view direction, from camera to billboard. This is basically (xx, yy, z, bdpos). But we skew the xx, yy axes by the particle's rotation angle, so that we end up with the (x, y, z, bdpos) transform. To skew them, the new x, y are both just a linear combination of the old xx, yy, with cos/sin factors.

Once we have applied this per-particle transformation to the (scaled) vertex position, we get our transformed position (tpos.) That can finally be transformed with the modelviewproj matrix to give us our final result.

That covers the GPU-side of things. But we still need to fill in the proper rgb/scale/angle values on the CPU-side.

The scale is easy: because gas disperses as it burns, we just have to make the particle grow in size as it ages. The angle is easy too: I just apply a rotational velocity that scales with the linear velocity of the particle, always in the same direction (counter clockwise.)

That leaves us with the colours. We use GL_ONE, GL_ONE, and it is best to use low colour values, so that sharp boundaries between particle and no particle are not too obvious. Also, we should slowly fade out our particles: as the particle ages, I make it go fainter! And for the chromatic transitions, just make the particle go from whitish, to yellowish, to reddish, and you should be good.
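As a sketch of that CPU-side update (all the constants and the fade curve here are made up for illustration, not taken from the actual project):

// Per-particle state, updated every frame. Needs <math.h> for sqrtf/fmaxf.
typedef struct
{
	float pos[3];		// world position (fed to bdpos.)
	float vel[3];		// linear velocity.
	float age;		// seconds since the particle was spawned.
	float scale;		// fed to the 'scale' attribute.
	float angle;		// fed to the 'angle' attribute.
	float rgb[3];		// fed to the 'rgb' attribute.
} particle_t;

void update_particle( particle_t* p, float dt )
{
	p->age += dt;
	for ( int i = 0; i < 3; ++i )
		p->pos[i] += p->vel[i] * dt;
	// Gas disperses as it burns: grow with age.
	p->scale += 0.6f * dt;
	// Rotational velocity scales with linear velocity, always in the same direction.
	const float speed = sqrtf( p->vel[0]*p->vel[0] + p->vel[1]*p->vel[1] + p->vel[2]*p->vel[2] );
	p->angle += 2.0f * speed * dt;
	// Fade out over roughly a second, going whitish -> yellowish -> reddish:
	// green drops off faster than red, and blue faster still. Low values suit GL_ONE, GL_ONE.
	const float fade = fmaxf( 0.0f, 1.0f - p->age );
	p->rgb[0] = 0.30f * fade;
	p->rgb[1] = 0.25f * fade * fade;
	p->rgb[2] = 0.20f * fade * fade * fade;
}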

I am using this approach in my project for Real Time Global Illumination, where I use the particles as light sources.