Tuesday, September 21, 2021

LED Bars

Two LED bar graphs. You would think they would be much alike, but no.

On the left: LTA-1000Y (2017 Thailand.) On the right: FJB10Y.

The polarity is reversed. The LTA has the anode on the side of the first pin. The FJB has the cathode on the side of the first pin.

The obvious difference: face colour grey vs face colour black.

The light colour is similar, but the brightness differs a lot. I need to use a 470 ohm resistor with the LTA, and a 2K ohm resistor with the FJB.

Monday, September 20, 2021

Arduino

Last year, I did a project with the Arduino Micro kit, which is built around an ATmega32U4 microcontroller.

With it, I built a USB input device, with knobs, to tune my noise fields for Ring Miner.

In that project, I was simultaneously writing USB HID data, reading the knobs, and writing to OLED displays. To avoid stalls, I had to modify the standard library so I could skip writing USB data when there was no space available in the buffer, which is what happens when the PC is not reading the USB HID device.

I have recently picked up the Arduino again, this time to see if I can drive LED bar graphs with it. I noticed that out of the box, Ubuntu's Arduino IDE will not run; you need to patch a library first.

I have been learning how to drive LEDs with a constant-current shift register. I think I will end up using the TLC5917IN chip. Unfortunately, it is only 8 bits wide; I would have preferred 10, but it was available in stock, and affordable.
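Driving it from the Arduino should come down to bit-banging its serial interface. Below is an untested sketch of how I expect that to look; the pin numbers are arbitrary assumptions, and the SDI/CLK/LE/OE roles are as I read them from the datasheet.

// Untested sketch: bit-banging one TLC5917. Pin numbers are arbitrary assumptions.
const int PIN_SDI = 2;   // serial data in.
const int PIN_CLK = 3;   // shift clock.
const int PIN_LE  = 4;   // latch enable.
const int PIN_OE  = 5;   // output enable (active low.)

void setup()
{
        pinMode( PIN_SDI, OUTPUT );
        pinMode( PIN_CLK, OUTPUT );
        pinMode( PIN_LE,  OUTPUT );
        pinMode( PIN_OE,  OUTPUT );
        digitalWrite( PIN_OE, LOW );   // enable the constant-current outputs.
}

void loop()
{
        // Light the bottom five segments of the bar graph.
        shiftOut( PIN_SDI, PIN_CLK, MSBFIRST, 0b00011111 );
        digitalWrite( PIN_LE, HIGH );  // latch the shifted bits onto the outputs.
        digitalWrite( PIN_LE, LOW );
        delay( 100 );
}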

The simplicity of microcontrollers is refreshing in this age of super complexity. Close to the metal, and straightforward.

When coding for microcontrollers, use the static const PROGMEM qualifiers for data that you want to reside in ROM (flash) instead of RAM.
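For example, on AVR-based boards a small lookup table can live in flash like this (a minimal sketch; the table contents and names are made up):

#include <avr/pgmspace.h>

// An 8-entry brightness table, stored in flash instead of SRAM.
static const uint8_t bright_table[8] PROGMEM = { 0, 1, 2, 4, 8, 16, 32, 64 };

uint8_t read_brightness( uint8_t i )
{
        // Data in program memory must be fetched explicitly.
        return pgm_read_byte( &bright_table[i] );
}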

Sunday, July 4, 2021

Catching a fall (procedurally.)

I have been doing a lot of procedural animation, lately. Making a humanoid walk, and swing a torch at spiders.

I also needed a proper way to catch a fall, with the legs. Because I already had the Inverse Kinematics sorted, I just needed to shift the pelvis down, and up, after a fall.

To do this convincingly, I need to drop down hard at the start, then ease at the nadir, and ease again on the way back to the zenith. For this, piecewise functions come to the rescue. I ended up using this:
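(The original snippet is not reproduced here, but a piecewise curve with that shape could be sketched as follows: a parabola that drops fast and flattens out at the nadir, followed by a smoothstep back up to the zenith. This is a reconstruction of the idea, not necessarily the exact function I used.)

// t runs from 0 to 1 over the recovery; returns the pelvis offset:
// 0 at the start, -1 at the nadir (t=0.5), back to 0 at the end.
float pelvis_offset( float t )
{
        if ( t < 0.5f )
        {
                const float u = t / 0.5f;                       // 0..1 over the first half.
                return -( 1.0f - (1.0f-u)*(1.0f-u) );           // hard drop, easing into the nadir.
        }
        else
        {
                const float u = ( t - 0.5f ) / 0.5f;            // 0..1 over the second half.
                const float s = u*u*( 3.0f - 2.0f*u );          // smoothstep: ease out of the nadir, ease into the zenith.
                return -( 1.0f - s );
        }
}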

Monday, June 21, 2021

Field Of View

Computer monitors come in different aspect ratios. From the old school 4:3 to the ridiculous 32:9.

In my 3D games, I have to choose a Field-Of-View for the camera. Wide angle, or narrow angle views? The convention is to set a fixed vertical field of view, and then adopt the horizontal field of view that falls out of it, based on the image aspect ratio. This is what I have used for all my 3D games: a fixed vertical F.o.V.
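For reference, the horizontal angle that falls out of a fixed vertical angle, for a given aspect ratio, is:

        horizontal = 2 * atan( tan( vertical / 2 ) * width / height )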

Fixing the Vertical Field of View means that users on 4:3 monitors see less of the world, and users on ultra wide screens see more of the world. This could give some players an unfair advantage.

An additional problem is that ultra wide aspect ratios lead to ultra wide angle views, which lead to a lot of distorted shapes in the corners of the screen.

But fixing the Horizontal Field of View instead would mean that users on wide screens see less of the world (at the top and bottom of the screen.) This made me wonder... what if I were to fix the angle for the diagonal field instead? Fixed for all users. And then let the Horizontal/Vertical angles fall out of this.

Definitely worth an experiment! So let's do some trigonometry!

Problem: given a fixed angle for the diagonal field of view... what are the corresponding Vertical and Horizontal fields of view?

Given the fixed diagonal field of view angle α, a given image height h, and image diagonal d, we need merely two equations to derive the vertical field of view angle β.
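Sketching those two equations (consistent with the code further below), with m the distance from the eye to the image plane:

        m = d / ( 2 * tan( α / 2 ) )
        β = 2 * atan( h / ( 2 * m ) )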

And with that angle β we can set up our projection matrix the usual way.

When we do all this, we go from extremist views (with fixed vertical FoV) like these:

...to more moderate views (with fixed diagonal FoV) like the ones below. Note that the view angles change less than in the case above, between aspect ratios. The narrow aspect ratio does not show that much less on left/right, and for what it loses on the sides, it wins a little on top/bottom. I think this approach is a keeper!

For completeness, the code, which is trivial:
#include <math.h>

// a: diagonal field of view (radians.) w,h: image width and height.
// Returns the vertical field of view (radians.)
float vertical_fov_from_diagonal_fov( float a, float w, float h )
{
        const float d = sqrtf( w*w + h*h );             // image diagonal.
        const float m = d / ( 2 * tanf( a / 2 ) );      // distance from eye to image plane.

        const float b = 2 * atanf( h / ( 2*m ) );       // vertical field of view.
        return b;
}
  

Monday, June 14, 2021

Fire in OpenGL

(Just 120 particles, looks better in 60Hz and without GIF artifacts.)

This is a quick 'n dirty approach for getting a fire visualization using OpenGL.

Fire blends somewhat with its background, so we will be using blending. But unlike regular transparency, fire works a little differently: it only adds light to your framebuffer, never removes it. For this, the blendmode GL_ONE, GL_ONE is perfect!
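In GL terms, that is simply (a state-setup fragment; depth writes and the rest of the render state are left out):

glEnable( GL_BLEND );
glBlendFunc( GL_ONE, GL_ONE );   // purely additive: adds light, never removes it.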

Ok, now we know how to draw it, which leads to the question of what to draw. I find that the majority of approaches on the web go for textured billboards. But I think it is better to go with un-textured geometry instead. By adding some shape to the geometry that you draw, you can save on the number of particles you need to draw. I opted for a spiky kind of vortex, as shown below.

From the picture above you can see there is already an inherent counter-clockwise rotation in the model, which we should replicate in the animation. So as we draw each particle, we should keep rotating the billboard we draw for it. Of course, the billboard needs to be oriented to the camera as well! Which means we have to do a little math in the vertex shader.

#version 150

in mediump vec4 position;	// per vert: vertex position.

in mediump vec4 bdpos;		// per inst: bill board pos.
in mediump vec4 rgb;		// per inst: colour.
in mediump float scale;		// per inst: scale.
in mediump float angle;		// per inst: angle.

out lowp vec3 colour;		// to fragment shader: colour.

uniform highp mat4 modelcamviewprojmat;
uniform highp mat4 camtrf;

void main()
{
	vec4 campos = camtrf[3];	// camera position, in world space.
	highp vec4 scaledpos;
	scaledpos.xyz = position.xyz * scale;
	scaledpos.w = 1.0;
	vec3 z = bdpos.xyz - campos.xyz;
	z = normalize(z);
	vec3 camy = camtrf[1].xyz;
	vec3 xx = cross( camy, z );
	vec3 yy = cross( xx, z );
	vec3 x = xx *  cos(angle) + yy * sin(angle);
	vec3 y = xx * -sin(angle) + yy * cos(angle);
	mat4 trf = mat4(1.0);
	trf[0].xyz = x;
	trf[1].xyz = y;
	trf[2].xyz = z;
	trf[3] = bdpos;
	highp vec4 tpos = trf * scaledpos;
	colour = rgb.rgb;
	gl_Position  = modelcamviewprojmat * tpos;
}
  

Let us break down that vertex shader. Like any vertex shader for 3D graphics, it transforms a vertex (position) using a modelviewprojection matrix, no surprises there.

All the other inputs for this shader are per-instance attributes, not per-vertex attributes. So you need to call glVertexAttribDivisor() for them, with value 1.
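For one of them (bdpos) the setup could look like the fragment below; the buffer name, attribute lookup, and tightly packed layout are my assumptions, not the project's actual code:

// With the VAO bound, and bdpos_vbo holding one vec4 per particle:
const GLint loc = glGetAttribLocation( program, "bdpos" );
glBindBuffer( GL_ARRAY_BUFFER, bdpos_vbo );
glEnableVertexAttribArray( loc );
glVertexAttribPointer( loc, 4, GL_FLOAT, GL_FALSE, 0, (void*)0 );
glVertexAttribDivisor( loc, 1 );   // advance once per instance, not once per vertex.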

Every particle billboard has a position (bdpos,) a colour (rgb,) a size (scale,) and a rotation (angle.)

The scale we just apply to the model's vertex position before we transform the vertex to clip-space. And the colour is passed on to the fragment shader, as-is.

Note that we don't just pass the modelviewprojection matrix, but also a second matrix: the camera transformation. This is just the view matrix, before it got inverted from camera to view. We need this, so that we can reconstruct a proper orientation for our billboard. We create a model transform for the billboard with Z pointing to our camera. This is basically (xx,yy,z, bdpos). But we skew the xx,yy axes with the particle's rotation angle, so that we end up with the (x,y,z, bdpos) transform. To skew them, the new x,y are both just a linear combination of the old xx,yy with the cos/sin factors.

Once we apply this per-particle transformation to the (scaled) vertex position, we get our transformed position (tpos.) That can finally be transformed with the modelviewproj matrix to give us our final result.

That covers the GPU-side of things. But we still need to fill in the proper rgb/scale/angle values on the CPU-side.

The scale is easy: because gas disperses as it burns, we just have to make the particle grow in size as it ages. The angle is easy too: I just apply a rotational velocity that scales with the linear velocity of the particle, always in the same direction (counter-clockwise.)

That leaves us with the colours. We use GL_ONE, GL_ONE, and it is best to use low colour values, so that sharp boundaries between particle and no particle are not too obvious. Also, we should slowly fade out our particles. So as the particle ages, I make it go fainter! And for the chromatic transitions, just make the particle go from whitish, to yellowish, to reddish, and you should be good.
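Putting the CPU side together, the per-particle update could be sketched like this; the struct layout, growth/spin rates, and colour constants are all made-up values, not the project's actual code:

#include <math.h>

typedef struct
{
        float pos[3], vel[3];
        float scale, angle;
        float rgb[3];
        float age, lifetime;
} fireparticle_t;

static void update_fire_particle( fireparticle_t* p, float dt )
{
        const float speed = sqrtf( p->vel[0]*p->vel[0] + p->vel[1]*p->vel[1] + p->vel[2]*p->vel[2] );
        p->age   += dt;
        p->scale += 0.4f * dt;                  // gas disperses as it burns: grow with age.
        p->angle += 2.0f * speed * dt;          // rotation scales with linear velocity, always counter-clockwise.
        const float fade = 1.0f - p->age / p->lifetime;
        // Low values (we blend with GL_ONE,GL_ONE) going from whitish to yellowish to reddish:
        p->rgb[0] = 0.30f * fade;
        p->rgb[1] = 0.22f * fade * fade;
        p->rgb[2] = 0.10f * fade * fade * fade;
}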

I am using this approach in my project for Real Time Global Illumination, where I use the particles as light sources.

Monday, May 24, 2021

Things that farming Chia will make you aware of.

When you farm Chia, the proof-of-space blockchain coin, you learn a few new things.

  • Which PCIe generation your motherboard offers
  • NVMe performance
  • NVMe endurance
  • The fstrim command
  • XFS options like crc=0
  • What the term JBOD means
  • The performance of NFS over your LAN
  • The thermal characteristics of your drives
  • The difference between peak write bandwidth and sustained write bandwidth
  • The Crucial P1 and PNY XLR8 have incredibly slow write bandwidth. Avoid!

So far, this has been a purely educational exercise, with zero financial pay-off: my 6 drives can't compete with the netspace. But streamlining your computer for plotting was a fun exercise in optimization.

If nothing else, I at least got a rather popular Open Source Tool out of it, in my portfolio: ChiaHarvestGraph and its sister tools ChiaPlotGraph and ChiaHeightGraph.

Tuesday, May 4, 2021

Directional Light

I have implemented directional light for Practica Boxel. It simulates a harsh distant sun.

Because there is only one bounce, there are fully black shadows with 0% light in the scene. I'm not sure I like it.

Its main advantage is that it is easier to render in chunks, as there is always just one lightsource, which is always the same. There is no need to cull lightsources, which simplifies things a lot. It also avoids the jarring change in lighting that occurs when local lightsources pop into view.

I've decided to go back to my original experiments where I have a night-scene with just 1 torch light.

And with one hand-held light source, I find that a first-person camera is a no-go: if the light and camera are close to each other, the lighting becomes uninteresting. No shadows, not much indirect light, so the whole Global Illumination is wasted on it.

Which leaves me with...

A night-time world, one hand-held lightsource (torch) and a third-person camera.

Think: A 3D Ant Attack clone, set at night, and with Global Illumination.

So, when doing a local point-light, I should modify how I shoot my photons again! I started with a cosine-weighted hemisphere direction. Then I added directional light. And for a torch, I would need uniform omni-directional light.

So, blue-noise sampling of a sphere it is, then. Alan Wolfe came to the rescue with his suggestion of using Mitchell's Best Candidate algorithm to create progressive blue noise.

As his approach was O(N³), I had to weaken it a bit by reducing the number of candidates. I implemented it in C, using 16 threads and also 16-wide SIMD, conceptually handling 256 candidates simultaneously. By leaving it running overnight, with my CPU at full blast, I was able to create a data-set of 2M samples that looks reasonably like blue noise.
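Stripped of the threading and SIMD, the core of Mitchell's Best Candidate on the sphere is small. A plain scalar sketch (the candidate count, helper names, and rejection-sampling direction generator are mine, not the actual implementation):

#include <math.h>
#include <stdlib.h>

typedef struct { float x, y, z; } v3;

// Uniform random direction, via rejection sampling of the unit ball.
static v3 random_on_sphere( void )
{
        for (;;)
        {
                const float x = 2.0f * rand() / (float)RAND_MAX - 1.0f;
                const float y = 2.0f * rand() / (float)RAND_MAX - 1.0f;
                const float z = 2.0f * rand() / (float)RAND_MAX - 1.0f;
                const float l2 = x*x + y*y + z*z;
                if ( l2 > 0.0001f && l2 <= 1.0f )
                {
                        const float l = sqrtf( l2 );
                        return (v3){ x/l, y/l, z/l };
                }
        }
}

// Appends sample number cnt: of numcand random candidates, keep the one
// that is farthest away from its nearest already-accepted neighbour.
static void add_best_candidate( v3* samples, int cnt, int numcand )
{
        v3 best = random_on_sphere();
        float bestdist = -1.0f;
        for ( int c = 0; c < numcand; ++c )
        {
                const v3 cand = random_on_sphere();
                float nearest = 4.0f;   // squared distances on the unit sphere never exceed 4.
                for ( int i = 0; i < cnt; ++i )
                {
                        const float dx = cand.x - samples[i].x;
                        const float dy = cand.y - samples[i].y;
                        const float dz = cand.z - samples[i].z;
                        const float d2 = dx*dx + dy*dy + dz*dz;
                        nearest = d2 < nearest ? d2 : nearest;
                }
                if ( nearest > bestdist )
                {
                        bestdist = nearest;
                        best = cand;
                }
        }
        samples[cnt] = best;
}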

2 million sample points, viewed from the inside:

So, night-time-Ant-Attack, here I come...