The Developer’s Cry

Yet another blog by a hobbyist programmer

Rock steady frame rates with raylib

Many years ago I wrote right here at devcry about how to get rock-solid frame rates with SDL. This time, we are going to use raylib. Why raylib, you may ask. Well, I felt like a change of graphics library, and raylib can use some good press. This post is two-fold: the first part sets up a game window using raylib, the second part explains a different (better) way of achieving a stable frame rate.

raylib and letterboxing

SDL is highly regarded as the gold standard, go-to library when it comes to gamedev. I don’t doubt that it is. If you read this blog then you know that I have been with SDL for ages. However, SDL is not sacred to me, and the mess that is Rust-SDL drove me to raylib. (Note that we are coding this little project in C, not Rust, but still.) Raylib is easy and straightforward.

Let’s create a window and set up the screen so that it has a custom render resolution that automatically scales and does letterboxing. As an example, we will make a 2D arcade game with a pixelated look, using a super blocky resolution of 320×240. The window itself is full HD 1920×1080, although it may also be set to fullscreen (which can easily be 4K or more in this day and age). We do letterboxing; there will be black borders along the sides.

First create a window:

#include <raylib.h>

int window_w = 1920;
int window_h = 1080;

/* ... */

    InitWindow(window_w, window_h, "my game");

Wut, no error checking? Indeed, if there is something so wrong with the system that it can’t even create a window, then raylib will simply produce an error message and exit.

We will be rendering to an offscreen buffer texture with the size of our game screen resolution:

const int screen_w = 320;
const int screen_h = 240;
RenderTexture2D offscreen;

/* ... */

    offscreen = LoadRenderTexture(screen_w, screen_h);

When rendering, the offscreen texture gets stretched over the window. We do letterboxing so that the image appears properly scaled, but not distorted. The letterbox rectangle is recalculated every time we resize the window (or when toggling fullscreen).

Rectangle letterbox = {0, 0, 1920, 1080};

void resize_window(int w, int h) {
    SetWindowSize(w, h);
    window_w = w;
    window_h = h;

    /* scale by the smaller ratio so the image fits without distortion;
       fminf() requires #include <math.h> */
    float ratio_x = window_w / (float)screen_w;
    float ratio_y = window_h / (float)screen_h;
    float ratio = fminf(ratio_x, ratio_y);

    /* center the scaled image; the leftover space becomes the black borders */
    float offset_x = (window_w - ratio * screen_w) * 0.5f;
    float offset_y = (window_h - ratio * screen_h) * 0.5f;
    letterbox = (Rectangle){offset_x, offset_y,
        ratio * screen_w, ratio * screen_h};
}

The renderer in raylib is (surprise, surprise) powered by OpenGL. You can moan about OpenGL being deprecated, but I stopped caring; I just want some graphics to be displayed, whatever gets the job done. An OpenGL-like renderer means we are going to “begin” and “end” drawing. Also mind that we first render to our offscreen texture, and then draw that texture onto the display. (I would say “viewport”, but raylib does not concern itself with a viewport the way OpenGL does.)

void render(void) {
    Camera2D cam = {
        .target = (Vector2){0, 0},
        .offset = (Vector2){0, 0},
        .rotation = 0.0f,
        .zoom = 1.0f
    };

    BeginTextureMode(offscreen);
    ClearBackground(BLACK);
    BeginMode2D(cam);

    /* ... insert other draw calls here! ... */

    EndMode2D();
    EndTextureMode();
}


void render_present(void) {
    /* render offscreen to display */
    BeginDrawing();
    ClearBackground(BLACK);

    /* note the negative source height: render textures are stored
       upside-down by OpenGL, so we flip vertically while drawing */
    const Rectangle render_src = { 0, 0, screen_w, -screen_h };
    const Vector2 render_origin = {0, 0};
    DrawTexturePro(offscreen.texture, render_src, letterbox,
        render_origin, 0.0f, WHITE);

    EndDrawing();
}

Alright, we have a picture! All is black, and no frame rates yet.

Frame rate versus tick rate

We are going for 60 fps, which is normal nowadays, and should be no problem whatsoever unless we’re developing Crysis 6 for a Raspberry Pi. The main game loop will look something like:


SetTargetFPS(60);    /* let raylib pace the frame rate */

while (!WindowShouldClose()) {
    update();
    render();
    render_present();
}

The update() function runs the game logic and physics. In ancient times, sprites would simply move pixel by pixel. We don’t do that anymore. Game objects are controlled by a physics simulation, which can be as simple as

dx = v.x * speed * dt
dy = v.y * speed * dt

meaning the x,y position of a game object changes according to its direction vector and speed.

Then, in old times, we would render the scene, sleep for an amount of time, and do all sorts of tricks to synchronize with the vertical blanking interval of the CRT (cathode-ray tube). We don’t do that anymore; partly because nobody uses a true CRT anymore, but mostly because you can’t sleep accurately in a multitasking operating system. Instead, we would track the time passed since the last frame and use that as the delta time for our physics simulation, which is the method used in my old SDL post. It usually looks fine, but it’s not great: the timing is variable, and the collision detection ends up based on this variable timing too.

A trivial collision detection routine only checks for intersecting circles or rectangles. This is a problem for fast-moving objects; they may tunnel through walls when the time step is too long. This can be fixed with a lot of difficult math (which nobody likes), or we can take the easy route and simply make our time step smaller, running the discrete simulation multiple times per rendered frame. This uses more CPU, but modern computers are fast, and we have plenty of time between frames anyway.

The solution is to decouple the physics from the frame rate; let the game run at a much higher internal tick rate than the 60 Hz frame rate. How high this tick rate should be is up to you, really. My simple arcade game runs at 200 Hz.

const double timestep = 1.0 / 200.0;

void update(void) {
    static bool time_once = false;    /* bool requires <stdbool.h> */
    static double t0 = 0.0;
    static double simulation_time = 0.0;

    if (!time_once) {
        t0 = GetTime();
        time_once = true;
    }

    /* ... insert code to handle input here ... */

    double t1 = GetTime();
    double elapsed = t1 - t0;
    if (elapsed > 0.25) {
        /* too much; ignore */
        elapsed = 0.25;
    }
    t0 = t1;
    simulation_time += elapsed;
    while (simulation_time >= timestep) {
        simulation_time -= timestep;

        /* ... run one physics tick here ... */
    }
}


The tick rate of 200 Hz does not align well with the 60 Hz frame rate. That’s fine; it doesn’t need to be a clean multiple. However, note that the time spent between frames is still variable (the operating system may be doing all kinds of things), but the physics run at a fixed rate. This means that the physics is not (and can not be) perfectly synchronized with the frame rate—consequently, it jitters! We don’t like jitter.

In order to get rid of the jitter we are going to render a fake frame that is an interpolation of the simulation. At the end of the update() loop there is a fraction of simulation time left over, and that leftover represents the jitter. Rather than simulating with a small delta t (remember, we use a constant tick rate), we interpolate between where a game object was at the previous tick and where it is now, using the leftover fraction as the blend factor.

void move_entity(Entity* e) {
    // save old position as "snapshot"
    e->snapshot = e->pos;

    // simple physics (discrete integration)
    e->pos.x += e->direction.x * e->speed * timestep;
    e->pos.y += e->direction.y * e->speed * timestep;
}

float lerp(float x, float y, float a) {
    // linear interpolation
    return (1.0f - a) * x + a * y;
}

void interpolate_entity(Entity* e, float alpha) {
    // update snapshot to be an interpolated position
    e->snapshot.x = lerp(e->snapshot.x, e->pos.x, alpha);
    e->snapshot.y = lerp(e->snapshot.y, e->pos.y, alpha);
}

void update(void) {
    /* ... run physics simulation as above ... */

    // finally interpolate using remaining simulation time
    float alpha = simulation_time / timestep;
    /* ... call interpolate_entity() on every entity ... */
}

void render(void) {
    /* ... render game objects at their
       interpolated snapshot position ... */
}

Realize that it is a visual trick, an optical illusion; what you see is not exactly what was physically simulated. It’s weird when you think about it, but the result looks buttery smooth.

The original idea of interpolating game state came from Glenn Fiedler’s post Fix Your Timestep!