Fournex
Open Source
Stable C ABI
Python via cffi (optional)

Fast, Composable RL Environments in C

Build and benchmark agents against GridWorld, CartPole, and MountainCar with a stable, vendor-neutral C ABI. Opt into Python bindings via cffi — no CPython extensions required.

Modular Envs

Swap dynamics & reward models with a stable ABI.

Bare‑metal Speed

Tight C loops. Zero Python in hot paths.

Bindings by Choice

Use C directly or import via cffi in Python.

Cross‑tooling

Works with C, C++, Zig, Rust FFI, and more.

Install

MIT Licensed

C (source)

git clone https://github.com/your-org/rl-c-envs && cd rl-c-envs && make

Python (cffi)

pip install cffi && pip install rl-c-envs

// env.h — stable C ABI (excerpt)
// Initialize an environment by ID (gridworld, cartpole, mountaincar)
typedef struct rl_env rl_env;

// Create / destroy
tl_rl_api rl_env* rl_create(const char* env_id);
tl_rl_api void     rl_destroy(rl_env* env);

// Reset / step
// action is discrete (int); returns reward, done, and writes next state into user buffer
// returns 0 on success

tl_rl_api int rl_reset(rl_env* env, float* state_out, int* state_len);
tl_rl_api int rl_step(rl_env* env, int action, float* state_out, int* state_len, float* reward_out, int* done_out);

Environments

3

gridworld • cartpole • mountaincar

ABI Stability

v1

semantic versioning, backwards compatible

Bindings

cffi

optional Python integration

TinyGym‑C
v0.1
C99 • CMake • cffi

A tiny, lightweight alternative to Gym — written in C

Minimal RL environments with a stable C API, example agents, optional Python bindings, and a small Raylib-based viewer. Portable, reproducible, and easy to embed.

Core idea

Provide a minimal set of RL environments exposed via a stable C API (ABI-safe), easy to call from any language.

  • Clean, vendor‑neutral C interface
  • Small codebase that's easy to read
  • Embeddable in games, tools, or experiments

Why

Most RL stacks are heavy and Python-centric. TinyGym‑C is small, fast, and portable.

  • Minimal deps, fast builds
  • Great for teaching and demos
  • Works well in constrained environments

Features at v0.1

Focused feature set to start simple and reproducible.

  • Envs: Gridworld (discrete), CartPole (classic control)
  • Agents: Q‑learning, REINFORCE (policy gradients)
  • Viewer: Raylib real‑time playback
  • Python via cffi (no CPython extensions)
  • Deterministic seeding + unit tests

Developer experience

Plain C99 with CMake, tested and formatted for CI and releases.

  • C99 + CMake build system
  • Cross‑platform: Linux, macOS (Windows planned)
  • Tests, formatting, CI, and docs

Viewer + bindings

Watch agents play and script experiments from Python if you want to.

  • Raylib viewer for real‑time visualization
  • cffi bindings for Python and Jupyter
  • Reproducible runs with seeding

In short: TinyGym‑C is a minimal C library for reinforcement learning environments and example agents, with a Python bridge and a simple viewer. It’s a stripped‑down, embeddable version of Gym — small enough to understand fully, but practical for learning, experimenting, or extending.