🏐 Ultimate Volleyball: A multi-agent reinforcement learning environment built using Unity ML-Agents
Train reinforcement learning agents to play Volleyball

Inspired by Slime Volleyball Gym, I built a 3D Volleyball environment for training reinforcement learning agents using Unity’s ML-Agents toolkit. The full project is open-source and available at: 🏐 Ultimate Volleyball.
In this article, I share an overview of the implementation details, challenges, and lessons learned from designing the environment and training an agent in it. For background on ML-Agents, please check out my Introduction to ML-Agents article.
Versions used: Release 18 (June 9, 2021)
Python package: 0.27.0
Unity package: 2.1.0
🥅 Setting up the court
Having no previous experience with game design or 3D modeling, I found Unity’s extensive library of free assets and sample projects extremely useful.
Here’s what I used:
- Agent Cube prefabs from the ML-Agents sample projects
- Volleyball prefab & sand material from the free Beach Essentials Asset Pack
- Net material from the Free Grids & Nets Materials Pack
The rest of the court (net posts, walls, goals, and floor) consists of resized and rotated cube objects pieced together.
The floor is actually made up of two layers:
- Thin purple- and blue-side goal layers on top, each with a ‘trigger’ collider
- A walkable floor below
The goals detect when the ball hits the floor, while the walkable floor provides collision physics for the ball.
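The trigger-collider setup can be sketched roughly as follows. This is my own illustration of the pattern rather than the project’s actual code; the class name, the `ResolvePoint` method, and the `"ball"` tag are all assumptions:

```csharp
using UnityEngine;

// Hypothetical sketch: attached to each thin goal layer.
// With "Is Trigger" enabled on the goal's collider, the ball
// passes through the goal layer and collides with the solid
// floor beneath it, while OnTriggerEnter still fires so the
// environment knows which side the ball landed on.
public class GoalTrigger : MonoBehaviour
{
    // Hypothetical environment controller that assigns rewards
    public VolleyballEnvController envController;

    void OnTriggerEnter(Collider other)
    {
        if (other.gameObject.CompareTag("ball"))
        {
            // Report which goal was hit (e.g. via this object's tag)
            envController.ResolvePoint(gameObject.tag);
        }
    }
}
```

Separating the trigger layer from the walkable floor is what lets the same event both end the rally (for rewards) and keep realistic bounce physics.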
Some other implementation details to note:
- The agents look like cubes, but have sphere colliders, which help them control the ball’s trajectory.
- I also added an invisible boundary around the court. I found that during training, agents may shy away from learning to hit the ball at all if you penalize them for hitting the ball out of…