0..1

Self-playing simulation game using the language and mechanics of image editing

Still image

Concept

0..1 is a self-playing simulation game that uses the language and mechanics of image editing. The canvas becomes the playing field on which different editing tools take turns moving around as autonomous agents. With the explicit absence of a win state, the game becomes an open-ended system that perpetually interacts with itself by reacting to the ever-changing landscape it creates through the agents’ actions.

The players

Each player represents one of a set of image editing operations. I have implemented a selection of basic tools as you would find them in standard image editing software, but also added a few custom ones.

Tools overview (visual)
Tools overview (semantic)
  1. Brush: solid color brush strokes
  2. Pixel randomizer: randomly swaps pixels
  3. Sampler: samples from a different image or canvas
  4. Eraser: restores the original state of the canvas
  5. Smudge: displace / blur
  6. Sharpen: unsharp masking
  7. Selection: select a region and protect it against changes
  8. Inpainting: fills a selected region with interpolated content
  9. Channel shifter: acts like a prism by shifting the R, G, and B channel in different directions

The tools represent different character classes with different character traits. They each have different ideas about how the world should look. The paint brush and the sampler are the ones that can bring in new information, whereas the eraser, on the other end of the spectrum, wants to ‘undo’ any change by reverting the canvas to its original state. It wants things to go back to ‘the way things used to be’. Similarly, the inpainting tool can delete parts of the image and fill them with information interpolated from the surrounding pixels. The selection tool, located in the center, can protect a region from being changed. It wants things to simply ‘stay as they are’. The smudge and the sharpen tool are antagonists of sorts: one blurs hard edges and shifts things slightly, whereas the other wants things to be clearly defined, ideally black and white. The randomizer causes a bit of chaos by scrambling whatever pixels it can find. The RGB shifter is special in that it can seemingly create color where there was no color before. The tools were deliberately selected to span a wide gamut. The space they are located in is designed to cause friction, with the potential for interesting things to happen.

The interesting part is when the agents / tools interact with one another. This can produce secondary effects in the form of new visual artifacts. For instance, the RGB shifter tool’s effect on its own may not be very noticeable, but the inpainting tool can amplify its colors to a great degree. Similarly, when the sharpen tool moves over parts of the image that have been smudged or inpainted before, we can often observe a distinct black and white line pattern appearing.

Example of secondary visual effects, caused by interaction of different tools

Game mechanics

The rules of the game are fairly simple. The game is turn-based and played in rounds. First, all agents are placed onto the canvas and initialized: they are assigned a random position, a tool, a brush, and an objective. Once that is done, the first round begins. Before the start of every round, the players “throw a dice” to determine the order in which they are allowed to make their moves. Once every player has had their turn, the system checks whether at least one player was able to make a move. If yes, the game continues with the next round. If the game got stuck (which can happen sometimes), a new game is initiated.
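In code, the round loop described above could look roughly like this. It is a minimal sketch assuming hypothetical agent objects with initialize(), can_move(), and make_move() methods; none of these names come from the project itself.

```python
import random

def run_game(agents, canvas):
    # Initialization: every agent gets a random position, a tool, a brush, and an objective.
    for agent in agents:
        agent.initialize(canvas)

    while True:
        # "Throw a dice": randomize the turn order before every round.
        random.shuffle(agents)

        anyone_moved = False
        for agent in agents:
            if agent.can_move(canvas):
                agent.make_move(canvas)   # apply the tool along the computed path
                anyone_moved = True

        # If no player was able to move, the game is stuck and a new game is initiated.
        if not anyone_moved:
            return
```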

Game mechanics flowchart

(“Mutable mode” is one of the two modes and is discussed later in this document.)

Perception: Heatmaps

Players in the simulation perceive their environment in different ways. They each are sensitive to certain aspects of the canvas, which in turn defines their objective. Some are attracted to blurry parts of the canvas, others to sharpness and high contrast. Some agents like color (as opposed to shades of grey and their lack of saturation), others are interested in those parts of the image that have not been touched yet. Some react to structural features such as lines, whereas other players might be sensitive to different levels of brightness. Based on this, each player identifies their personal region(s) of interest which they navigate towards (and over) when it is their turn to move.

What the agents ‘see’ (depending on what image feature they are sensitive to)

The heatmaps are gradients with values ranging from 0 (black) to 1 (white), with 1 being the highest intensity and 0 the lowest. Agents can be attracted by high intensities (for instance color), or by the inverse (the absence of color). This makes for 12 possible objectives in total.
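The mapping from heatmap to objective can be sketched roughly as follows, using a simple saturation measure as a stand-in for the project’s actual feature detectors (which are not shown here):

```python
import numpy as np

def saturation_heatmap(rgb):
    """Example detector: per-pixel saturation, normalized to values in [0, 1]."""
    rgb = rgb.astype(np.float32) / 255.0
    heat = rgb.max(axis=-1) - rgb.min(axis=-1)        # rough saturation measure
    lo, hi = heat.min(), heat.max()
    return (heat - lo) / (hi - lo + 1e-8)

def objective_heatmap(rgb, detector, attract_high=True):
    """An agent is attracted either to high intensities or to their inverse,
    which is how six heatmap types yield twelve possible objectives."""
    heat = detector(rgb)
    return heat if attract_high else 1.0 - heat
```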

Perception: Visual range

The radius determines how far the player can see

The agents can only see a certain distance, based on a radius around their current position on the map. Some are able to perceive the entire environment, others are more “short-sighted” (to various degrees) and can only see what’s in the immediate vicinity.

Example of a heatmap with increasingly smaller visual range

The visual range directly influences an agent’s movement. If its view is severely limited, it can only make relatively short moves. An agent that can see the entire canvas is able to make much longer moves and bigger gestures.
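One way to model this is to simply mask out everything outside a circle around the agent’s position before it looks for regions of interest. This is an illustrative sketch, not necessarily how the project implements it:

```python
import numpy as np

def limit_visual_range(heatmap, position, radius):
    """Zero out everything the agent cannot see: a circular mask around its
    current (x, y) position. A radius covering the whole canvas corresponds
    to 'perfect vision'."""
    h, w = heatmap.shape
    ys, xs = np.ogrid[:h, :w]
    x, y = position
    mask = (xs - x) ** 2 + (ys - y) ** 2 <= radius ** 2
    return np.where(mask, heatmap, 0.0)
```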

The difference in behavior between a short-sighted agent (left) and one with perfect vision (right)

Pathfinding

Pathfinding is commonly used in video games with non-human players (bots). This project, too, employs pathfinding to determine the optimal path for an agent to move along. Before each agent’s turn to move, a request is made to the server, including the current state of the canvas and information about the currently active agent. The server then calculates the path and returns it to the client.
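The request/response cycle could look something like the sketch below. The endpoint name, payload keys, and the use of the requests library are assumptions made purely for illustration; they are not the project’s actual API.

```python
import base64
import requests

def request_path(canvas_png_bytes, agent_state, server_url="http://localhost:5000/path"):
    """Send the current canvas and the active agent's state to the server,
    and receive the path the agent should move along."""
    payload = {
        "canvas": base64.b64encode(canvas_png_bytes).decode("ascii"),
        "agent": agent_state,   # e.g. {"tool": "smudge", "objective": "blur", "position": [x, y], "radius": r}
    }
    response = requests.post(server_url, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()["path"]   # list of [x, y] points to move along
```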

System architecture schematic

The main steps of the algorithm running on the server are illustrated below:

Pathfinding algorithm steps illustrated

With the current canvas image as input, the detection algorithm for the respective heatmap type runs. The result is then blurred and normalized to values between 0 and 1. In the next step a threshold is applied, which turns the heatmap into a binary image. The white blobs are ‘regions of interest’ for the agent. In order to determine how to best move over/along the shape of the blobs, the skeleton of each shape is calculated. Think of it as the ‘center line’ of the blob. These lines are then vectorized, simplified, and filtered.
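The blur, normalize, threshold, and skeletonize steps could be written roughly like this, using scikit-image as an illustrative choice of library rather than the project’s confirmed implementation:

```python
import numpy as np
from skimage.filters import gaussian
from skimage.morphology import skeletonize

def regions_of_interest(raw_heatmap, sigma=5.0, threshold=0.5):
    """Blur and normalize the detected heatmap, binarize it, and extract the
    skeleton ('center lines') of the resulting blobs."""
    smooth = gaussian(raw_heatmap, sigma=sigma)
    lo, hi = smooth.min(), smooth.max()
    norm = (smooth - lo) / (hi - lo + 1e-8)   # values in [0, 1]
    blobs = norm > threshold                  # binary image: regions of interest
    skeleton = skeletonize(blobs)             # center lines of the blobs
    return norm, blobs, skeleton
```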

The last three steps in the diagram are for finding the way from the agent’s current position to the closest point on the longest of the found skeletons. This process is illustrated below.

Visualization of the paths generated for different player positions

As you can see, for some starting positions the closest point will coincide with an end point of the skeleton. In the other cases, the agent will move towards one of the points on the skeleton and from there move further in the direction of the longer remaining part of the skeleton path. Some agents will notice that on the way to their blob of interest they will encounter other blobs nearby. In such a case they are permitted to take a slight detour and visit the other blob first, in order to ‘do as much damage as possible’ in one move. For determining the path from the agent’s current position to its destination blob, the A* algorithm is employed, using the underlying heatmap as weights.
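A* over the canvas grid with the heatmap as weights could be sketched as below. The exact cost function the project uses is not documented here, so treat the weighting (cheaper to travel through high-intensity regions) as an assumption.

```python
import heapq
import itertools

def astar(heatmap, start, goal):
    """Find a path from start to goal over a 2D heatmap with values in [0, 1]."""
    h, w = heatmap.shape
    cost = 2.0 - heatmap                       # high heat -> cost near 1, low heat -> cost near 2

    def heuristic(p):                          # Manhattan distance, admissible since every step costs >= 1
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    tie = itertools.count()                    # tie-breaker so the heap never compares nodes or parents directly
    open_set = [(heuristic(start), 0.0, next(tie), start, None)]
    parents, best_g = {}, {start: 0.0}

    while open_set:
        _, g, _, current, parent = heapq.heappop(open_set)
        if current in parents:                 # already expanded with an equal or better cost
            continue
        parents[current] = parent
        if current == goal:                    # reconstruct the path back to the start
            path = []
            while current is not None:
                path.append(current)
                current = parents[current]
            return path[::-1]
        y, x = current
        for neighbor in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            ny, nx = neighbor
            if 0 <= ny < h and 0 <= nx < w:
                new_g = g + cost[ny, nx]
                if new_g < best_g.get(neighbor, float("inf")):
                    best_g[neighbor] = new_g
                    heapq.heappush(open_set, (new_g + heuristic(neighbor), new_g, next(tie), neighbor, current))
    return None                                # no path found
```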

In order to make the motion seem more organic, the path that gets sent back from the server is smoothed by the client before the agent’s move along it is executed.
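Chaikin’s corner-cutting algorithm is one common way to smooth such a polyline; whether the project smooths the path in exactly this way is not stated, so the snippet below is illustrative.

```python
def chaikin_smooth(path, iterations=2):
    """Smooth a polyline of (x, y) points by repeatedly cutting its corners,
    keeping the start and end points fixed."""
    for _ in range(iterations):
        smoothed = [path[0]]
        for (x0, y0), (x1, y1) in zip(path, path[1:]):
            smoothed.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            smoothed.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        smoothed.append(path[-1])
        path = smoothed
    return path
```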

Mode 1: Static

The system has two different modes it can run in: static and mutable. In static mode, the canvas is initialized with noise (random greyscale values) as a seed, and the configurations of the agents cannot change once they have been set at the beginning of the game.
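Seeding the canvas with greyscale noise amounts to something like the following; the canvas size here is an arbitrary assumption.

```python
import numpy as np

h, w = 1024, 1024
seed = np.random.randint(0, 256, size=(h, w), dtype=np.uint8)   # random grey values
canvas = np.stack([seed] * 3, axis=-1)                          # replicate into an RGB canvas
```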

Still of static mode

Mode 2: Mutable

In mutable mode, the agents’ configuration is not fixed. All values that make up the configuration (brush size, brush shape, color, hardness, agent objective, etc.) are encoded in the canvas itself. The canvas is the data, and the data is the canvas.

Illustration of how player configuration data is encoded in the canvas

Each block encodes one value of an agent’s configuration

That means that by interacting with and changing the landscape, the agents also change themselves. They are simultaneously reading and writing the environment.
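The read/write cycle can be sketched as follows: each configuration value maps to a block of pixels, so painting over those pixels changes the agent. The block size, layout, and the 0..1 value range are assumptions for illustration, not the project’s actual encoding.

```python
BLOCK = 8   # assume each value occupies an 8x8 pixel block in the top row of the canvas

def write_value(canvas, block_index, value):
    """Encode a value in [0, 1] as the brightness of one block."""
    x = block_index * BLOCK
    canvas[0:BLOCK, x:x + BLOCK] = int(value * 255)

def read_value(canvas, block_index):
    """Decode a value by averaging the block's brightness."""
    x = block_index * BLOCK
    return float(canvas[0:BLOCK, x:x + BLOCK].mean()) / 255.0
```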

Different stages of the canvas image (top row) and the corresponding ‘data view’ (bottom row)

Game mechanics flowchart, with mutable mode highlighted

After every player’s move, the system reads back the data encoded in the canvas and updates the agents’ configurations accordingly.
