Sixth 3D - Realtime 3D engine

1. Introduction

[Image: example.png — a scene rendered with Sixth 3D]

Sixth 3D is a realtime 3D rendering engine written in pure Java. It runs entirely on the CPU — no GPU required, no OpenGL, no Vulkan, no native libraries. Just Java.

The motivation is simple: GPU-based 3D is a minefield of accidental complexity. Drivers are buggy or missing entirely. Features you need aren't supported on your target hardware. You run out of GPU RAM. You wrestle with platform-specific interop layers, shader compilation quirks, and dependency hell. Every GPU API comes with its own ecosystem of pain — version mismatches, incomplete implementations, vendor-specific workarounds. I want a library that "just works".

Sixth 3D takes a different path. By rendering everything in software on the CPU, the entire GPU problem space simply disappears. You add a Maven dependency, write some Java, and you have a 3D scene. It runs wherever Java runs.

This approach is quite practical for many use-cases. Modern systems ship with many CPU cores, and those with unified memory architectures offer high bandwidth between CPU and RAM. Software rendering that once seemed wasteful is now a reasonable choice where you need good-enough performance without the overhead of a full GPU pipeline. Java's JIT compiler helps too, optimizing hot rendering paths at runtime.

Beyond convenience, CPU rendering gives you complete control. You own every pixel. You can freely experiment with custom rendering algorithms, optimization strategies, and visual effects without being constrained by what a GPU API exposes. Instead of brute-forcing everything through a fixed GPU pipeline, you can implement clever, application-specific optimizations.

Sixth 3D is part of the larger Sixth project, with the long-term goal of providing a platform for 3D user interfaces and interactive data visualization. It can also be used as a standalone 3D engine in any Java project. See the demos for examples of what it can do today.

2. Minimal example

This brief tutorial guides you through creating your first 3D scene with the Sixth 3D engine.

Prerequisites:

  • Java 21 or later installed
  • Maven 3.x
  • Basic Java knowledge

2.1. Add Dependency to Your Project

Add Sixth 3D to your pom.xml:

<dependencies>
    <dependency>
        <groupId>eu.svjatoslav</groupId>
        <artifactId>sixth-3d</artifactId>
        <version>1.3</version>
    </dependency>
</dependencies>

<repositories>
    <repository>
        <id>svjatoslav.eu</id>
        <name>Svjatoslav repository</name>
        <url>https://www3.svjatoslav.eu/maven/</url>
    </repository>
</repositories>

2.2. Create Your First 3D Scene

Here is a minimal working example:

import eu.svjatoslav.sixth.e3d.geometry.Point3D;
import eu.svjatoslav.sixth.e3d.gui.ViewFrame;
import eu.svjatoslav.sixth.e3d.math.Transform;
import eu.svjatoslav.sixth.e3d.renderer.raster.Color;
import eu.svjatoslav.sixth.e3d.renderer.raster.ShapeCollection;
import eu.svjatoslav.sixth.e3d.renderer.raster.shapes.composite.solid.SolidPolygonRectangularBox;

public class MyFirstScene {
    public static void main(String[] args) {
        // Create the application window
        ViewFrame viewFrame = new ViewFrame();

        // Get the collection where you add 3D shapes
        ShapeCollection shapes = viewFrame.getViewPanel().getRootShapeCollection();

        // Add a red box at position (0, 0, 0)
        Transform boxTransform = new Transform(new Point3D(0, 0, 0), 0, 0);
        SolidPolygonRectangularBox box = new SolidPolygonRectangularBox(
                new Point3D(-50, -50, -50),
                new Point3D(50, 50, 50),
                Color.RED
        );
        box.setTransform(boxTransform);
        shapes.addShape(box);

        // Position your camera
        viewFrame.getViewPanel().getCamera().setLocation(new Point3D(0, -100, -300));

        // Update the screen
        viewFrame.getViewPanel().repaintDuringNextViewUpdate();
    }
}

Compile and run the MyFirstScene class. A new window should open, displaying a 3D scene with a red box.

Navigating the scene:

Input                 Action
Arrow Up / W          Move forward
Arrow Down / S        Move backward
Arrow Left            Move left (strafe)
Arrow Right           Move right (strafe)
Mouse drag            Look around (rotate camera)
Mouse scroll wheel    Move up / down

Movement uses physics-based acceleration for smooth, natural motion. The faster you're moving, the more acceleration builds up, creating an intuitive flying experience.

3. In-depth understanding

3.1. Vertex

A vertex is a single point in 3D space, defined by three coordinates: x, y, and z. Every 3D object is ultimately built from vertices. A vertex can also carry additional data beyond position.

  • Position: (x, y, z)
  • Can also store: color, texture UV, normal vector
  • A triangle = 3 vertices, a cube = 8 vertices
  • In the Sixth 3D engine, a vertex position maps to the Point3D class.
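The concept can be sketched in plain Java. This is illustrative only — the names below are not the engine's API; Sixth 3D itself represents positions with its Point3D class:

```java
// Illustrative only: a plain-Java sketch of what a vertex can carry.
public class VertexDemo {
    // Position plus optional per-vertex data (here: texture UV coordinates).
    record Vertex(double x, double y, double z, double u, double v) {}

    // A triangle is simply three vertices.
    static Vertex[] unitTriangle() {
        return new Vertex[]{
                new Vertex(0, 0, 0, 0.0, 0.0),
                new Vertex(1, 0, 0, 1.0, 0.0),
                new Vertex(0, 1, 0, 0.0, 1.0)
        };
    }

    public static void main(String[] args) {
        System.out.println(unitTriangle().length); // 3
    }
}
```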

3.2. Edge

An edge is a straight line segment connecting two vertices. Edges define the wireframe skeleton of a 3D model. In rendering, edges themselves are rarely drawn — they exist implicitly as boundaries of faces.

  • Edge = line from V₁ to V₂
  • A triangle has 3 edges
  • A cube has 12 edges
  • Wireframe mode renders edges visibly
  • In the Sixth 3D engine, an edge can be represented by the Line class.

3.3. Face (Triangle)

A face is a flat surface enclosed by edges. In most 3D engines, the fundamental face is a triangle — defined by exactly 3 vertices. Triangles are preferred because they are always planar (flat) and trivially simple to rasterize.

  • Triangle = 3 vertices + 3 edges
  • Always guaranteed to be coplanar
  • Quads (4 vertices) = 2 triangles
  • Complex shapes = many triangles (a "mesh")
  • Face maps to SolidPolygon or TexturedPolygon in Sixth 3D.

3.4. Coordinate System (X, Y, Z)

Every point in 3D space is located using three perpendicular axes originating from the origin (0, 0, 0). The X axis runs left–right, the Y axis runs up–down, and the Z axis represents depth.

  • Right-handed vs left-handed systems differ in which direction +Z points
  • Right-handed: +Z towards viewer (OpenGL)
  • Left-handed: +Z into screen (DirectX)
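Since the two conventions differ only in the sign of Z, converting a point between them is a single negation. A minimal plain-Java sketch (not engine API):

```java
public class Handedness {
    // Converting a point between right- and left-handed conventions
    // only requires flipping the Z axis.
    static double[] toLeftHanded(double x, double y, double z) {
        return new double[]{x, y, -z};
    }

    public static void main(String[] args) {
        double[] p = toLeftHanded(3, 4, 5);
        System.out.println(p[2]); // -5.0
    }
}
```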

3.5. Normal Vector

A normal is a vector perpendicular to a surface. It tells the renderer which direction a face is pointing. Normals are critical for lighting — the angle between the light direction and the normal determines how bright a surface appears.

  • Face normal: one normal per triangle
  • Vertex normal: one normal per vertex (averaged from adjacent faces for smooth shading)
  • dot(L, N) → surface brightness
  • Flat shading → face normals
  • Gouraud/Phong → vertex normals + interpolation
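The relationship between normals and lighting can be shown with a small, self-contained sketch (plain Java, independent of the engine): the face normal comes from the cross product of two triangle edges, and Lambertian brightness is the clamped dot product with the light direction.

```java
public class Lambert {
    // Face normal from triangle vertices v1, v2, v3 via cross(v2-v1, v3-v1),
    // normalized to unit length.
    static double[] faceNormal(double[] v1, double[] v2, double[] v3) {
        double[] a = {v2[0] - v1[0], v2[1] - v1[1], v2[2] - v1[2]};
        double[] b = {v3[0] - v1[0], v3[1] - v1[1], v3[2] - v1[2]};
        double[] n = {
                a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]
        };
        double len = Math.sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
        return new double[]{n[0] / len, n[1] / len, n[2] / len};
    }

    // Lambertian (flat) shading: brightness = max(0, dot(L, N)).
    static double brightness(double[] light, double[] normal) {
        double d = light[0] * normal[0] + light[1] * normal[1] + light[2] * normal[2];
        return Math.max(0, d);
    }

    public static void main(String[] args) {
        // Triangle in the XY plane; its normal points along +Z.
        double[] n = faceNormal(new double[]{0, 0, 0},
                                new double[]{1, 0, 0},
                                new double[]{0, 1, 0});
        System.out.println(n[2]); // 1.0
        // Light shining along +Z hits the face at full brightness.
        System.out.println(brightness(new double[]{0, 0, 1}, n)); // 1.0
    }
}
```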

3.6. Mesh

A mesh is a collection of vertices, edges, and faces that together define the shape of a 3D object. Even curved surfaces like spheres are approximated by many small triangles — more triangles means a smoother appearance.

  • Mesh data = vertex array + index array
  • Index array avoids duplicating shared vertices
  • Cube: 8 vertices, 12 triangles
  • Smooth sphere: hundreds–thousands of triangles
  • vertices[] + indices[] → efficient storage
  • In Sixth 3D engine:
    • AbstractCoordinateShape: base class for single shapes with vertices (triangles, lines). Use when creating one primitive.
    • AbstractCompositeShape: groups multiple shapes into one object. Use for complex models that move/rotate together.
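The vertex-array-plus-index-array idea can be illustrated with a plain-Java cube. This is a sketch of the general technique, not the engine's internal storage format:

```java
public class IndexedCube {
    // 8 shared corner vertices of a unit cube -- each stored exactly once.
    static final double[][] VERTICES = {
            {0, 0, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0},  // back face corners
            {0, 0, 1}, {1, 0, 1}, {1, 1, 1}, {0, 1, 1}   // front face corners
    };

    // 12 triangles (2 per face), each referencing corners by index,
    // so shared vertices are never duplicated.
    static final int[][] TRIANGLES = {
            {0, 1, 2}, {0, 2, 3},  // back
            {4, 6, 5}, {4, 7, 6},  // front
            {0, 4, 5}, {0, 5, 1},  // bottom
            {3, 2, 6}, {3, 6, 7},  // top
            {0, 3, 7}, {0, 7, 4},  // left
            {1, 5, 6}, {1, 6, 2}   // right
    };

    public static void main(String[] args) {
        System.out.println(VERTICES.length);  // 8
        System.out.println(TRIANGLES.length); // 12
    }
}
```

Without the index array, 12 triangles would need 36 stored vertices; indexing keeps it at 8.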

3.7. Winding Order & Backface Culling

The order in which a triangle's vertices are listed determines its winding order. Counter-clockwise (CCW) typically means front-facing. Backface culling skips rendering triangles that face away from the camera — a major performance optimization.

  • CCW winding → front face (visible)
  • CW winding → back face (culled)
  • Saves ~50% of triangle rendering
  • Normal direction derived from winding order via cross(V₂-V₁, V₃-V₁)
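In screen space the winding test reduces to the sign of the Z component of that cross product. A self-contained sketch, assuming the mathematical convention of Y pointing up (with screen Y pointing down, the sign flips):

```java
public class Winding {
    // Signed-area test in screen space: the Z component of
    // cross(V2-V1, V3-V1). Positive means counter-clockwise (Y-up convention).
    static boolean isFrontFacing(double[] v1, double[] v2, double[] v3) {
        double cross = (v2[0] - v1[0]) * (v3[1] - v1[1])
                     - (v2[1] - v1[1]) * (v3[0] - v1[0]);
        return cross > 0; // CCW -> front face; CW would be culled
    }

    public static void main(String[] args) {
        double[] a = {0, 0}, b = {1, 0}, c = {0, 1};
        System.out.println(isFrontFacing(a, b, c)); // true  (CCW)
        System.out.println(isFrontFacing(a, c, b)); // false (CW -> culled)
    }
}
```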

In Sixth 3D, backface culling is optional and disabled by default; it can be enabled per shape.

3.8. Working with Colors

Sixth 3D uses its own Color class (not java.awt.Color):

import eu.svjatoslav.sixth.e3d.renderer.raster.Color;

// Using predefined colors
Color red = Color.RED;
Color green = Color.GREEN;
Color blue = Color.BLUE;

// Create custom color (R, G, B, A)
Color custom = new Color(255, 128, 64, 200); // semi-transparent orange

// Or use hex string
Color hex = new Color("FF8040CC"); // same orange with alpha

4. Source code

This program is free software, released under the Creative Commons Zero (CC0) license.

Program author:

Getting the source code:

4.1. Understanding the Sixth 3D source code

5. Future ideas

  • Read this as an example and apply improvements/fixes where applicable: http://blog.rogach.org/2015/08/how-to-create-your-own-simple-3d-render.html
  • Improve triangulation. Read: https://ianthehenry.com/posts/delaunay/
  • Partial region/frame repaint: when only one small object on the scene has changed, it would be faster to re-render only that specific area.
    • Once partial rendering works, it would be easy to add multi-core rendering support, so that each core renders its own region of the screen.
  • Anti-aliasing would improve text readability. If anti-aliasing is too expensive for every frame, it could be applied only to the last frame before an animation becomes still and the engine starts waiting for user input.

5.1. Render only visible polygons

Very high-level idea description:

  • This would significantly reduce RAM <-> CPU traffic.
  • General algorithm description:
    • For each horizontal scanline:
      • sort polygon edges from left to right
      • while iterating and drawing pixels along the screen X axis (left to right), track polygons as they appear and disappear.
        • At each polygon edge, update the Z-sorted list of active polygons.
        • Only draw the pixel from the top-most polygon.
          • Only if the polygon area is transparent/half-transparent, add colors from the polygons below.
  • As a bonus, this would allow tracking which polygons are actually visible in the final scene for each frame.
    • Such information allows further optimizations:
      • Dynamic geometry simplification:
        • Dynamically detect and replace invisible objects in the scene with a simplified bounding box.
        • Dynamically replace the bounding box with the actual object once it becomes visible.
      • Dynamically unload unused textures from RAM.
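The core of the idea — drawing only the top-most polygon at every pixel — can be sketched in plain Java. This simplified version brute-forces the span lookup per pixel; the proposal's edge-sorted active list would replace that inner loop. Span and its fields are hypothetical illustration names, not engine API:

```java
import java.util.Arrays;
import java.util.List;

public class ScanlineSketch {
    // A polygon's intersection with one scanline: an x interval at a depth.
    record Span(int xStart, int xEnd, double z, char color) {}

    // Draw one scanline: at every x, paint only the nearest (top-most) span,
    // so pixels hidden behind other polygons are never computed.
    static char[] drawScanline(int width, List<Span> spans) {
        char[] line = new char[width];
        Arrays.fill(line, '.');
        for (int x = 0; x < width; x++) {
            Span top = null;
            for (Span s : spans)
                if (x >= s.xStart() && x < s.xEnd()
                        && (top == null || s.z() < top.z()))
                    top = s;
            if (top != null) line[x] = top.color();
        }
        return line;
    }

    public static void main(String[] args) {
        List<Span> spans = List.of(
                new Span(1, 6, 2.0, 'A'),   // farther polygon
                new Span(4, 9, 1.0, 'B'));  // nearer polygon, overlapping A
        System.out.println(new String(drawScanline(10, spans)));
        // .AAABBBBB.
    }
}
```

Where spans overlap, only the nearer polygon's pixels are written, which is exactly the traffic reduction the idea is after.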

Created: 2026-03-06 Fri 01:16
