
Raycasting Strategies

  • Andy
  • January 5, 2026 at 8:05 PM

This tutorial demonstrates three different raycasting approaches in GFX-Next and explains when and why to use each of them. It introduces the underlying concepts behind ray queries in the engine and then walks through each approach with practical examples, highlighting their strengths, limitations, and typical use cases. The tutorial compares factors such as performance, accuracy, and integration complexity, helping developers choose the most appropriate method for tasks like physics queries, visibility checks, or gameplay interaction. By the end, readers will have a clear understanding of how raycasting fits into the GFX-Next architecture and how to apply each approach effectively in real projects.

Contents
  1. Purpose of This Tutorial
  2. Raycasting Is a User Decision
  3. The Ray in the World Space
    1. What is a raycast?
    2. What are raycasts used for?
    3. How do you create a ray?
      1. a) Ray between two points
      2. b) Ray from screen space (mouse position)
  4. Physics-Based Raycasting (Engine System)
    1. Description
    2. Characteristics
  5. CPU Mesh Raycasting (User-Level Utility)
    1. Description
    2. Important Requirement
    3. Use cases
  6. GPU Mesh Raycasting (Compute Shader)
    1. Description
    2. Characteristics
  7. Visual Debugging (User Logic)
  8. Integration With Gameplay Logic
  9. Choosing the Right Approach
  10. Design Philosophy
  11. Summary

1. Purpose of This Tutorial

This tutorial shows how a developer can implement multiple raycasting strategies in a GFX-Next application and switch between them at runtime. You can find the project for this tutorial in the examples repository: GFX-Examples/Examples/Raycasting/Raycasting at main · Andy16823/GFX-Examples.

If you have problems executing the example solution, you also need to install the NuGet package NuGet Gallery | GFX.BulletSharp 1.0.1, which includes the binaries and bindings for Bullet3.

The goal is not to document engine internals, but to demonstrate:

  • how different systems can coexist
  • how GFX-Next exposes the necessary low-level hooks
  • how trade-offs are handled explicitly by the user

2. Raycasting Is a User Decision

GFX-Next does not provide a single unified raycasting abstraction.

Instead, developers choose between:

  • physics-based queries
  • CPU mesh intersection
  • GPU-based raycasting (compute shaders)

Each approach serves a different purpose.

The example application uses a user-defined enum to switch between them:

C#
enum RaycastType
{
    Physics,
    CPUMesh,
    GPUMesh
}

This enum exists only in the example.
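Although the enum itself is user code, it makes the runtime switch explicit. Below is a sketch of how the example's update loop can dispatch on it, assembled from the calls shown in sections 4–6. The fields `_raycastType`, `_scene`, `_testCube`, and `_computeRaycaster` are assumptions based on the example project, not documented engine API:

```csharp
// Sketch: dispatching between the three strategies each frame.
// The fields below are assumed from the example project; the raycast
// calls themselves are the ones shown in sections 4-6 of this tutorial.
switch (_raycastType)
{
    case RaycastType.Physics:
        // Collider-based query against the physics world.
        result = Raycast.PerformRaycast(ray, _scene.PhysicsHandler);
        break;

    case RaycastType.CPUMesh:
        // Explicit ray-triangle test against CPU-side mesh data.
        result = MeshRaycast.IntersectsMesh(ray, _testCube.Transform, _testCube.Mesh);
        if (result.hit)
        {
            result.hitElement = _testCube;
        }
        break;

    case RaycastType.GPUMesh:
        // User-defined compute shader raycast.
        result = _computeRaycaster.PerformRaycast(ray, _testCube.Transform, _testCube.Mesh);
        if (result.hit)
        {
            result.hitElement = _testCube;
        }
        break;
}
```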


3. The Ray in the World Space

Before we can start using raycasts, we need to understand what a raycast is, what it is used for, and how a ray is created.

1. What is a raycast?

Raycasting is one of the most fundamental techniques in game development.

A ray is an invisible, mathematical line that starts at a point and extends infinitely in a given direction.
A raycast tests whether this ray intersects with objects in the world and, if so, which object it hits first.

Raycasts are commonly used to let the player interact with the game world.

For example, in a role-playing game such as World of Warcraft, clicking on an NPC performs a raycast from the camera through the mouse cursor into the scene. The game then determines which object the player is pointing at.

2. What are raycasts used for?

Raycasts are used in many core gameplay and engine systems, including:

  • Object selection and picking
  • Player interaction (clicking, using, inspecting)
  • Weapon hit detection in first-person shooters
  • Visibility checks and line-of-sight tests
  • AI perception and sensing
  • Editor tools (gizmos, handles, selection)

In all of these cases, raycasting answers a simple but crucial question:

Quote

“What does this line hit first?”

3. How do you create a ray?

There are multiple ways to construct a ray. The two most common types are:

a) Ray between two points

This is the simplest form of ray creation.
You define:

  • an origin point
  • a direction (or a target point)

This type of ray is often used for physics queries, AI checks, or bullet trajectories.
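Mathematically, such a ray is simply origin + t · direction for t ≥ 0, with the direction derived from the two points. A minimal, engine-independent sketch of this using System.Numerics (the method name `PointAlongRay` is just for illustration):

```csharp
using System;
using System.Numerics;

class RayMathDemo
{
    // Returns the point at parameter t along a ray: origin + t * direction.
    // The direction is normalized, so t is the distance from the origin.
    static Vector3 PointAlongRay(Vector3 origin, Vector3 direction, float t)
        => origin + t * Vector3.Normalize(direction);

    static void Main()
    {
        // Build a ray between two points: origin and target.
        var origin = Vector3.Zero;
        var target = new Vector3(0, 0, 10);
        var direction = target - origin; // direction from the two points

        var p = PointAlongRay(origin, direction, 5f);
        Console.WriteLine(p); // p = (0, 0, 5), halfway to the target
    }
}
```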

b) Ray from screen space (mouse position)

This is the most common ray type for player interaction.

Here, the ray:

  • starts at the camera
  • passes through a 2D screen position (e.g. the mouse cursor)
  • continues into the 3D world

This process requires projecting 2D screen coordinates into world space, which involves the camera’s projection and view matrices.

GFX provides a helper function for this exact purpose.
It takes the active camera and a 2D screen position and returns a world-space ray that can be used for raycasting.

C#
var viewport = Window.GetViewport();
var ray = Ray.FromScreenPoint(_camera, viewport, viewport.Width / 2, viewport.Height / 2);
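For reference, here is roughly what such a helper has to do internally. This is a generic unprojection sketch using System.Numerics, not the engine's implementation:

```csharp
using System;
using System.Numerics;

class ScreenRayDemo
{
    // Unprojects a 2D screen position into a world-space ray direction.
    // Generic sketch of the standard technique, not engine code.
    static Vector3 ScreenToWorldDirection(
        float screenX, float screenY, float width, float height,
        Matrix4x4 view, Matrix4x4 projection, Vector3 cameraPosition)
    {
        // 1. Screen coordinates -> normalized device coordinates (-1..1).
        float ndcX = (2f * screenX / width) - 1f;
        float ndcY = 1f - (2f * screenY / height); // screen y is flipped

        // 2. Invert the combined view-projection matrix.
        Matrix4x4.Invert(view * projection, out var invViewProj);

        // 3. Unproject a far-plane point and divide by w.
        var clip = new Vector4(ndcX, ndcY, 1f, 1f);
        var world = Vector4.Transform(clip, invViewProj);
        var worldPoint = new Vector3(world.X, world.Y, world.Z) / world.W;

        // 4. The ray direction runs from the camera through that point.
        return Vector3.Normalize(worldPoint - cameraPosition);
    }

    static void Main()
    {
        var camPos = new Vector3(0, 0, 5);
        var view = Matrix4x4.CreateLookAt(camPos, Vector3.Zero, Vector3.UnitY);
        var proj = Matrix4x4.CreatePerspectiveFieldOfView(
            MathF.PI / 3f, 16f / 9f, 0.1f, 100f);

        // Center of an 800x450 viewport: the ray points straight ahead (-Z).
        var dir = ScreenToWorldDirection(400, 225, 800, 450, view, proj, camPos);
        Console.WriteLine(dir); // approximately (0, 0, -1)
    }
}
```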

Now that we understand what raycasts are, we can select a system to handle the raycast for us.


4. Physics-Based Raycasting (Engine System)

Description

Physics raycasts operate on the physics world, using colliders and acceleration structures.

C#
result = Raycast.PerformRaycast(ray, _scene.PhysicsHandler);

Characteristics

  • Uses existing physics colliders
  • Very fast
  • Suitable for gameplay logic

Limitations:

  • Collider-based only
  • No triangle-level precision

This is typically the default choice for gameplay.


5. CPU Mesh Raycasting (User-Level Utility)

Description

CPU mesh raycasting performs explicit ray–triangle intersection on mesh data.

C#
result = MeshRaycast.IntersectsMesh(ray, _testCube.Transform, _testCube.Mesh);
if(result.hit)
{
    result.hitElement = _testCube;
}
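A common algorithm for the underlying ray–triangle test is Möller–Trumbore. The following engine-independent sketch illustrates the idea; it is not necessarily what `MeshRaycast.IntersectsMesh` uses internally:

```csharp
using System;
using System.Numerics;

class TriangleRaycastDemo
{
    // Möller-Trumbore ray-triangle intersection.
    // Returns true and the distance t along the ray if the ray hits the triangle.
    static bool RayIntersectsTriangle(
        Vector3 origin, Vector3 direction,
        Vector3 v0, Vector3 v1, Vector3 v2, out float t)
    {
        t = 0f;
        const float epsilon = 1e-7f;

        var edge1 = v1 - v0;
        var edge2 = v2 - v0;
        var h = Vector3.Cross(direction, edge2);
        float a = Vector3.Dot(edge1, h);
        if (MathF.Abs(a) < epsilon) return false; // ray parallel to triangle

        float f = 1f / a;
        var s = origin - v0;
        float u = f * Vector3.Dot(s, h);
        if (u < 0f || u > 1f) return false; // outside first barycentric bound

        var q = Vector3.Cross(s, edge1);
        float v = f * Vector3.Dot(direction, q);
        if (v < 0f || u + v > 1f) return false; // outside second bound

        t = f * Vector3.Dot(edge2, q);
        return t > epsilon; // hit must lie in front of the ray origin
    }

    static void Main()
    {
        // Triangle in the z = 0 plane, ray shooting down -Z from z = 5.
        bool hit = RayIntersectsTriangle(
            new Vector3(0.25f, 0.25f, 5f), new Vector3(0, 0, -1),
            new Vector3(0, 0, 0), new Vector3(1, 0, 0), new Vector3(0, 1, 0),
            out float t);
        Console.WriteLine($"{hit}, t = {t}"); // True, t = 5
    }
}
```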

Important Requirement

CPU mesh raycasting requires CPU-side mesh data.

For this reason, the example disables automatic CPU memory freeing:

C#
this.FreeCPUResources = false;

This decision is intentional and explicit.

Use cases

  • editor picking
  • debugging tools
  • exact hit testing
  • offline queries

6. GPU Mesh Raycasting (Compute Shader)

Description

GPU raycasting is implemented entirely by the user via a compute shader.

C#
result = _computeRaycaster.PerformRaycast(ray, _testCube.Transform, _testCube.Mesh);
if (result.hit)
{
    result.hitElement = _testCube;
}

The engine provides:

  • compute shader support
  • buffer management
  • synchronization primitives

The raycasting logic itself is user-defined.

Characteristics

  • Scales well for large meshes
  • Highly parallel
  • No CPU triangle iteration

Trade-offs:

  • More setup complexity
  • GPU–CPU synchronization cost
  • You need to turn off VSync in order to use this with GFX.
  • Overkill for simple queries

7. Visual Debugging (User Logic)

All raycast methods in the example update a shared debug element:

C#
debugSphere.Transform.Position = hitPosition;

This visualization:

  • is not part of the engine
  • demonstrates correctness
  • keeps rendering decoupled from logic

8. Integration With Gameplay Logic

Raycasting integrates cleanly with gameplay behaviors.

In the example:

  • movement is handled by a user-defined first-person behavior
  • the camera is synced manually
  • ray origin and direction are fully controlled

None of this logic is engine-mandated.


9. Choosing the Right Approach

Approach         | Typical Use
Physics raycast  | Gameplay, interaction
CPU mesh raycast | Editor tools, precision
GPU raycast      | Large meshes, many rays

GFX-Next deliberately does not hide these choices.


10. Design Philosophy

This tutorial highlights a core GFX-Next principle:

Quote

The engine provides systems —
the user decides how to combine them.

There is:

  • no forced abstraction
  • no automatic fallback
  • no implicit behavior

11. Summary

This example demonstrates how advanced features in GFX-Next:

  • are composed at the user level
  • expose performance and precision trade-offs
  • work identically in runtime and editor contexts
  • remain fully explicit and debuggable
