- Purpose of This Tutorial
- Raycasting Is a User Decision
- The Ray in World Space
- Physics-Based Raycasting (Engine System)
- CPU Mesh Raycasting (User-Level Utility)
- GPU Mesh Raycasting (Compute Shader)
- Visual Debugging (User Logic)
- Integration With Gameplay Logic
- Choosing the Right Approach
- Design Philosophy
- Summary
1. Purpose of This Tutorial
This tutorial shows how a developer can implement multiple raycasting strategies in a GFX-Next application and switch between them at runtime. You can find the project for this tutorial in the examples repository: GFX-Examples/Examples/Raycasting/Raycasting at main · Andy16823/GFX-Examples.
If you have problems executing the example solution, you also need to install the NuGet package GFX.BulletSharp 1.0.1 (NuGet Gallery | GFX.BulletSharp 1.0.1), which includes the binaries and bindings for Bullet3.
The goal is not to document engine internals, but to demonstrate:
- how different systems can coexist
- how GFX-Next exposes the necessary low-level hooks
- how trade-offs are handled explicitly by the user
2. Raycasting Is a User Decision
GFX-Next does not provide a single unified raycasting abstraction.
Instead, developers choose between:
- physics-based queries
- CPU mesh intersection
- GPU-based raycasting (compute shaders)
Each approach serves a different purpose.
The example application uses a user-defined enum to switch between them:
enum RaycastType
{
    Physics,
    CPUMesh,
    GPUMesh
}
This enum exists only in the example.
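Dispatching on the enum can then be a simple branch. The sketch below combines the three calls shown later in this tutorial; `RaycastResult` and the fields `_scene`, `_testCube` and `_computeRaycaster` are the example's own names, so treat this as an outline of the flow rather than engine API.

```csharp
// Sketch: route one ray through the strategy selected by the enum.
RaycastResult PerformSelectedRaycast(Ray ray, RaycastType type)
{
    switch (type)
    {
        case RaycastType.Physics:
            // Engine-side query against the physics world.
            return Raycast.PerformRaycast(ray, _scene.PhysicsHandler);
        case RaycastType.CPUMesh:
            // User-level ray-triangle test on CPU-side mesh data.
            return MeshRaycast.IntersectsMesh(ray, _testCube.Transform, _testCube.Mesh);
        case RaycastType.GPUMesh:
            // User-defined compute-shader raycaster.
            return _computeRaycaster.PerformRaycast(ray, _testCube.Transform, _testCube.Mesh);
        default:
            throw new ArgumentOutOfRangeException(nameof(type));
    }
}
```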
3. The Ray in World Space
Before we can start using raycasts, we need to understand what a raycast is, what it is used for, and how a ray is created.
What is a raycast?
Raycasting is one of the most fundamental techniques in game development.
A ray is an invisible, mathematical line that starts at a point and extends infinitely in a given direction.
A raycast tests whether this ray intersects with objects in the world and, if so, which object it hits first.
Raycasts are commonly used to let the player interact with the game world.
For example, in a role-playing game such as World of Warcraft, clicking on an NPC performs a raycast from the camera through the mouse cursor into the scene. The game then determines which object the player is pointing at.
What are raycasts used for?
Raycasts are used in many core gameplay and engine systems, including:
- Object selection and picking
- Player interaction (clicking, using, inspecting)
- Weapon hit detection in first-person shooters
- Visibility checks and line-of-sight tests
- AI perception and sensing
- Editor tools (gizmos, handles, selection)
In all of these cases, raycasting answers a simple but crucial question:
“What does this line hit first?”
How do you create a ray?
There are multiple ways to construct a ray. The two most common types are:
a) Ray between two points
This is the simplest form of ray creation.
You define:
- an origin point
- a direction (or a target point)
This type of ray is often used for physics queries, AI checks, or bullet trajectories.
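A point-to-point ray can be sketched as follows. The `Ray` constructor taking an origin and a normalized direction is an assumption for illustration; in your codebase a ray may simply be an origin/direction pair.

```csharp
using System.Numerics;

// Sketch: build a ray from an origin toward a target point.
Vector3 origin = new Vector3(0f, 1.8f, 0f);   // e.g. the eye position
Vector3 target = new Vector3(5f, 0f, 10f);    // the point to aim at

// Normalize so distances along the ray are in world units.
Vector3 direction = Vector3.Normalize(target - origin);
var ray = new Ray(origin, direction);          // Ray type assumed
```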
b) Ray from screen space (mouse position)
This is the most common ray type for player interaction.
Here, the ray:
- starts at the camera
- passes through a 2D screen position (e.g. the mouse cursor)
- continues into the 3D world
This process requires projecting 2D screen coordinates into world space, which involves the camera’s projection and view matrices.
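Conceptually this is an "unproject": the screen position is converted to normalized device coordinates and multiplied by the inverse view-projection matrix. A hedged sketch of that math (not the engine's actual implementation) using System.Numerics could look like this; the `Ray` type is assumed, and the near-plane z of -1 assumes OpenGL-style clip space.

```csharp
using System.Numerics;

// Sketch: unproject a screen point into a world-space ray.
// System.Numerics uses a row-vector convention, so the combined
// matrix is view * proj; other math libraries may need proj * view.
static Ray RayFromScreen(float x, float y, float width, float height,
                         Matrix4x4 view, Matrix4x4 proj)
{
    // Screen -> normalized device coordinates (-1..1, y flipped).
    float ndcX = 2f * x / width - 1f;
    float ndcY = 1f - 2f * y / height;

    Matrix4x4.Invert(view * proj, out Matrix4x4 invViewProj);

    // Unproject one point on the near plane and one on the far plane.
    Vector4 near = Vector4.Transform(new Vector4(ndcX, ndcY, -1f, 1f), invViewProj);
    Vector4 far  = Vector4.Transform(new Vector4(ndcX, ndcY,  1f, 1f), invViewProj);
    Vector3 nearW = new Vector3(near.X, near.Y, near.Z) / near.W;  // perspective divide
    Vector3 farW  = new Vector3(far.X, far.Y, far.Z) / far.W;

    return new Ray(nearW, Vector3.Normalize(farW - nearW));
}
```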
GFX provides a helper function for this exact purpose.
It takes the active camera and a 2D screen position and returns a world-space ray that can be used for raycasting.
var viewport = Window.GetViewport();
var ray = Ray.FromScreenPoint(_camera, viewport, viewport.Width / 2, viewport.Height / 2);
Now that we understand what a raycast is, we can select the system that handles the raycast for us.
4. Physics-Based Raycasting (Engine System)
Description
Physics raycasts operate on the physics world, using colliders and acceleration structures.
result = Raycast.PerformRaycast(ray, _scene.PhysicsHandler);
Characteristics
- Uses existing physics colliders
- Very fast
- Suitable for gameplay logic
Limitations:
- Collider-based only
- No triangle-level precision
This is typically the default choice for gameplay.
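In gameplay code, the result is usually checked and acted on directly. A minimal sketch, assuming the result exposes `hit`, `hitElement` and a hit position as the snippets in this tutorial suggest:

```csharp
// Sketch: a physics raycast driving a simple interaction.
var result = Raycast.PerformRaycast(ray, _scene.PhysicsHandler);
if (result.hit)
{
    // 'hitElement' and 'hitPosition' are assumptions based on the
    // other snippets in this tutorial.
    Console.WriteLine($"Hit {result.hitElement} at {result.hitPosition}");
}
```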
5. CPU Mesh Raycasting (User-Level Utility)
Description
CPU mesh raycasting performs explicit ray–triangle intersection on mesh data.
result = MeshRaycast.IntersectsMesh(ray, _testCube.Transform, _testCube.Mesh);
if (result.hit)
{
    result.hitElement = _testCube;
}
Important Requirement
CPU mesh raycasting requires CPU-side mesh data.
For this reason, the example disables automatic CPU memory freeing:
this.FreeCPUResources = false;
This decision is intentional and explicit.
Use cases
- editor picking
- debugging tools
- exact hit testing
- offline queries
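Under the hood, a CPU mesh raycast boils down to testing the ray against each triangle of the mesh. A self-contained sketch of the classic Möller–Trumbore intersection test (an illustration, not the engine's actual implementation):

```csharp
using System;
using System.Numerics;

// Sketch: Möller–Trumbore ray-triangle intersection.
// Returns the distance t along the ray, or null on a miss.
static float? IntersectTriangle(Vector3 origin, Vector3 dir,
                                Vector3 v0, Vector3 v1, Vector3 v2)
{
    const float Epsilon = 1e-6f;
    Vector3 e1 = v1 - v0;
    Vector3 e2 = v2 - v0;
    Vector3 p = Vector3.Cross(dir, e2);
    float det = Vector3.Dot(e1, p);
    if (MathF.Abs(det) < Epsilon) return null;   // ray parallel to triangle

    float invDet = 1f / det;
    Vector3 tvec = origin - v0;
    float u = Vector3.Dot(tvec, p) * invDet;
    if (u < 0f || u > 1f) return null;           // outside barycentric range

    Vector3 q = Vector3.Cross(tvec, e1);
    float v = Vector3.Dot(dir, q) * invDet;
    if (v < 0f || u + v > 1f) return null;

    float t = Vector3.Dot(e2, q) * invDet;
    return t >= 0f ? t : (float?)null;           // hit only in front of origin
}
```

A CPU mesh raycaster runs this (or an equivalent) over every triangle and keeps the smallest t, which is why it needs the mesh data to stay resident in CPU memory.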
6. GPU Mesh Raycasting (Compute Shader)
Description
GPU raycasting is implemented entirely by the user via a compute shader.
result = _computeRaycaster.PerformRaycast(ray, _testCube.Transform, _testCube.Mesh);
if (result.hit)
{
    result.hitElement = _testCube;
}
The engine provides:
- compute shader support
- buffer management
- synchronization primitives
The raycasting logic itself is user-defined.
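The user-side flow typically follows a fixed pattern: upload triangle and ray data into GPU buffers, dispatch one invocation per triangle, then synchronize and read back the closest hit. A hedged outline of that flow, where every identifier is illustrative (GFX-Next supplies only the compute and buffer plumbing):

```csharp
// Sketch of a user-level compute raycast; all names are assumptions.
RaycastResult PerformGpuRaycast(Ray ray, Transform transform, Mesh mesh)
{
    // 1. Upload world-space triangles and the ray into GPU buffers.
    _triangleBuffer.SetData(BuildWorldSpaceTriangles(mesh, transform));
    _rayBuffer.SetData(new[] { ray.Origin, ray.Direction });

    // 2. Dispatch one invocation per triangle; the compute shader runs
    //    the intersection test and atomically keeps the nearest hit.
    _computeShader.Dispatch((mesh.TriangleCount + 63) / 64, 1, 1);

    // 3. Synchronize and read back the single result struct — this
    //    GPU-CPU sync point is the main cost of the approach.
    _computeShader.WaitForCompletion();
    return _resultBuffer.ReadBack<RaycastResult>();
}
```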
Characteristics
- Scales well for large meshes
- Highly parallel
- No CPU triangle iteration
Trade-offs:
- More setup complexity
- GPU–CPU synchronization cost
- You need to turn off VSync in order to use this with GFX.
- Overkill for simple queries
7. Visual Debugging (User Logic)
All raycast methods in the example update a shared debug element:
debugSphere.Transform.Position = hitPosition;
This visualization:
- is not part of the engine
- demonstrates correctness
- keeps rendering decoupled from logic
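A common refinement, not shown in the snippet above, is hiding the marker when nothing is hit. A sketch, with the `Visible` flag and `hitPosition` member assumed:

```csharp
// Sketch: move the debug marker on a hit, hide it otherwise.
if (result.hit)
{
    debugSphere.Visible = true;                      // 'Visible' is assumed
    debugSphere.Transform.Position = result.hitPosition;
}
else
{
    debugSphere.Visible = false;
}
```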
8. Integration With Gameplay Logic
Raycasting integrates cleanly with gameplay behaviors.
In the example:
- movement is handled by a user-defined first-person behavior
- the camera is synced manually
- ray origin and direction are fully controlled
None of this logic is engine-mandated.
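Put together, a per-frame update in the spirit of the example might look like this; the update hook, behavior field and camera-sync line are assumptions about the example's user code, while the `Ray` and `Raycast` calls are the ones quoted earlier.

```csharp
// Sketch: per-frame integration of movement, camera and raycast.
void OnUpdate()
{
    _firstPersonBehavior.Update();                  // user-defined movement (assumed)
    _camera.Position = _player.Transform.Position;  // manual camera sync (assumed members)

    // Cast from the screen center, as in the example.
    var viewport = Window.GetViewport();
    var ray = Ray.FromScreenPoint(_camera, viewport,
                                  viewport.Width / 2, viewport.Height / 2);
    var result = Raycast.PerformRaycast(ray, _scene.PhysicsHandler);
}
```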
9. Choosing the Right Approach
GFX-Next deliberately does not hide these choices. As the sections above show, physics raycasts are the usual default for gameplay, CPU mesh raycasts pay off where triangle-level precision matters (editor picking, debugging, exact hit tests), and GPU raycasts only become worthwhile when meshes are large enough to justify the setup and synchronization cost.
10. Design Philosophy
This tutorial highlights a core GFX-Next principle:
“The engine provides systems — the user decides how to combine them.”
There is:
- no forced abstraction
- no automatic fallback
- no implicit behavior
11. Summary
This example demonstrates how advanced features in GFX-Next:
- are composed at the user level
- expose performance and precision trade-offs
- work identically in runtime and editor contexts
- remain fully explicit and debuggable