License: CC0 (public domain)
Neural LOD Booster — Tier 1.0 + Tier 1.1 Design Document
Mesh‑Conditioned Neural Appearance Layer for Unreal Engine 5.7
0. Overview
Neural LOD Booster (NLB) is a two‑tier neural appearance system designed to enhance low‑poly meshes using lightweight CNNs running entirely on the GPU inside Unreal Engine 5.7.
We define two concrete tiers:
Tier 1.0 — Pretrained Style Transfer CNN (No Training Required)
A drop‑in, off‑the‑shelf neural stylization pass.
- Uses a pretrained ONNX style‑transfer CNN
- Requires no training
- Applies stylized shading to low‑poly meshes
- Uses a simple RGB input (albedo plus basic diffuse lighting)
- Runs as a GPU compute pass
- Fully turn‑key
Tier 1.1 — SH‑CNN Neural Shading (Lighting‑Aware, Requires Training)
A lighting‑aware neural shading model.
- Requires Tier 1.0 infrastructure to be working
- Adds a custom SH‑CNN ONNX model
- Uses full G‑buffer + style embedding + lighting vector
- Performs neural shading conditioned on lighting
- Requires a small training pipeline
- Produces high‑fidelity shading from low‑poly meshes
1. Plugin Structure (Shared Across Tiers)
Engine/Plugins/Runtime/NeuralLOD/
    NeuralLOD.uplugin
    Content/NeuralModels/
        NeuralLOD_StyleTransfer_Default.onnx
        NeuralLOD_SHCNN_Default.onnx
    Source/NeuralLOD/
        NeuralLOD.Build.cs
        Public/
            NeuralLODInference.h
            NeuralLODComponent.h
            NeuralLODPass_Style.h
            NeuralLODPass_SH.h
        Private/
            NeuralLODInference.cpp
            NeuralLODComponent.cpp
            NeuralLODPass_Style.cpp
            NeuralLODPass_SH.cpp
            NeuralLODExtension_Style.cpp
            NeuralLODExtension_SH.cpp
2. UnrealBuildTool Setup (ONNX Runtime + DirectML)
using UnrealBuildTool;
using System.IO;

public class NeuralLOD : ModuleRules
{
    public NeuralLOD(ReadOnlyTargetRules Target) : base(Target)
    {
        PCHUsage = PCHUsageMode.UseExplicitOrSharedPCHs;

        PublicIncludePaths.Add(Path.Combine(ModuleDirectory, "ThirdParty/ONNXRuntime/Include"));

        string LibPath = Path.Combine(ModuleDirectory, "ThirdParty/ONNXRuntime/Lib");
        string BinPath = Path.Combine(ModuleDirectory, "ThirdParty/ONNXRuntime/Bin");

        PublicAdditionalLibraries.Add(Path.Combine(LibPath, "onnxruntime.lib"));
        RuntimeDependencies.Add(Path.Combine(BinPath, "onnxruntime.dll"));
        // DirectML execution provider; DirectML.dll ships alongside onnxruntime.dll
        RuntimeDependencies.Add(Path.Combine(BinPath, "DirectML.dll"));

        PublicDefinitions.Add("USE_ONNXRUNTIME=1");

        PublicDependencyModuleNames.AddRange(new string[] {
            "Core", "CoreUObject", "Engine",
            "RenderCore", "RHI"
        });
    }
}
3. Tier 1.0 — Pretrained Style Transfer CNN
3.1 Model
Model name:
NeuralLOD_StyleTransfer_Default.onnx
Input: [1, 3, H, W] (RGB)
Output: [1, 3, H, W] (stylized RGB)
This is a Johnson‑style feed‑forward style transfer CNN.
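The model consumes a planar NCHW tensor rather than the interleaved RGBA layout of a typical render target, so the input must be repacked before inference. A minimal CPU reference of that conversion (the function name is illustrative; the GPU path performs the same index math in a compute shader):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Convert an interleaved 8-bit RGBA image (HWC) into a planar float
// tensor with shape [1, 3, H, W] (NCHW), normalized to [0, 1].
// Alpha is dropped; the model consumes RGB only.
std::vector<float> PackRGBToNCHW(const std::vector<uint8_t>& Rgba,
                                 size_t Width, size_t Height)
{
    std::vector<float> Tensor(3 * Height * Width);
    const size_t Plane = Height * Width; // elements per channel plane
    for (size_t y = 0; y < Height; ++y)
    {
        for (size_t x = 0; x < Width; ++x)
        {
            const size_t Pixel = y * Width + x;
            for (size_t c = 0; c < 3; ++c) // R, G, B planes
            {
                Tensor[c * Plane + Pixel] = Rgba[Pixel * 4 + c] / 255.0f;
            }
        }
    }
    return Tensor;
}
```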
3.2 Download Instructions
Example PowerShell:
$uri = "https://example.com/models/NeuralLOD_StyleTransfer_Default.onnx"
$out = "NeuralLOD_StyleTransfer_Default.onnx"
Invoke-WebRequest -Uri $uri -OutFile $out
Move-Item $out "Engine/Plugins/Runtime/NeuralLOD/Content/NeuralModels/"
3.3 Tier 1.0 Input Packing Shader
FNeuralPackCS_Style.hlsl
RWTexture2D<float4> OutRGB;
Texture2D<float4> AlbedoTex;
Texture2D<float4> NormalTex;
float3 LightDir;

[numthreads(8,8,1)]
void Main(uint3 DTid : SV_DispatchThreadID)
{
    // GBufferA stores the world-space normal encoded as N * 0.5 + 0.5
    float3 N = normalize(NormalTex[DTid.xy].xyz * 2.0f - 1.0f);
    float3 A = AlbedoTex[DTid.xy].rgb;

    // Lambert term with a 0.2 ambient floor; LightDir points from the
    // light toward the scene, hence the negation.
    float NdotL = saturate(dot(N, -LightDir));
    float3 Lit = A * (0.2 + 0.8 * NdotL);

    OutRGB[DTid.xy] = float4(Lit, 1.0);
}
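The shading term the shader computes is easy to sanity-check on the CPU. A plain-C++ reference of the same math (struct and function names are illustrative, not engine types):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Mirrors the packing shader: albedo lit by one directional light with
// a 0.2 ambient floor, i.e. Lit = A * (0.2 + 0.8 * saturate(dot(N, -L))).
Vec3 PackLitRGB(Vec3 Albedo, Vec3 Normal, Vec3 LightDir)
{
    const Vec3 ToLight = {-LightDir.x, -LightDir.y, -LightDir.z};
    const float NdotL = std::clamp(Dot(Normal, ToLight), 0.0f, 1.0f);
    const float Shade = 0.2f + 0.8f * NdotL;
    return Vec3{Albedo.x * Shade, Albedo.y * Shade, Albedo.z * Shade};
}
```

A surface facing the light keeps its full albedo; a surface facing away falls to the 0.2 ambient floor.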
3.4 Tier 1.0 Inference Wrapper
class FNeuralLODInference_Style
{
public:
    FNeuralLODInference_Style(const FString& ModelPath);

    bool RunInferenceGPU(
        FRHITexture* InputTexture,
        FRHIUnorderedAccessView* OutputUAV,
        int32 Width,
        int32 Height);

private:
    Ort::Env Env;
    TUniquePtr<Ort::Session> Session;
    Ort::SessionOptions SessionOptions;
};
3.5 Tier 1.0 Render Pass
void FNeuralLODPass_Style::AddPass(
    FRDGBuilder& GraphBuilder,
    const FSceneView& View,
    const FSceneTextures& SceneTextures)
{
    const FIntPoint Resolution = SceneTextures.SceneColor->Desc.Extent;

    FRDGTextureDesc Desc = FRDGTextureDesc::Create2D(
        Resolution, PF_A32B32G32R32F,
        FClearValueBinding::None,
        TexCreate_ShaderResource | TexCreate_UAV);
    FRDGTextureRef InputRGB = GraphBuilder.CreateTexture(Desc, TEXT("NeuralLOD_Style_Input"));

    // Pack RGB
    {
        auto* Params = GraphBuilder.AllocParameters<FNeuralPackCS_Style::FParameters>();
        // GBufferC holds base color; GBufferB is metallic/specular/roughness
        Params->AlbedoTex = SceneTextures.GBufferC;
        Params->NormalTex = SceneTextures.GBufferA;
        Params->OutRGB = GraphBuilder.CreateUAV(InputRGB);
        Params->LightDir = FVector3f(0.3f, 0.5f, -0.8f);

        TShaderMapRef<FNeuralPackCS_Style> CS(GetGlobalShaderMap(GMaxRHIFeatureLevel));
        FComputeShaderUtils::AddPass(
            GraphBuilder,
            RDG_EVENT_NAME("NeuralLOD_Style_PackRGB"),
            CS, Params,
            // Round up so non-multiple-of-8 resolutions are fully covered
            FComputeShaderUtils::GetGroupCount(Resolution, 8));
    }

    // Run inference once the packed input is ready. Production code should
    // declare these resources in pass parameters so RDG inserts barriers.
    {
        FRDGTextureUAVRef OutputUAV = GraphBuilder.CreateUAV(SceneTextures.SceneColor);
        GraphBuilder.AddPass(
            RDG_EVENT_NAME("NeuralLOD_Style_Inference"),
            ERDGPassFlags::Compute | ERDGPassFlags::NeverCull,
            [this, InputRGB, OutputUAV, Resolution](FRHICommandListImmediate&)
            {
                Inference->RunInferenceGPU(
                    InputRGB->GetRHI(), OutputUAV->GetRHI(),
                    Resolution.X, Resolution.Y);
            });
    }
}
4. Tier 1.1 — SH‑CNN Neural Shading (Lighting‑Aware)
Requires Tier 1.0 infrastructure to be implemented and working.
4.1 Model
Model name:
NeuralLOD_SHCNN_Default.onnx
Input: [1, 16, H, W]
Output: [1, 3, H, W]
Channel layout:
| Channels | Description |
|---|---|
| 0–2 | Normal.xyz |
| 3–5 | Albedo.rgb |
| 6 | Roughness |
| 7 | Metalness |
| 8 | Depth |
| 9–12 | StyleEmbedding[0..3] |
| 13–15 | LightingVector.xyz |
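The packing shader writes these 16 channels as four RGBA32F planes, so plane p, component k holds global channel 4 * p + k. A CPU reference of the unpacking into the [1, 16, H, W] tensor (names are illustrative; the runtime does this mapping when binding the four textures as one input):

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Unpack four interleaved RGBA32F planes (each H*W*4 floats) into a
// planar [1, 16, H, W] tensor. Plane p, component k -> channel 4*p + k.
std::vector<float> PlanesToNCHW(
    const std::array<std::vector<float>, 4>& Planes,
    size_t Width, size_t Height)
{
    const size_t PlanePixels = Height * Width;
    std::vector<float> Tensor(16 * PlanePixels);
    for (size_t p = 0; p < 4; ++p)
    {
        for (size_t Pixel = 0; Pixel < PlanePixels; ++Pixel)
        {
            for (size_t k = 0; k < 4; ++k)
            {
                const size_t Channel = 4 * p + k; // 0..15
                Tensor[Channel * PlanePixels + Pixel] = Planes[p][Pixel * 4 + k];
            }
        }
    }
    return Tensor;
}
```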
4.2 Download Instructions
$uri = "https://example.com/models/NeuralLOD_SHCNN_Default.onnx"
$out = "NeuralLOD_SHCNN_Default.onnx"
Invoke-WebRequest -Uri $uri -OutFile $out
Move-Item $out "Engine/Plugins/Runtime/NeuralLOD/Content/NeuralModels/"
4.3 Most‑Significant Light Selector (C++)
struct FNeuralLightingContext
{
    FVector LightingDirection = FVector(0, 0, 1);
    float LightingIntensity = 0.0f;
    bool bHasValidLight = false;
};

static bool DoesLightAffectActor(
    const FLightSceneProxy* LightProxy,
    const FLightingChannels& ActorChannels)
{
    const FLightingChannels LightChannels = LightProxy->GetLightingChannelMask();
    return (LightChannels.bChannel0 && ActorChannels.bChannel0) ||
           (LightChannels.bChannel1 && ActorChannels.bChannel1) ||
           (LightChannels.bChannel2 && ActorChannels.bChannel2);
}

FNeuralLightingContext SelectMostSignificantLight(
    const FScene* Scene,
    const FSceneView& View,
    const AActor* Actor)
{
    FNeuralLightingContext Result;
    if (!Scene || !Actor)
    {
        return Result;
    }

    const FLightingChannels ActorChannels = Actor->GetLightingChannels();
    const FVector ActorPosition = Actor->GetActorLocation();
    float BestScore = 0.0f;

    for (const FLightSceneInfoCompact& LightInfo : Scene->Lights)
    {
        const FLightSceneInfo* Light = LightInfo.LightSceneInfo;
        const FLightSceneProxy* Proxy = Light->Proxy;

        if (!DoesLightAffectActor(Proxy, ActorChannels))
        {
            continue;
        }

        const bool bDirectional = Proxy->GetLightType() == LightType_Directional;
        const FVector LightPos = Proxy->GetPosition();
        const FVector ToActor = ActorPosition - LightPos;
        const float Distance = ToActor.Size();

        // A directional light's proxy position is not a world location,
        // so distance attenuation only applies to local lights.
        const float DistanceFactor = bDirectional ? 1.0f : 1.0f / (1.0f + Distance);
        const float IntensityFactor = Proxy->GetColor().GetLuminance();

        const FVector LightDir = bDirectional
            ? -Proxy->GetDirection()
            : ToActor.GetSafeNormal();

        const FVector ViewDir = View.GetViewDirection();
        const float DirectionFactor = FMath::Max(FVector::DotProduct(ViewDir, LightDir), 0.0f);

        const float Score = DistanceFactor * IntensityFactor * DirectionFactor;
        if (Score > BestScore)
        {
            BestScore = Score;
            Result.LightingDirection = LightDir;
            Result.LightingIntensity = IntensityFactor;
            Result.bHasValidLight = true;
        }
    }

    // Fall back to the view direction when no light affects the actor.
    if (!Result.bHasValidLight)
    {
        Result.LightingDirection = View.GetViewDirection();
        Result.LightingIntensity = 0.1f;
    }

    return Result;
}
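The scoring heuristic can be verified outside the engine. A plain-C++ replica of the score (engine types replaced by small structs; distance attenuation is skipped for directional lights, whose proxy position is not a world location):

```cpp
#include <algorithm>
#include <cmath>

struct LVec { float x, y, z; };

static float LDot(LVec a, LVec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static LVec LNorm(LVec v)
{
    const float Len = std::sqrt(LDot(v, v));
    return (Len > 0.0f) ? LVec{v.x / Len, v.y / Len, v.z / Len} : LVec{0, 0, 0};
}

// Score = DistanceFactor * IntensityFactor * DirectionFactor.
float ScoreLight(LVec LightPos, LVec ActorPos, LVec ViewDir,
                 float Luminance, bool bDirectional, LVec DirectionalDir)
{
    const LVec ToActor = {ActorPos.x - LightPos.x, ActorPos.y - LightPos.y,
                          ActorPos.z - LightPos.z};
    const float Distance = std::sqrt(LDot(ToActor, ToActor));
    // Directional lights have no meaningful world position, so ignore distance.
    const float DistanceFactor = bDirectional ? 1.0f : 1.0f / (1.0f + Distance);
    const LVec LightDir = bDirectional
        ? LVec{-DirectionalDir.x, -DirectionalDir.y, -DirectionalDir.z}
        : LNorm(ToActor);
    const float DirectionFactor = std::max(LDot(ViewDir, LightDir), 0.0f);
    return DistanceFactor * Luminance * DirectionFactor;
}
```

For example, a point light one unit from the actor along the view direction scores half its luminance (DistanceFactor 0.5, DirectionFactor 1).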
4.4 Tier 1.1 Packing Shader (FNeuralPackCS_SH)
RWTexture2D<float4> OutTensor0; // channels 0-3
RWTexture2D<float4> OutTensor1; // channels 4-7
RWTexture2D<float4> OutTensor2; // channels 8-11
RWTexture2D<float4> OutTensor3; // channels 12-15

Texture2D<float4> NormalTex;   // GBufferA
Texture2D<float4> AlbedoTex;   // GBufferC (base color)
Texture2D<float4> GBufferBTex; // metallic / specular / roughness
Texture2D<float> DepthTex;

float4 StyleEmbedding;
float3 LightingVector;

[numthreads(8,8,1)]
void Main(uint3 DTid : SV_DispatchThreadID)
{
    // GBufferA stores the world-space normal encoded as N * 0.5 + 0.5
    float3 N = normalize(NormalTex[DTid.xy].xyz * 2.0f - 1.0f);
    float3 A = AlbedoTex[DTid.xy].rgb;

    // UE packs metallic (r), specular (g), roughness (b) into GBufferB
    float R = GBufferBTex[DTid.xy].b;
    float M = GBufferBTex[DTid.xy].r;
    float D = DepthTex[DTid.xy];

    OutTensor0[DTid.xy] = float4(N.x, N.y, N.z, A.r);
    OutTensor1[DTid.xy] = float4(A.g, A.b, R, M);
    OutTensor2[DTid.xy] = float4(D, StyleEmbedding.x, StyleEmbedding.y, StyleEmbedding.z);
    OutTensor3[DTid.xy] = float4(StyleEmbedding.w, LightingVector.x, LightingVector.y, LightingVector.z);
}
4.5 Tier 1.1 Inference Wrapper
class FNeuralLODInference_SH
{
public:
    FNeuralLODInference_SH(const FString& ModelPath);

    bool RunInferenceGPU(
        FRHITexture* T0,
        FRHITexture* T1,
        FRHITexture* T2,
        FRHITexture* T3,
        FRHIUnorderedAccessView* OutputUAV,
        int32 Width,
        int32 Height);

private:
    Ort::Env Env;
    TUniquePtr<Ort::Session> Session;
    Ort::SessionOptions SessionOptions;
};
4.6 Tier 1.1 Lighting Flow Diagram
┌─────────────────────────────┐
│ Unreal Scene │
│ (Lights, Actors, GI, etc) │
└─────────────┬──────────────┘
│
┌───────▼────────┐
│ FSceneView / │
│ FScene │
└───────┬────────┘
│
┌─────────────▼─────────────────────┐
│ SelectMostSignificantLight(...) │
│ - respects LightingChannels │
│ - scores lights by distance, │
│ intensity, direction │
└─────────────┬────────────────────┘
│
┌───────▼───────────────┐
│ LightingVector.xyz │
└───────┬───────────────┘
│
┌───────────────▼─────────────────────────┐
│ FNeuralPackCS_SH (compute) │
│ Inputs: G-buffer + Style + Lighting │
│ Output: 4x RGBA32F (16 channels) │
└───────────────┬────────────────────────┘
│
┌────────────▼─────────────┐
│ SH‑CNN (ONNX, GPU) │
│ Input: [1,16,H,W] │
│ Output: [1,3,H,W] │
└────────────┬─────────────┘
│
┌─────────────▼─────────────────┐
│ Composite into SceneColor │
└───────────────────────────────┘
4.7 Training‑Time Lighting Conditioning Strategy
- Render hi‑poly teacher shading under many lighting directions
- Render low‑poly G‑buffer + lighting vector
- Train SH‑CNN to map: f(G‑buffer, StyleEmbedding, LightingVector) → RGB_teacher
- Sample light directions uniformly over the hemisphere
- Vary intensity and color
- StyleEmbedding can be per‑material or per‑asset
- Training takes 1–4 hours on a single GPU
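Sampling light directions uniformly over the hemisphere is a standard two-variable mapping: with z = cos(theta) drawn uniformly, directions are uniform in solid angle. A small sketch (names illustrative):

```cpp
#include <algorithm>
#include <cmath>

struct Dir3 { float x, y, z; };

// Map two uniform samples u1, u2 in [0, 1) to a direction uniformly
// distributed over the upper (+Z) hemisphere: z = u1, phi = 2*pi*u2.
Dir3 SampleHemisphereUniform(float u1, float u2)
{
    const float Z = u1; // cos(theta), uniform in [0, 1)
    const float R = std::sqrt(std::max(0.0f, 1.0f - Z * Z));
    const float Phi = 2.0f * 3.14159265358979f * u2;
    return Dir3{R * std::cos(Phi), R * std::sin(Phi), Z};
}
```

Feeding u1, u2 from any uniform RNG yields unit-length directions with non-negative Z, suitable as training LightingVector values.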