Monday, February 2, 2026
indictment
i come not to bury php
license: public domain CC0
NOTE: expanded version here: https://claude.ai/public/artifacts/64cf67f1-6067-4187-b15f-fdf4402274ef
Design Document: A Modern PHP‑Inspired Web Runtime for Node + TypeScript
1. Introduction: The Spirit of PHP
PHP earned its place in web history not because it was elegant, but because it was immediately useful. Its defining virtues were:
- Zero‑friction iteration — edit a file, refresh the browser, see the result.
- Stateless execution — every request starts fresh, no global state leaks.
- Trivial deployment — copy a file to a server and it runs.
- Beginner‑friendly failure model — crashes affect only the current request.
- Massive accessibility — no build steps, no daemons, no containers.
These qualities created a development loop that felt like a REPL for the web. You didn’t “build” an app — you tinkered with it, live, with instant feedback.
But PHP’s strengths came bundled with limitations:
- No type safety
- Inconsistent standard library
- Weak module system
- Poor tooling
- Limited modern ergonomics
- No REPL, no notebook‑style exploration
- Primitive deployment safety
The goal of this project is to preserve the spirit of PHP while modernizing everything else, using Node + TypeScript as the foundation.
2. Vision: Expanding and Modernizing PHP’s Wins for Node + TypeScript
The core idea is simple:
Bring PHP’s frictionless iteration and deployment model into the TypeScript + Node ecosystem, enhanced with modern tooling, REPL workflows, and programmable safety.
This means:
- TypeScript as the first‑class language
- Node as the long‑lived host
- VM contexts as the per‑request sandbox
- A TS loader that eliminates build steps
- A local development loop that feels like a notebook
- A remote REPL for staging and production
- A sync‑based deployment model
- A programmable friction pipeline for safe deploys
- VSCode integration for diagnostics and deploy UX
- A batteries‑included web standard library
The result is a system that feels like:
- PHP’s simplicity
- Node’s ecosystem
- TypeScript’s safety
- Cloudflare Workers’ isolation model
- Jupyter’s iterative workflow
- Git’s extensibility
All fused into a single, coherent developer experience.
3. High‑Level Goals
3.1 Zero‑Friction Local Development
Local iteration should feel like breathing:
- tsweb dev starts everything
- Hot reload on file change
- Instant TS → JS transform
- Automatic context reload
- Browser auto‑refresh
- Inline VSCode diagnostics
- Local REPL for experimentation
No build step. No bundler. No config.
3.2 Safe, Structured Deployment
Deployment should be:
- sync‑based (like PHP)
- secure (key‑based auth)
- environment‑aware (staging vs prod)
- programmable (hook pipeline)
- deliberate (double confirmations, branch checks)
3.3 Stateless Request Execution
Each request runs in a fresh VM context:
- No global state leakage
- Deterministic behavior
- Easy debugging
- Safe remote REPL
3.4 Batteries‑Included Web Standard Library
A cohesive TS‑native stdlib:
- Request/response helpers
- Cookie/session utilities
- File uploads
- Routing
- DB connectors
- HTML templating
3.5 Canonical Setup Across All OSes
Everything should “just work”:
- npm package + CLI
- optional Docker image
- VSCode extension
- minimal configuration
4. Architecture Overview
┌──────────────────────────────────────────┐
│ tsweb CLI │
│ dev, deploy, repl, logs, diff │
└───────────────────┬──────────────────────┘
│
▼
┌──────────────────────────────────────────┐
│ Environment & Friction Engine │
│ staging/prod rules, hooks, confirmations │
└───────────────────┬──────────────────────┘
│
▼
┌──────────────────────────────────────────┐
│ Secure Deployment Channel │
│ signed payloads, key-based auth │
└───────────────────┬──────────────────────┘
│
▼
┌──────────────────────────────────────────┐
│ Remote TS Runtime │
│ VM pool, TS loader, hot reload, REPL │
└──────────────────────────────────────────┘
5. Core Components
5.1 TypeScript Loader
A fast, production‑grade TS loader:
- Uses swc/esbuild for TS → JS
- Caches aggressively
- Works inside VM contexts
- Supports hot reload
- Integrates with VSCode diagnostics
This eliminates the build step entirely.
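To make this concrete, here is a minimal sketch of a loader core built on esbuild's transformSync plus a content-hash cache. The function name compileTs, the cache policy, and the CommonJS output choice are illustrative assumptions, not the actual tsweb API.
// Hypothetical loader core: TypeScript source -> cached JS, ready for a VM context.
import { transformSync } from "esbuild";
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

const cache = new Map<string, string>(); // content hash -> compiled JS

export function compileTs(filePath: string): string {
  const source = readFileSync(filePath, "utf8");
  const key = createHash("sha256").update(source).digest("hex");
  const cached = cache.get(key);
  if (cached) return cached; // unchanged source: skip the transform entirely

  const { code } = transformSync(source, {
    loader: "ts",
    format: "cjs",        // CommonJS so a simple module shim works inside the VM context
    sourcemap: "inline",  // stack traces keep pointing at the .ts source
    target: "node20",
  });
  cache.set(key, code);
  return code;
}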
5.2 VM Context Pool
Node’s VM module provides per‑request isolation:
- Fresh global scope per request
- Deterministic teardown
- No state leakage
- Optional snapshotting for speed
This recreates PHP’s statelessness.
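As a rough illustration of the idea using Node's built-in vm module (pooling and snapshotting are omitted, and the handle() export convention is an assumption):
// Sketch: execute one request in a throwaway VM context (PHP-style statelessness).
import vm from "node:vm";

export function runRequest(compiledJs: string, requestData: unknown): unknown {
  // Fresh global object per request, so nothing can leak between requests.
  const moduleShim = { exports: {} as Record<string, unknown> };
  const sandbox = { module: moduleShim, exports: moduleShim.exports, console };
  const context = vm.createContext(sandbox);

  new vm.Script(compiledJs, { filename: "handler.ts" }).runInContext(context, {
    timeout: 5_000, // a runaway handler only hurts its own request
  });

  const handle = moduleShim.exports.handle;
  if (typeof handle !== "function") throw new Error("handler must export handle()");
  return (handle as (r: unknown) => unknown)(requestData);
}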
5.3 Web Standard Library
A cohesive, batteries‑included API:
- Request, Response (WHATWG)
- Cookies, sessions
- File uploads
- Routing
- DB connectors
- HTML templating
- JSON helpers
This replaces Node’s low‑level primitives with something closer to PHP’s ergonomics.
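For a feel of the intended ergonomics, a hypothetical single-file handler using the standard WHATWG Request and Response types might look like this (the handle() convention is an assumption):
// hello.ts: one file, one endpoint, PHP-style but typed end to end.
export function handle(req: Request): Response {
  const name = new URL(req.url).searchParams.get("name") ?? "world";
  return new Response(`<h1>Hello, ${name}!</h1>`, {
    headers: { "content-type": "text/html; charset=utf-8" },
  });
}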
5.4 Local Development Loop
tsweb dev provides:
- File watcher
- Hot reload
- Automatic TS compilation
- Browser auto‑refresh
- Local REPL
- Inline VSCode diagnostics
This is the modern equivalent of “edit file → refresh browser.”
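A stripped-down illustration of the watch half of that loop, using Node's built-in fs.watch; the browser auto-refresh (for example, a WebSocket "reload" message) is only noted in a comment, and the directory layout is assumed.
// Minimal watch loop: mark changed TypeScript files as dirty so the next
// request recompiles them; a real dev server would also push a browser reload.
import { watch } from "node:fs";

const dirtyFiles = new Set<string>();

watch("./app", { recursive: true }, (_event, filename) => {
  if (typeof filename === "string" && filename.endsWith(".ts")) {
    dirtyFiles.add(filename);
    console.log(`[tsweb dev] changed: ${filename}`);
  }
});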
5.5 Remote REPL
A secure REPL for staging and production:
- Evaluate code in isolated contexts
- Inspect logs
- Run smoke tests
- Debug issues
- Validate migrations
This is a superpower PHP never had.
5.6 Deployment System
Deployments are:
- sync‑based (diff of changed files)
- signed (key‑based auth)
- environment‑aware
- safe (friction pipeline)
- fast (no build step)
Deployment flow:
- Compute diff
- Run friction hooks
- Sign payload
- Upload to server
- Server verifies signature
- Server reloads VM contexts
- Health checks
- Swap traffic
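A sketch of the "compute diff" and "sign payload" steps, assuming a JSON manifest of file hashes and an Ed25519 deploy key in PEM form; the manifest format and function names are illustrative, not the actual wire protocol.
// Illustrative diff + signing steps of the deploy flow (manifest format is an assumption).
import { createHash, createPrivateKey, sign } from "node:crypto";
import { readFileSync } from "node:fs";

export function buildManifest(files: string[]): string {
  const entries = files.map((path) => ({
    path,
    sha256: createHash("sha256").update(readFileSync(path)).digest("hex"),
  }));
  return JSON.stringify(entries); // the server diffs this against what it already has
}

export function signManifest(manifest: string, keyPath: string): string {
  const key = createPrivateKey(readFileSync(keyPath)); // e.g. an Ed25519 deploy key
  return sign(null, Buffer.from(manifest), key).toString("base64"); // Ed25519: digest must be null
}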
5.7 Programmable Friction Pipeline
Friction is a first‑class concept.
Each environment defines a pipeline of hooks:
{
  "environments": {
    "staging": {
      "friction": ["confirm_environment", "show_diff"]
    },
    "prod": {
      "friction": [
        "confirm_environment",
        "show_diff",
        "require_clean_working_tree",
        "require_on_main",
        "double_confirm",
        "type_environment_name"
      ]
    }
  }
}
Hooks are just TypeScript functions.
Teams can add:
- Slack notifications
- Jira ticket checks
- Peer approval
- Deployment windows
- Custom validations
This prevents accidental deploys without slowing down intentional ones.
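For example, a custom hook might look like the sketch below; the FrictionContext shape and the ok/reason return convention are assumptions made for illustration, not the real hook API.
// Hypothetical friction hook: block prod deploys outside a deployment window.
interface FrictionContext {
  environment: string;      // "staging" | "prod"
  branch: string;
  changedFiles: string[];
}

type FrictionHook = (ctx: FrictionContext) => Promise<{ ok: boolean; reason?: string }>;

export const deploymentWindow: FrictionHook = async (ctx) => {
  if (ctx.environment !== "prod") return { ok: true };
  const hour = new Date().getUTCHours();
  const inWindow = hour >= 9 && hour < 16; // e.g. business hours only
  return inWindow
    ? { ok: true }
    : { ok: false, reason: "prod deploys are only allowed 09:00-16:00 UTC" };
};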
6. Developer Workflow
6.1 Local Development
npx tsweb dev
- Edit files
- Save
- Browser reloads
- Errors appear in VSCode
- REPL available
6.2 Deploy to Staging
npx tsweb deploy staging
- Light friction
- Diff preview
- Single confirmation
6.3 Remote Debugging
npx tsweb repl staging
6.4 Deploy to Production
npx tsweb deploy prod
- Heavy friction
- Branch enforcement
- Clean working tree required
- Double confirmation
- Type environment name
7. Why This System Matters
This design:
- preserves PHP’s magic
- fixes PHP’s weaknesses
- leverages Node’s ecosystem
- embraces TypeScript’s safety
- integrates with modern tooling
- supports REPL‑driven development
- enforces safe, deliberate deployments
- remains simple, predictable, and fast
It’s not a framework.
It’s not a serverless platform.
It’s not a bundler.
It’s a developer experience layer that makes building web apps feel effortless again — but without sacrificing safety, structure, or modern ergonomics.
Sunday, February 1, 2026
gigo
anti-progress bar
there's just about nothing more teeth grinding than ui ux that sucks with respect to progress indicators.
the worst is mostly anything ever done in windows.
so much animation and shit, but so little actual information or context or help or, you know, shit that actually works.
complexity
patents
memcpy
cnn Tier1 full
license: public domain CC0
[Refined & Expanded: https://claude.ai/public/artifacts/1a2fe84c-5b93-4bfc-bd4a-55e8a014cc43]
Neural LOD Booster — Tier 1.0 + Tier 1.1 Design Document
Mesh‑Conditioned Neural Appearance Layer for Unreal Engine 5.7
0. Overview
Neural LOD Booster (NLB) is a two‑tier neural appearance system designed to enhance low‑poly meshes using lightweight CNNs running entirely on the GPU inside Unreal Engine 5.7.
We define two concrete tiers:
Tier 1.0 — Pretrained Style Transfer CNN (No Training Required)
A drop‑in, off‑the‑shelf neural stylization pass.
- Uses a pretrained ONNX style‑transfer CNN
- Requires no training
- Applies stylized shading to low‑poly meshes
- Uses a simple RGB input (albedo + simple lighting)
- Runs as a GPU compute pass
- Fully turn‑key
Tier 1.1 — SH‑CNN Neural Shading (Lighting‑Aware, Requires Training)
A lighting‑aware neural shading model.
- Requires Tier 1.0 infrastructure to be working
- Adds a custom SH‑CNN ONNX model
- Uses full G‑buffer + style embedding + lighting vector
- Performs neural shading conditioned on lighting
- Requires a small training pipeline
- Produces high‑fidelity shading from low‑poly meshes
1. Plugin Structure (Shared Across Tiers)
Engine/Plugins/Runtime/NeuralLOD/
    NeuralLOD.uplugin
    Content/NeuralModels/
        NeuralLOD_StyleTransfer_Default.onnx
        NeuralLOD_SHCNN_Default.onnx
    Source/NeuralLOD/
        NeuralLOD.Build.cs
        Public/
            NeuralLODInference.h
            NeuralLODComponent.h
            NeuralLODPass_Style.h
            NeuralLODPass_SH.h
        Private/
            NeuralLODInference.cpp
            NeuralLODComponent.cpp
            NeuralLODPass_Style.cpp
            NeuralLODPass_SH.cpp
            NeuralLODExtension_Style.cpp
            NeuralLODExtension_SH.cpp
2. UnrealBuildTool Setup (ONNX Runtime + DirectML)
using UnrealBuildTool;
using System.IO;
public class NeuralLOD : ModuleRules
{
    public NeuralLOD(ReadOnlyTargetRules Target) : base(Target)
    {
        PCHUsage = PCHUsageMode.UseExplicitOrSharedPCHs;

        PublicIncludePaths.Add(Path.Combine(ModuleDirectory, "ThirdParty/ONNXRuntime/Include"));

        string LibPath = Path.Combine(ModuleDirectory, "ThirdParty/ONNXRuntime/Lib");
        string BinPath = Path.Combine(ModuleDirectory, "ThirdParty/ONNXRuntime/Bin");

        PublicAdditionalLibraries.Add(Path.Combine(LibPath, "onnxruntime.lib"));
        RuntimeDependencies.Add(Path.Combine(BinPath, "onnxruntime.dll"));

        PublicDefinitions.Add("USE_ONNXRUNTIME=1");

        PublicDependencyModuleNames.AddRange(new string[] {
            "Core", "CoreUObject", "Engine",
            "RenderCore", "RHI"
        });
    }
}
3. Tier 1.0 — Pretrained Style Transfer CNN
3.1 Model
Model name:
NeuralLOD_StyleTransfer_Default.onnx
Input: [1, 3, H, W] (RGB)
Output: [1, 3, H, W] (stylized RGB)
This is a Johnson‑style feed‑forward style transfer CNN.
3.2 Download Instructions
Example PowerShell:
$uri = "https://example.com/models/NeuralLOD_StyleTransfer_Default.onnx"
$out = "NeuralLOD_StyleTransfer_Default.onnx"
Invoke-WebRequest -Uri $uri -OutFile $out
Move-Item $out "Engine/Plugins/Runtime/NeuralLOD/Content/NeuralModels/"
3.3 Tier 1.0 Input Packing Shader
FNeuralPackCS_Style.hlsl
RWTexture2D<float4> OutRGB;
Texture2D<float4> AlbedoTex;
Texture2D<float4> NormalTex;
float3 LightDir;
[numthreads(8,8,1)]
void Main(uint3 DTid : SV_DispatchThreadID)
{
    float3 N = normalize(NormalTex[DTid.xy].xyz);
    float3 A = AlbedoTex[DTid.xy].rgb;
    float NdotL = saturate(dot(N, -LightDir));
    float3 Lit = A * (0.2 + 0.8 * NdotL);
    OutRGB[DTid.xy] = float4(Lit, 1.0);
}
3.4 Tier 1.0 Inference Wrapper
class FNeuralLODInference_Style
{
public:
    FNeuralLODInference_Style(const FString& ModelPath);

    bool RunInferenceGPU(
        FRHITexture* InputTexture,
        FRHIUnorderedAccessView* OutputUAV,
        int32 Width,
        int32 Height);

private:
    Ort::Env Env;
    TUniquePtr<Ort::Session> Session;
    Ort::SessionOptions SessionOptions;
};
3.5 Tier 1.0 Render Pass
void FNeuralLODPass_Style::AddPass(
    FRDGBuilder& GraphBuilder,
    const FSceneView& View,
    const FSceneTextures& SceneTextures)
{
    const FIntPoint Resolution = SceneTextures.SceneColor->Desc.Extent;

    FRDGTextureDesc Desc = FRDGTextureDesc::Create2D(
        Resolution, PF_A32B32G32R32F,
        FClearValueBinding::None,
        TexCreate_ShaderResource | TexCreate_UAV);

    FRDGTextureRef InputRGB = GraphBuilder.CreateTexture(Desc, TEXT("NeuralLOD_Style_Input"));

    // Pack RGB
    {
        auto* Params = GraphBuilder.AllocParameters<FNeuralPackCS_Style::FParameters>();
        Params->AlbedoTex = SceneTextures.GBufferB;
        Params->NormalTex = SceneTextures.GBufferA;
        Params->OutRGB = GraphBuilder.CreateUAV(InputRGB);
        Params->LightDir = FVector3f(0.3f, 0.5f, -0.8f);

        TShaderMapRef<FNeuralPackCS_Style> CS(GetGlobalShaderMap(GMaxRHIFeatureLevel));
        FComputeShaderUtils::AddPass(
            GraphBuilder,
            RDG_EVENT_NAME("NeuralLOD_Style_PackRGB"),
            CS, Params,
            FIntVector(Resolution.X / 8, Resolution.Y / 8, 1));
    }

    // Run inference
    {
        FRHITexture* InputRHI = InputRGB->GetRHI();
        FRHIUnorderedAccessView* OutputUAV =
            SceneTextures.SceneColor->GetRHI()->GetTexture2D()->GetOrCreateUnorderedAccessView();

        Inference->RunInferenceGPU(InputRHI, OutputUAV, Resolution.X, Resolution.Y);
    }
}
4. Tier 1.1 — SH‑CNN Neural Shading (Lighting‑Aware)
Requires Tier 1.0 infrastructure to be implemented and working.
4.1 Model
Model name:
NeuralLOD_SHCNN_Default.onnx
Input: [1, 16, H, W]
Output: [1, 3, H, W]
Channel layout:
| Channels | Description |
|---|---|
| 0–2 | Normal.xyz |
| 3–5 | Albedo.rgb |
| 6 | Roughness |
| 7 | Metalness |
| 8 | Depth |
| 9–12 | StyleEmbedding[0..3] |
| 13–15 | LightingVector.xyz |
4.2 Download Instructions
$uri = "https://example.com/models/NeuralLOD_SHCNN_Default.onnx"
$out = "NeuralLOD_SHCNN_Default.onnx"
Invoke-WebRequest -Uri $uri -OutFile $out
Move-Item $out "Engine/Plugins/Runtime/NeuralLOD/Content/NeuralModels/"
4.3 Most‑Significant Light Selector (C++)
struct FNeuralLightingContext
{
    FVector LightingDirection = FVector(0, 0, 1);
    float LightingIntensity = 0.0f;
    bool bHasValidLight = false;
};

static bool DoesLightAffectActor(
    const FLightSceneProxy* LightProxy,
    const FLightingChannels& ActorChannels)
{
    FLightingChannels LightChannels = LightProxy->GetLightingChannelMask();

    return (LightChannels.bChannel0 && ActorChannels.bChannel0) ||
           (LightChannels.bChannel1 && ActorChannels.bChannel1) ||
           (LightChannels.bChannel2 && ActorChannels.bChannel2);
}

FNeuralLightingContext SelectMostSignificantLight(
    const FScene* Scene,
    const FSceneView& View,
    const AActor* Actor)
{
    FNeuralLightingContext Result;

    if (!Scene || !Actor)
        return Result;

    const FLightingChannels ActorChannels = Actor->GetLightingChannels();
    const FVector ActorPosition = Actor->GetActorLocation();

    float BestScore = 0.0f;

    for (const FLightSceneInfoCompact& LightInfo : Scene->Lights)
    {
        const FLightSceneInfo* Light = LightInfo.LightSceneInfo;
        const FLightSceneProxy* Proxy = Light->Proxy;

        if (!DoesLightAffectActor(Proxy, ActorChannels))
            continue;

        const FVector LightPos = Proxy->GetPosition();
        const FVector ToActor = ActorPosition - LightPos;
        const float Distance = ToActor.Size();
        const float DistanceFactor = 1.0f / (1.0f + Distance);
        const float IntensityFactor = Proxy->GetColor().GetLuminance();

        FVector LightDir = (Proxy->GetLightType() == LightType_Directional)
            ? -Proxy->GetDirection()
            : ToActor.GetSafeNormal();

        const FVector ViewDir = View.ViewMatrices.GetViewDirection();
        const float DirectionFactor = FMath::Max(FVector::DotProduct(ViewDir, LightDir), 0.0f);

        const float Score = DistanceFactor * IntensityFactor * DirectionFactor;

        if (Score > BestScore)
        {
            BestScore = Score;
            Result.LightingDirection = LightDir;
            Result.LightingIntensity = IntensityFactor;
            Result.bHasValidLight = true;
        }
    }

    if (!Result.bHasValidLight)
    {
        Result.LightingDirection = View.ViewMatrices.GetViewDirection();
        Result.LightingIntensity = 0.1f;
    }

    return Result;
}
4.4 Tier 1.1 Packing Shader (FNeuralPackCS_SH)
RWTexture2D<float4> OutTensor0; // 0-3
RWTexture2D<float4> OutTensor1; // 4-7
RWTexture2D<float4> OutTensor2; // 8-11
RWTexture2D<float4> OutTensor3; // 12-15
Texture2D<float4> NormalTex;
Texture2D<float4> AlbedoTex;
Texture2D<float4> GBufferATex;
Texture2D<float4> GBufferBTex;
Texture2D<float> DepthTex;
float4 StyleEmbedding;
float3 LightingVector;
[numthreads(8,8,1)]
void Main(uint3 DTid : SV_DispatchThreadID)
{
    float3 N = NormalTex[DTid.xy].xyz;
    float3 A = AlbedoTex[DTid.xy].rgb;
    float R = GBufferATex[DTid.xy].a;
    float M = GBufferBTex[DTid.xy].a;
    float D = DepthTex[DTid.xy];

    OutTensor0[DTid.xy] = float4(N.x, N.y, N.z, A.r);
    OutTensor1[DTid.xy] = float4(A.g, A.b, R, M);
    OutTensor2[DTid.xy] = float4(D, StyleEmbedding.x, StyleEmbedding.y, StyleEmbedding.z);
    OutTensor3[DTid.xy] = float4(StyleEmbedding.w, LightingVector.x, LightingVector.y, LightingVector.z);
}
4.5 Tier 1.1 Inference Wrapper
class FNeuralLODInference_SH
{
public:
    FNeuralLODInference_SH(const FString& ModelPath);

    bool RunInferenceGPU(
        FRHITexture* T0,
        FRHITexture* T1,
        FRHITexture* T2,
        FRHITexture* T3,
        FRHIUnorderedAccessView* OutputUAV,
        int32 Width,
        int32 Height);

private:
    Ort::Env Env;
    TUniquePtr<Ort::Session> Session;
    Ort::SessionOptions SessionOptions;
};
4.6 Tier 1.1 Lighting Flow Diagram
┌─────────────────────────────┐
│ Unreal Scene │
│ (Lights, Actors, GI, etc) │
└─────────────┬──────────────┘
│
┌───────▼────────┐
│ FSceneView / │
│ FScene │
└───────┬────────┘
│
┌─────────────▼─────────────────────┐
│ SelectMostSignificantLight(...) │
│ - respects LightingChannels │
│ - scores lights by distance, │
│ intensity, direction │
└─────────────┬────────────────────┘
│
┌───────▼───────────────┐
│ LightingVector.xyz │
└───────┬───────────────┘
│
┌───────────────▼─────────────────────────┐
│ FNeuralPackCS_SH (compute) │
│ Inputs: G-buffer + Style + Lighting │
│ Output: 4x RGBA32F (16 channels) │
└───────────────┬────────────────────────┘
│
┌────────────▼─────────────┐
│ SH‑CNN (ONNX, GPU) │
│ Input: [1,16,H,W] │
│ Output: [1,3,H,W] │
└────────────┬─────────────┘
│
┌─────────────▼─────────────────┐
│ Composite into SceneColor │
└───────────────────────────────┘
4.7 Training‑Time Lighting Conditioning Strategy
- Render hi‑poly teacher shading under many lighting directions
- Render low‑poly G‑buffer + lighting vector
- Train SH‑CNN to map:
  \[ f(\text{G-buffer}, \text{StyleEmbedding}, \text{LightingVector}) \rightarrow \text{RGB}_{\text{teacher}} \]
- Sample light directions uniformly over the hemisphere
- Vary intensity and color
- StyleEmbedding can be per‑material or per‑asset
- Training takes 1–4 hours on a single GPU
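Spelling out the objective implied by the steps above (the notation is mine, not from a reference implementation): with θ the SH-CNN weights and ω a light direction sampled uniformly over the hemisphere H, training minimizes a per-pixel reconstruction loss against the hi-poly teacher, e.g.
\[ \mathcal{L}(\theta) = \mathbb{E}_{\omega \sim \mathcal{H}} \Big[ \big\lVert f_\theta(\text{G-buffer}, \text{StyleEmbedding}, \omega) - \text{RGB}_{\text{teacher}}(\omega) \big\rVert_1 \Big] \]
with an L2 norm as an alternative.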
cnn -> nerf
license: public domain CC0
Neural LOD Booster (NLB)
A Neural Appearance Layer for Real‑Time Engines
Technical Design Document — Unified Revision (with ObjectField‑Based NeRF Pipeline)
1. Introduction
This document describes a two‑tier neural rendering architecture:
Tier 1 — Mesh‑Conditioned Neural LOD Booster (NLB)
A practical, shippable system where tiny per‑model CNNs reconstruct high‑fidelity shading from low‑poly meshes.
Tier 2 — Neural Scene Graph Renderer (NSGR)
A future‑facing extension where a NeRF‑like model renders the scene directly from structured object data (ObjectFields), eliminating the need for meshes on the GPU.
The core principle:
NLB is not a geometry system. It is an appearance system that sits on top of whatever low‑poly geometry representation the engine uses.
Tier 1 uses meshes as the representation.
Tier 2 uses ObjectFields instead.
2. Tier 1 — Mesh‑Conditioned Neural LOD Booster (Baseline System)
(This section remains unchanged — it’s the practical, shippable system.)
3. Tier 2 — Neural Scene Graph Renderer (NSGR)
A future extension that eliminates meshes from the GPU entirely
Tier 2 extends the Mesh+CNN system into a scene‑level neural renderer that uses a NeRF‑like model conditioned on structured object data.
The key shift:
Tier 1 is mesh‑conditioned neural shading.
Tier 2 is object‑conditioned neural rendering.
Meshes disappear from the rendering path.
3.1 Motivation
Pure NeRF worlds are beautiful but unusable for games because they lack:
- object identity
- physics
- determinism
- editing
- consistency
We fix this by inserting a structured semantic layer.
3.2 Object Fields: The Missing Middle Layer
We introduce a universal, engine‑friendly representation:
ObjectField = {
    type,
    position,
    orientation,
    boundingVolume,
    material/styleEmbedding,
    physicalProperties,
    optional lowPolyProxy (for physics only)
}
This gives:
- physics engines → colliders, rigid bodies
- gameplay systems → semantic identity
- neural renderer → appearance conditioning
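As one concrete, engine-agnostic way to pin that record down, here is a typed sketch; every field type (and the extra id field) is an assumption, since the design above deliberately leaves them open.
// One possible typing of an ObjectField record (all field types are assumptions).
interface ObjectField {
  id: string;                                    // stable identity for gameplay and sync
  type: string;                                  // e.g. "cabin", "chimney"
  position: [number, number, number];
  orientation: [number, number, number, number]; // quaternion
  boundingVolume: { kind: "box" | "sphere"; extents: [number, number, number] };
  styleEmbedding: number[];                      // latent appearance code for the renderer
  physicalProperties?: { mass: number; friction: number };
  lowPolyProxy?: string;                         // optional physics-only mesh reference
}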
3.3 Nuance: Tier 2 does not require meshes on the GPU
This is the crucial distinction.
Tier 1
- Needs low‑poly meshes on GPU
- Mesh → G‑buffer → CNN → shading
Tier 2
- Does not need meshes at all
- ObjectFields → NeRF‑like renderer → pixels
- Physics uses bounding volumes, not meshes
- Rendering is fully neural
Meshes only exist offline (for training) or CPU‑side for physics if needed.
3.4 NeRF as a Conditional Neural Renderer
The NeRF‑like model becomes a giant CNN that renders the scene:
\[ f(x, d \mid \text{ObjectFields}) \rightarrow (\text{color}, \text{density}) \]
It no longer hallucinates the world.
It renders the structured world you give it.
This eliminates:
- view inconsistency
- geometry drift
- hallucinations
And preserves:
- neural shading
- neural detail
- neural style
- neural lighting
3.5 The ObjectField‑Based NeRF Pipeline (Expanded Design)
The ObjectField‑based NeRF pipeline has three major stages:
Stage 1 — Text → ObjectFields (Semantic World Generation)
NeRFs cannot infer objects.
So we introduce a companion model:
Text‑to‑World Object Model (TWOM)
A lightweight generative model that converts high‑level descriptions into structured ObjectFields.
Example:
"small wooden cabin with a stone chimney"
→
[
    { type: "cabin", position: (…), orientation: (…), boundingVolume: (…), material: "wood", styleEmbedding: (…) },
    { type: "chimney", position: (…), orientation: (…), boundingVolume: (…), material: "stone", styleEmbedding: (…) }
]
TWOM can be implemented as:
- a scene‑graph generator
- a diffusion‑based object placer
- a transformer trained on scene descriptions
- a hybrid symbolic + neural system
Output: A complete list of ObjectFields.
Stage 2 — ObjectFields → Physics + Gameplay
ObjectFields are fed into the physics and gameplay systems:
Physics Engine
- Uses boundingVolume for collisions
- Updates transforms
- Handles rigid bodies, joints, constraints
Gameplay Systems
- Use type, material, and semantic ID
- Attach scripts, AI, interactions
World State
- Stored as a dynamic list of ObjectFields
- Updated every frame
This ensures:
- determinism
- editability
- multiplayer sync
- gameplay consistency
Stage 3 — ObjectFields → NeRF‑Style Renderer
The NeRF‑like renderer consumes ObjectFields as conditioning input.
3.5.1 Conditioning Mechanisms
Each ObjectField provides:
- a latent style embedding
- a material embedding
- a transform
- a bounding region
The renderer uses these to determine:
- which objects influence each ray
- how materials should look
- how lighting interacts with surfaces
3.5.2 Rendering Process
For each pixel:
- Cast a ray
- Query relevant ObjectFields
- Inject object embeddings into the NeRF network
- Evaluate neural radiance + density
- Composite results
- Output final color
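A deliberately rough sketch of that per-pixel loop, with the network stubbed out; the names, the fixed-step march, and the bounding-volume test are illustrative assumptions, not a working renderer.
// Illustrative per-ray conditioning loop (not a working renderer).
type Vec3 = [number, number, number];

interface FieldEntry {                                // minimal stand-in for an ObjectField
  styleEmbedding: number[];
  intersects: (origin: Vec3, dir: Vec3) => boolean;   // e.g. a ray/AABB test
}

// Stub for the NeRF-style network; a real system would evaluate the trained model here.
function evalRadianceField(_p: Vec3, _d: Vec3, _embeddings: number[][]): { color: Vec3; density: number } {
  return { color: [0.5, 0.5, 0.5], density: 0.1 };
}

function shadePixel(origin: Vec3, dir: Vec3, scene: FieldEntry[]): Vec3 {
  // Steps 1-2: cast the ray and keep only the ObjectFields whose bounds it touches.
  const relevant = scene.filter((o) => o.intersects(origin, dir));
  const embeddings = relevant.map((o) => o.styleEmbedding);

  // Steps 3-5: march the ray, injecting the embeddings at every sample, and composite.
  let color: Vec3 = [0, 0, 0];
  let transmittance = 1.0;
  for (let t = 1; t <= 64; t++) {
    const p: Vec3 = [origin[0] + dir[0] * t, origin[1] + dir[1] * t, origin[2] + dir[2] * t];
    const sample = evalRadianceField(p, dir, embeddings);
    const alpha = 1 - Math.exp(-sample.density); // standard volume-rendering compositing
    color = [
      color[0] + transmittance * alpha * sample.color[0],
      color[1] + transmittance * alpha * sample.color[1],
      color[2] + transmittance * alpha * sample.color[2],
    ];
    transmittance *= 1 - alpha;
  }

  // Step 6: final pixel color.
  return color;
}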
3.5.3 No Meshes Required
The renderer does not need:
- vertices
- triangles
- UVs
- tangents
- topology
It only needs:
- object embeddings
- transforms
- bounding volumes
3.6 Why This Architecture Works
This division of labor ensures:
Physics works
Because objects have bounding volumes.
Gameplay works
Because objects have identity and transforms.
Rendering is neural
Because the NeRF consumes ObjectFields.
No hallucinations
Because the renderer does not invent geometry.
Editing is possible
Because ObjectFields are explicit and modifiable.
This makes Tier 2 a game‑ready neural rendering architecture, not a black‑box generative scene.
4. Summary
Tier 1 — Mesh‑Conditioned Neural LOD Booster
- Requires low‑poly meshes on GPU
- CNN reconstructs hi‑fi shading
- Works on all hardware
- Practical and shippable
Tier 2 — Neural Scene Graph Renderer
- Requires no meshes on GPU
- NeRF‑like renderer consumes ObjectFields
- Physics uses bounding volumes
- Fully neural rendering
- Eliminates hallucinations
- Provides scene‑level neural shading
- Uses TWOM to convert text → ObjectFields
Together, they form a unified neural appearance architecture that complements — not replaces — existing geometry systems like Nanite, voxels, SDFs, splats, and neural fields.