Sparse Virtual Textures

Sparse Virtual Texturing is an approach to simulating very large textures using much less texture memory than they would require in full: only the texture data that is actually needed is downloaded, and a pixel shader maps from the large virtual texture into the actual physical texture.
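
The core mapping can be sketched on the CPU as follows. This is a toy illustration, not the talk's actual shader: the sizes, the struct, and the function names are all mine, and a real implementation would do this arithmetic per-pixel in the shader using a page-table texture.

```c
#include <assert.h>
#include <math.h>

/* Toy sizes (assumed for illustration): a 2048^2 virtual texture split
   into 128^2-texel pages (16x16 pages), cached in a 1024^2 physical
   texture (8x8 pages). */
#define VIRT_PAGES 16
#define PHYS_PAGES 8

typedef struct { int px, py; } PageEntry; /* physical page coordinates */

/* Translate a virtual UV in [0,1) to a physical UV in [0,1) -- the same
   arithmetic an SVT pixel shader performs via a page-table lookup. */
static void virt_to_phys(PageEntry table[VIRT_PAGES][VIRT_PAGES],
                         float u, float v, float *pu, float *pv)
{
    int page_x = (int)(u * VIRT_PAGES);   /* which virtual page */
    int page_y = (int)(v * VIRT_PAGES);
    PageEntry e = table[page_y][page_x];  /* page-table lookup */
    float fx = u * VIRT_PAGES - page_x;   /* fractional position in page */
    float fy = v * VIRT_PAGES - page_y;
    *pu = (e.px + fx) / PHYS_PAGES;       /* reassemble physical UV */
    *pv = (e.py + fy) / PHYS_PAGES;
}
```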

The technique could be used for very large textures, or simply for large quantities of smaller textures (by packing them into the large texture, or by using multiple page tables).

It was mostly inspired by John Carmack's descriptions of MegaTexturing in several public and private forums and emails. It may not be exactly the same as MegaTexture, but it's probably close.


A web forum is available to discuss this technique.


id Software whitepapers on related technologies


Attendees of GDC 2008 Talk

Looping in the pixel shader

After the talk someone (I didn't get his name) came up and talked to me about the possibility of doing some of the mipmap substitution by looping in the pixel shader, an idea which I had assumed was bad (I actually mentioned this assumption in the rough draft of the talk). But after some discussion it seemed like it might, maybe, be a good idea, because you can reduce some page table traffic in a way that I currently hack around, and the overhead might not be as bad as I originally imagined. If you were that person, please send me an email so I can credit you properly if I write up the discussion in more detail.
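
The idea, as I understand it, sketched in C: residency is modeled here as a flat per-mip array, which is my simplification; a real shader would query the page table at each mip level for the pixel's own page.

```c
#include <assert.h>

#define NUM_MIPS 8  /* toy mip-chain length */

/* Walk toward coarser mips until a resident page is found -- the loop
   that could run in the pixel shader instead of pre-substituting
   coarser-mip entries into the page table.  resident[m] says whether
   the page covering this pixel at mip m is in the cache; the coarsest
   mip is assumed to be always resident. */
static int find_resident_mip(const int resident[NUM_MIPS], int wanted_mip)
{
    int m = wanted_mip;
    while (m < NUM_MIPS - 1 && !resident[m])
        m++;
    return m;
}
```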

GDC 2008 Q&A

I'm not always that fast on my feet, so I thought of some better answers to some of the questions raised during Q&A.

Rendering into the SVT

I don't have a good answer to this question (naively, it requires changing where you're writing to, which APIs haven't generally supported, although maybe that got added recently and I missed it). However, there may be good answers to a different question: whatever it is that prompts you to think you want to do this in the first place.

For example, as I said at the talk, if you want to build your procedural texture pages by rendering into a texture, that's just fine. Just build them a block at a time, and no redirection is needed. I do it in software, but hardware is certainly feasible (and possibly more efficient), although you might run into texture memory issues if you do it by compositing bitmaps (which I think you want to do, since you want to rely on your artists, not some crazy programmatic procedural shading).
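
Building a block at a time can be sketched like this in software. The XOR pattern is just a stand-in for whatever compositing of artist-authored layers you actually do; the point is that each page is generated independently from its page coordinates, so no render-target redirection is needed.

```c
#include <assert.h>

#define PAGE_SIZE 128  /* texels per page edge (toy value) */

/* Fill one page of a procedural virtual texture in software.  Each page
   is addressed by its page coordinates and built independently.  The
   XOR pattern below is a placeholder for real layer compositing. */
static void build_page(unsigned char dst[PAGE_SIZE * PAGE_SIZE],
                       int page_x, int page_y)
{
    for (int y = 0; y < PAGE_SIZE; y++)
        for (int x = 0; x < PAGE_SIZE; x++) {
            int vx = page_x * PAGE_SIZE + x;  /* virtual-texel coords */
            int vy = page_y * PAGE_SIZE + y;
            dst[y * PAGE_SIZE + x] = (unsigned char)((vx ^ vy) & 255);
        }
}
```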

If the reason you want to do it is to add decals on the fly, it's very simple. At some point you'll flush a given page out of the cache, and then later need to download it again, and it'll still need to have the decal. So make adding the decal a procedural-texturing sort of process, even if the basic data is a disk-streamed bitmap; just always, for every page, after you've built or streamed it, overlay any decals needed, on a page-by-page basis. Now, when you want to add a decal dynamically, you can just invalidate the cache and hey, it just works (but less efficiently, since it rebuilds).

Or, if you care about that efficiency, you can go through all the downloaded pages and, if they intersect the decal, render the decal onto each such page, one page at a time. You will need to clip the decal to each target page, and I don't know how much that state change (viewport/scissors) costs; I assume not that much. This is exactly how it would already work if you were doing the procedural build in hardware anyway; but it's all extra code if you do the procedural build in software.
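
A sketch of the per-page decal clipping (coordinates in virtual texels, half-open rectangles; the names and sizes here are my own toy choices):

```c
#include <assert.h>

#define PAGE_SIZE 128  /* texels per page edge (toy value) */

typedef struct { int x0, y0, x1, y1; } Rect;  /* half-open, in texels */

/* Rectangle of virtual-texel space covered by page (px, py). */
static Rect page_rect(int px, int py)
{
    Rect r = { px * PAGE_SIZE, py * PAGE_SIZE,
               (px + 1) * PAGE_SIZE, (py + 1) * PAGE_SIZE };
    return r;
}

/* Clip the decal to one cached page; returns 0 if they don't intersect.
   Each page where this returns 1 gets the clipped decal rendered into
   it (via viewport/scissors set to the clipped rect). */
static int clip_to_page(Rect decal, int px, int py, Rect *out)
{
    Rect p = page_rect(px, py);
    Rect c = {
        decal.x0 > p.x0 ? decal.x0 : p.x0,
        decal.y0 > p.y0 ? decal.y0 : p.y0,
        decal.x1 < p.x1 ? decal.x1 : p.x1,
        decal.y1 < p.y1 ? decal.y1 : p.y1,
    };
    if (c.x0 >= c.x1 || c.y0 >= c.y1)
        return 0;  /* decal misses this page entirely */
    *out = c;
    return 1;
}
```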

If you have some OTHER reason to render into the SVT, I have no idea; working a block at a time seems like the obvious solution, but maybe in other cases there would be too much geometry and that wouldn't be acceptable.

What Hardware/API Changes Would Get Rid of the Hacks?

I said at the time that I don't think there's anything that needs to be done on new hardware to remove the hacks; I only use the hacks to handle old hardware that doesn't have some current features. However, you might also wonder what changes could help the performance of this technique overall. This forum thread has my thoughts on this problem.

Minor Issues with the GDC Talk

As I mentioned (I think), this is a tech demo. I'm not trying to ship a game with it, so there are obvious caveats about overgeneralizing from what I'm succeeding at to what is possible. (Hopefully id's tech5 is reasonable proof of what is possible.)

I decided not to mention this at the time, because I didn't want to scare anyone off, but this is actually the first pixel shader I've ever written. I'm not really a rendering guy these days; I wrote the original SVT tech demo last year to keep my hand vaguely in that game.

This means a few things. For one, I'm not really sure where the biggest performance issues are these days; I'm assuming you folks can take the idea and run with it to get the highest performance. For example, I totally forgot that the reason everyone loves DXT compression isn't the texture memory savings, but the texture sampling performance (due to bandwidth and caching). So the page table textures can be big bloated floats just fine, since the same few texels are sampled over and over and so stay cached, but packing the physical textures is crucial for performance. So mostly I'm just saying: if you felt like there were issues like this in the talk, sorry about that!
