The idea is to manage higher-LOD terrain "patches" in a quadtree, and select the ones to render based on a notion of "screen error" that depends on view distance and the geometric error between chunks.
So far, so good: I have my mesh generation taken care of, I manage the chunks (each LOD is twice as detailed as the previous, lower one) in a quadtree, and I have the terrain renderer up and running. For now, I simply load all chunks as different meshes and select which ones to render by modifying the mesh. The difference between them is translation, scale, and the "height" of the vertices. What is the most efficient way of rendering these chunks?
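For reference, the chunk bookkeeping described above can be sketched as a plain quadtree node; the class and field names here are illustrative, not taken from the actual renderer:

```javascript
// Minimal quadtree chunk node (a sketch, not the actual renderer's code).
// Each level doubles the detail of its parent; a chunk stores its
// world-space origin, its edge length, its LOD level, and up to four
// children covering its quadrants.
class Chunk {
  constructor(x, z, size, level) {
    this.x = x;          // world-space origin of the chunk
    this.z = z;
    this.size = size;    // edge length in world units
    this.level = level;  // 0 = root (coarsest)
    this.children = null;
  }

  // Split into 4 children, each half the size (i.e. twice the detail).
  split() {
    const h = this.size / 2;
    this.children = [
      new Chunk(this.x,     this.z,     h, this.level + 1),
      new Chunk(this.x + h, this.z,     h, this.level + 1),
      new Chunk(this.x,     this.z + h, h, this.level + 1),
      new Chunk(this.x + h, this.z + h, h, this.level + 1),
    ];
    return this.children;
  }
}
```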
Treating them all as discrete meshes does not seem very optimal. One approach I've tried is hacking my way into the immediate rendering mode: I have the renderer invoke a callback into my object, and have looked at the MarchingCubes code, but have not gotten things to work quite yet. Any thoughts are welcome!
Should I continue to pursue the immediate rendering (could it be made to work with different textures?), or does that render the whole point moot? Currently I'm hitting a frame rate of 50 fps with only the terrain meshes: 20 of them (so 20 draw calls), vertices each, rendered as wireframe. Other demos which seem much more complex render smoothly at 60 fps, so there must be something I can do better here ;). Having lots of meshes initialized and managing them with the visible attribute seems wasteful.
Moving around is slow at first as the terrain is procedurally generated, but after the tiles are created, moving around is nice and quick. I realized the flaw of my method: it is a recursive pipeline where LoD chunks are organized in a quadtree and selected based on view distance. Managing the visible attribute also meant making sure everything starts out invisible and only the relevant chunks are marked for rendering.
This first step, marking everything as invisible, meant traversing the entire quadtree to reset the attributes. I've hacked my way into the immediate rendering pass, so each object is rendered as needed without having to maintain these flags, providing the speed boost I was looking for! Do you perform any sort of out-of-core rendering for your terrain engine, where you only maintain a subset of your tiles on the GPU?
Something in ANGLE at the time was making geometry creation slow after a certain number of polygons had been created, so I ended up taking that code out and leaving everything in WebGL.
Hi NINE78, I'm also really interested in chunked LOD terrain. I'm just starting to work on my own system, but then I discovered this post. I would be interested too.

I don't have anything online to share really at this point, as the terrain renderer is running in a closed sandbox for now. However, let me know if there is anything I can help you out with. The main obstacle for me was how to do this in Three.js. Check out the pull request, which is the solution I came up with: by having a callback invoked on your main terrain object, it is now quite straightforward to render chunks from the root down as you need them.
I got most of my information from the paper "Rendering Very Large, Very Detailed Terrains", which you can find at www. Once you get past the WebGL stuff (which is quite straightforward in retrospect), the tricky bit is doing the actual tile generation.
In order to render chunks correctly you need to estimate the "screen space" error, which is a projection of the "world space" error (the difference in altitude between a chunk and its 4 child chunks). The problem is that this error generation is a bottom-up process, whereas I'd like it to be a top-down process. Given a large dataset and a high number of zoom levels, this preprocessing step can become quite big. Did you implement the algorithm described there?
That means manually simplifying the chunks from the leaves, bottom up to the root tile, and calculating the error of every removed vertex for every tile? What I don't quite get (clearly my fault, I'm new to 3D ;-)) is this error metric (see my request): do you project the parent tile center and the 4 child centers into screen space and then calculate the error distance?
I've automated the tile generation, which does build the entire tile set in the bottom-up fashion as described. It's a recursive algorithm where the highest-zoom tiles are generated directly from the elevation data, and each lower zoom level aggregates its 4 child tiles. Regarding the errors: what you need to calculate and store with each tile is the maximum geometric error in world space. You will then use this in the rendering pass to project it to screen space (the tau threshold is measured in pixels).
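The projection step can be sketched as follows, using the usual chunked-LOD formula ρ = (ε / D) · (w / (2 tan(fov / 2))), where ε is the tile's world-space geometric error and D its distance from the camera; the function names and parameters here are my own:

```javascript
// Project a tile's world-space geometric error (geomError) to a
// screen-space error in pixels. K = w / (2 * tan(fov / 2)) is the
// perspective scaling factor for a viewport w pixels wide.
function screenSpaceError(geomError, distance, viewportWidth, fovRadians) {
  const K = viewportWidth / (2 * Math.tan(fovRadians / 2));
  return (geomError / distance) * K;
}

// Refine (split) the tile when its projected error exceeds the
// pixel threshold tau.
function shouldSplit(geomError, distance, viewportWidth, fovRadians, tau) {
  return screenSpaceError(geomError, distance, viewportWidth, fovRadians) > tau;
}
```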
In order to calculate the max geometric error, do as in the paper: you construct a lower-level tile by taking every other sample from the 4 children. So for every sample you remove during this process, look at the surrounding remaining ones and take the absolute value of the difference in altitude. The maximum of these for any given tile is that tile's maximum geometric error. For the even rows and columns, take the 4-point average of the remaining samples. Essentially you need to evaluate the error introduced by removing each sample (or "post") by aggregating them into lower detail levels.
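The error bookkeeping can be sketched for a single row of samples (a simplification of the full 2-D case described above; the names are mine):

```javascript
// Downsample one row of heightfield samples: the coarser tile keeps
// every other sample, and each removed sample contributes
// |removed - average of its two surviving neighbours| to the tile's
// maximum geometric error.
function downsampleRow(samples) {
  const kept = [];
  let maxError = 0;
  for (let i = 0; i < samples.length; i += 2) kept.push(samples[i]);
  for (let i = 1; i < samples.length - 1; i += 2) {
    const interpolated = (samples[i - 1] + samples[i + 1]) / 2;
    maxError = Math.max(maxError, Math.abs(samples[i] - interpolated));
  }
  return { kept, maxError };
}
```

In the full pipeline the tile's stored error would typically also be taken as the maximum over its children's errors, so the error never decreases toward the root of the quadtree.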
And on the meshes: no, I do not use different meshes for each tile; I only have one lattice structure in memory, which gets reused. Each tile is essentially the same, but at a different location and scale. The altitudes are passed along as vertex attributes (along with the morph targets, to prevent "popping" as detail increases), but the position VBOs are reused. The scale and translation (location) are passed along as uniforms, which are used to displace the vertices in the vertex shader.

Many thanks for your lengthy explanation, it's now clear to me.
I think I'll try a different approach, because my terrain is too large to be loaded completely upfront. What I've got working by now is a quadtree implementation that loads the child chunks in the view frustum, depending on the distance from the camera.
As a next step I'll try to implement a method that takes the camera angle into account, calculating the screen size of the tile. Currently I have every tile in a separate mesh; performance seems to be no problem so far.
Currently I get gaps between tiles with different resolutions; the next step is implementing the skirts in my PlaneGeometries.

I work with a 50 GB dataset, which is obviously too large to be "loaded" upfront.
The chunked LOD approach lends itself very well to out-of-core rendering. The whole purpose of doing the geometric error calculation is precisely to avoid doing chunk selection on distance from the camera alone: if you use camera distance, then every chunk gets the same treatment, regardless of how rough its terrain is.
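The resulting top-down selection pass might look like this (a hedged sketch: `project` stands in for the screen-space error projection, and the chunk fields are assumed, not from the actual renderer):

```javascript
// Top-down chunk selection: recurse while the chunk's projected
// screen-space error is above tau, otherwise (or when there are no
// children left to refine into) render the chunk itself.
// `chunk` is assumed to carry { geomError, children } and `project`
// to return the chunk's screen-space error in pixels.
function selectChunks(chunk, project, tau, out) {
  if (chunk.children && project(chunk) > tau) {
    for (const child of chunk.children) selectChunks(child, project, tau, out);
  } else {
    out.push(chunk);
  }
  return out;
}
```

Note that a rough chunk (large `geomError`) keeps refining at distances where a flat chunk of the same size has long since stopped, which is exactly the behaviour distance-only selection cannot give you.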
My first approach was to uniformly select chunks based solely on camera distance, but this was far from sufficient for me. One thing I can also strongly recommend is doing the altitude morphing: this almost completely hides the splitting of chunks and eliminates the nasty side-effect of terrain popping up as you increase zoom levels.
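A minimal sketch of such morphing, assuming the morph factor is driven by how far the chunk is past its split threshold (the ramp chosen here is illustrative, and the names are mine):

```javascript
// Geomorphing: blend each vertex's altitude between the parent's
// coarse value and the child's detailed value, so extra detail fades
// in instead of popping. t runs from 0 at the split threshold (tau)
// to 1 at twice the threshold, clamped to [0, 1].
function morphAltitude(coarse, detailed, screenError, tau) {
  const t = Math.min(Math.max(screenError / tau - 1, 0), 1);
  return coarse + (detailed - coarse) * t;
}
```

In the renderer described above this blend would happen in the vertex shader, with the morph targets passed as vertex attributes; here it is written out on the CPU only to show the arithmetic.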
But right now, my main task is the server part.
Do you have an example of what you're doing available?
Ah, good to know! Is your work-in-progress available online? I'd love to have a look and contribute.

Thanks for the link, interesting read.

Yes, I did implement the algorithms presented in the paper. I hope this brain-dump makes sense!
Hi, well, your terrain does not need to be loaded upfront: the tiles need to be generated on the server upfront, to calculate the recursive geometric error.
It's basically going to be a 'chunked' scheme, in which the world is broken into chunks and I generate multiple meshes at different resolutions for each chunk. When rendering, I decide which level of detail to use for each particular chunk based on e.g. This is similar to how many terrain renderers work.
Since summertime of , I've been doing some personal research into hardware-friendly continuous LOD. It renders CLOD, with geomorphs, using batched primitives. It's very friendly to texture LOD and paging data off disk. The downsides include a long preprocessing phase, and some size overhead for the disk file.