Horde3D

Next-Generation Graphics Engine

PostPosted: 31.05.2008, 13:03 

Joined: 22.11.2007, 17:05
Posts: 707
Location: Boston, MA
It turns out that it would be very handy to have access to floating-point depth buffers for render targets. Particularly with planets and long-range shadowing, 32-bit depth textures are a must.

The only change required to get a 16- or 32-bit depth buffer is a single line in egRendererBase.cpp, line 526. Change this:
Code:
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, rb.width, rb.height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0x0 );

To this:
Code:
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, rb.width, rb.height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0x0 );

or this:
Code:
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, rb.width, rb.height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0x0 );


I have made this change to my local source, but it would be nice to be able to specify this per render target in the pipeline config.

_________________
Tristam MacDonald - [swiftcoding]


PostPosted: 01.06.2008, 15:29 
Engine Developer

Joined: 10.09.2006, 15:52
Posts: 1217
Are you sure there is a difference? As far as I know, ATI X1000 cards only have 16-bit depth for render targets, and specifying just GL_DEPTH_COMPONENT makes the driver automatically select the optimal format. I also don't think it makes a difference whether you specify GL_UNSIGNED_BYTE or GL_FLOAT, since that parameter refers to the image data you pass to glTexImage2D (which is NULL in our case). But I'm not entirely certain here...


PostPosted: 02.06.2008, 12:07 

Joined: 22.11.2007, 17:05
Posts: 707
Location: Boston, MA
marciano wrote:
Are you sure there is a difference? As far as I know, ATI X1000 cards only have 16-bit depth for render targets.
I can't be absolutely sure, since I am judging based on visual quality, but passing GL_DEPTH_COMPONENT32 does seem to remove my z-buffer artifacts.

However, if you are correct, this raises another issue. Since the main framebuffer has a 32-bit depth buffer (at least on my X1600), if the render buffers only have 16 bits then we may need some way to render into the framebuffer and then copy the depth portion into a texture (before we render more into the framebuffer). Perhaps two new pipeline commands, CopyColor and CopyDepth, which use glCopyTexImage2D to copy the framebuffer into the specified texture?
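
A minimal sketch of what such a CopyDepth command could boil down to (the function name and parameters are made up for illustration; it assumes a depth texture already exists and that the framebuffer to copy from is currently bound):
Code:
// Hypothetical CopyDepth helper: copy the depth portion of the currently
// bound framebuffer into an existing depth texture (names are made up).
void copyFramebufferDepth( GLuint depthTex, GLsizei width, GLsizei height )
{
    glBindTexture( GL_TEXTURE_2D, depthTex );
    // Re-specifies the texture from the framebuffer's depth data, starting at (0, 0)
    glCopyTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 0, 0, width, height, 0 );
    glBindTexture( GL_TEXTURE_2D, 0 );
}

A CopyColor command would look much the same, just with GL_RGBA8 (or similar) as the internal format.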

_________________
Tristam MacDonald - [swiftcoding]


PostPosted: 07.06.2008, 14:31 

Joined: 22.11.2007, 17:05
Posts: 707
Location: Boston, MA
marciano wrote:
Are you sure there is a difference? As far as I know, ATI X1000 cards only have 16-bit depth for render targets, and specifying just GL_DEPTH_COMPONENT makes the driver automatically select the optimal format.
OK, you are partially correct. Specifying just GL_DEPTH_COMPONENT does select the optimal format, but that is not necessarily the deepest format: in particular, on all ATI X1000-series cards it will never choose a 32-bit depth buffer unless you are using a 32-bit float colour buffer.

So we do need to provide a setting for this, especially as some people may want to limit the depth to 8 or 16 bits for bandwidth reasons.
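
For anyone who wants to verify what the driver actually picked on their card, a quick debugging snippet (the texture handle name is made up; run it after the glTexImage2D call shown above):
Code:
// Debugging only: ask GL how many depth bits were actually allocated for the
// depth texture (handle name is hypothetical).
GLint depthBits = 0;
glBindTexture( GL_TEXTURE_2D, depthTex );
glGetTexLevelParameteriv( GL_TEXTURE_2D, 0, GL_TEXTURE_DEPTH_SIZE, &depthBits );
// depthBits comes back as e.g. 16, 24 or 32, depending on the internal format chosen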

Quote:
I also don't think it makes a difference whether you specify GL_UNSIGNED_BYTE or GL_FLOAT, since that parameter refers to the image data you pass to glTexImage2D (which is NULL in our case).
You are entirely correct here.

_________________
Tristam MacDonald - [swiftcoding]


PostPosted: 08.06.2008, 10:35 
Engine Developer

Joined: 10.09.2006, 15:52
Posts: 1217
If I recall correctly, the problem was that you don't get a valid render target when you specify a depth format higher than 16 bits on an ATI X1000. NVidia seemed to support better precision. So by explicitly specifying a precision higher than 16 bits you either break the code on ATI or get less quality on NVidia. One thing we could do, though, is add an attribute depthPrecisionHint that tries to set the depth precision explicitly but uses GL_DEPTH_COMPONENT as a fallback in case of failure.
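
Roughly like this, I imagine (a sketch only, against the EXT_framebuffer_object path; the rb.depthTex member name is made up, error handling is omitted, and it assumes the render target's FBO is currently bound):
Code:
// Sketch of the proposed depthPrecisionHint fallback: try the requested
// precision first, then check the FBO status and fall back to the generic
// GL_DEPTH_COMPONENT if the framebuffer ends up incomplete.
GLenum internalFormat = GL_DEPTH_COMPONENT32;  // would come from depthPrecisionHint

glBindTexture( GL_TEXTURE_2D, rb.depthTex );
glTexImage2D( GL_TEXTURE_2D, 0, internalFormat, rb.width, rb.height, 0,
              GL_DEPTH_COMPONENT, GL_FLOAT, 0x0 );
glFramebufferTexture2DEXT( GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                           GL_TEXTURE_2D, rb.depthTex, 0 );

if( glCheckFramebufferStatusEXT( GL_FRAMEBUFFER_EXT ) != GL_FRAMEBUFFER_COMPLETE_EXT )
{
    // Fallback: let the driver choose whatever depth precision it supports
    glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, rb.width, rb.height, 0,
                  GL_DEPTH_COMPONENT, GL_FLOAT, 0x0 );
}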


PostPosted: 08.06.2008, 11:59 

Joined: 22.11.2007, 17:05
Posts: 707
Location: Boston, MA
marciano wrote:
If I recall correctly, the problem was that you don't get a valid render target when you specify a depth format higher than 16 bits on an ATI X1000.
I can definitely get 32 bits on my X1600, as opposed to the 8 bits I get with plain GL_DEPTH_COMPONENT.
Quote:
NVidia seemed to support better precision. So by explicitly specifying a precision higher than 16 bits you either break the code on ATI or get less quality on NVidia.
I don't quite get you here: do you mean that NVidia's default depth-buffer format for render targets is higher than 32 bits?

Quote:
One thing we could do, though, is add an attribute depthPrecisionHint that tries to set the depth precision explicitly but uses GL_DEPTH_COMPONENT as a fallback in case of failure.
I can't say I am surprised that OpenGL fails to do a proper fallback on its own, but are you sure we can correctly detect when we should fall back? Something is necessary, given the silly 8-bit default on my card (which may be a problem with the Apple drivers, who knows).

_________________
Tristam MacDonald - [swiftcoding]


PostPosted: 09.06.2008, 20:18 
Engine Developer

Joined: 10.09.2006, 15:52
Posts: 1217
Hmm, I thought the framebuffer object state was "incomplete" when I tried to attach a 24- or 32-bit depth buffer on an ATI X1600, but I'm no longer sure here (checking for the incomplete state is also how I meant to detect whether a fallback is required). The default precision on Windows for this card when using GL_DEPTH_COMPONENT seems to be 16 bits (I always got the "Shadow map precision is limited to 16 bit" warning in the log on ATI cards).

