Saturday, 21 December 2024

ATI’s Trilinear Filtering Chatlog

As promised, ATI held a chat today regarding the trilinear filtering allegations. It was recently discovered that ATI has implemented a technique (something similar to NVIDIA’s trilinear optimizations) that analyses mipmap levels and adapts the amount of filtering applied between them.

Q: Why are you not doing full AF ?
A: Andy/Raja Would you say that our AF is not “full” AF? After all, we’ve been using an adaptive method for this since the R200. If you select 16x in the control panel, you may get 8x, 4x, or 2x depending on how steeply angled the surface is. Doing 16x AF on a wall you’re viewing straight on would look exactly the same as no AF, but require 16x more texture samples. Why would it make any sense to do this? This is exactly the same idea we’re using for trilinear filtering.
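
The adaptive idea described in the answer above can be illustrated with a toy rule that picks an AF sample count from the surface's anisotropy ratio. This is a hypothetical sketch for illustration only, not ATI's actual hardware logic; the function name and the power-of-two selection rule are assumptions.

```python
def adaptive_af_degree(anisotropy_ratio, max_degree=16):
    """Pick the smallest power-of-two AF degree that covers the
    texture-footprint anisotropy ratio, capped at the user setting.
    A wall viewed head-on (ratio ~1) needs no extra samples at all."""
    degree = 1
    while degree < max_degree and degree < anisotropy_ratio:
        degree *= 2
    return degree

adaptive_af_degree(1.0)    # -> 1  (head-on surface: 16x AF would look identical)
adaptive_af_degree(3.5)    # -> 4
adaptive_af_degree(100.0)  # -> 16 (capped at the control-panel setting)
```

The point of the sketch is the one Andy/Raja make: spending 16x the texture samples on a surface whose footprint is not elongated buys nothing visually, so an adaptive scheme clamps the work to what the geometry actually requires.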

Q: Why are your trilinear optimizations different from what NVIDIA is doing?
A: Raja: We won’t comment on competitors’ algorithms. Our focus is to retain full image quality while offering the best performance possible.

No one is suggesting that our texture filtering algorithms produce worse output than previous generations. In fact, if we took them out now the speed would be marginally lower and we would receive complaints from users that our quality had dropped. We did receive feedback from several folks who think that the X800 IQ is better than our 9800 series; it is always our goal to improve quality in newer hardware generations. To assist in this, the X800 series has many additional quality controls in hardware compared to the 9800.

Our target is also to avoid any need to detect applications, and as such we have to try to be sure that our image quality remains high in all cases. To achieve this we spent a lot of effort developing algorithms to make the best use of our quality tuning options. This isn’t a performance enhancement applied to popular gaming benchmarks, it’s a performance boost – without image degradation – applied generically to every game that uses box-filtered mipmaps, which is most of them. This means, incidentally, that it’s working during the WHQL tests (unlike optimizations activated by application detection), which means that it has to meet the very stringent image quality requirements of those tests.
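
For context on "box-filtered mipmaps": each level of such a mip chain is produced by averaging 2x2 blocks of the level above it. A minimal sketch (function and variable names are illustrative, not from ATI):

```python
def next_mip_level(level):
    """Downsample a 2D grayscale image (list of lists of floats)
    by averaging each 2x2 block — the classic box filter."""
    h, w = len(level), len(level[0])
    return [
        [
            (level[2*y][2*x] + level[2*y][2*x+1] +
             level[2*y+1][2*x] + level[2*y+1][2*x+1]) / 4.0
            for x in range(w // 2)
        ]
        for y in range(h // 2)
    ]

def build_mip_chain(base):
    """Repeatedly box-filter until the 1x1 level is reached."""
    chain = [base]
    while len(chain[-1]) > 1 and len(chain[-1][0]) > 1:
        chain.append(next_mip_level(chain[-1]))
    return chain

base = [[float((x + y) % 4) for x in range(4)] for y in range(4)]
chain = build_mip_chain(base)  # 4x4 -> 2x2 -> 1x1
```

Because adjacent levels of a box-filtered chain are filtered versions of each other, an optimization that blends less between them can stay visually indistinguishable, which is the property the answer above relies on.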

Q: TheRock: Will full trilinear filtering be allowed to be set in the drivers?
A: Andy/Raja We try to keep the control panel as simple as possible (even as its complexity increases), and if the image quality is identical or better there doesn’t seem to be a need to turn it off. However, if our users request otherwise and can demonstrate that it’s worthwhile, we would consider adding an option to the control panel.

Q: Is this really trilinear filtering?
A: Andy/Raja Yes. It’s a linear function between the two mipmap levels based on the LOD.
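
The answer above can be sketched as a linear interpolation between bilinear samples taken from two adjacent mip levels, weighted by the fractional part of the LOD. This is a simplified textbook illustration, not ATI's implementation; the per-level sample values are stand-ins for real bilinear fetches.

```python
import math

def trilinear_sample(mip_samples, lod):
    """Blend between two adjacent mip levels based on the LOD.
    mip_samples: one (already bilinearly filtered) value per mip level."""
    level = int(math.floor(lod))
    frac = lod - level                      # fractional LOD drives the blend
    lo = mip_samples[level]
    hi = mip_samples[min(level + 1, len(mip_samples) - 1)]
    return lo * (1.0 - frac) + hi * frac

samples = [1.0, 0.5, 0.25]        # hypothetical bilinear results at mips 0, 1, 2
trilinear_sample(samples, 0.0)    # -> 1.0  (pure mip 0)
trilinear_sample(samples, 0.5)    # -> 0.75 (even blend of mips 0 and 1)
```

The disputed optimizations change how that blend weight behaves between levels, not whether a blend happens at all, which is why ATI insist the result still qualifies as trilinear.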

Q: When will ATI restore full trilinear so that review sites can actually rebench and retest your cards, since any previous review benchmarks are invalidated by this cheat/optimisation/whatever?
A: Andy/Raja We have never removed “full trilinear”. We certainly do not believe that any benchmarks have been invalidated by our techniques. In all cases reviewed so far we believe that we have higher image quality than other implementations.

Q: Is ATI cheating if it only performs true full trilinear AF when coloured mipmaps are enabled, as the article on ComputerBase.de describes?
A: Andy/Raja Absolutely not. If it were the case that we were only performing full trilinear with coloured mipmaps then you might have a point, but this is emphatically not what we do. We take advantage of properties of texture maps for performance and IQ gains. In cases where we are not able to determine that the texture content is appropriate for these techniques we use legacy trilinear filtering. This includes cases such as dynamically uploaded texture maps where we avoid performing analysis so as not to cause any possible gameplay hitches.

Q: I think the whole community appreciates the time and the initiative in holding a chat about the current filtering algorithm, which has certainly raised a few issues. Question: was the new filtering algorithm intentional, and if so, why weren’t review sites notified about it? Thanks
A: Andy/Raja We are constantly tuning our drivers and our hardware. Every new generation of hardware provides the driver with more programmability for features and modes that were hard-wired in the previous generation. We constantly strive to increase performance and quality with every new driver release. There are literally dozens and dozens of such optimizations that went into our drivers in the last year or so. Sometimes such optimizations are not even communicated internally to, for example, our marketing and PR teams. And many optimizations are very proprietary in nature and we cannot disclose them publicly anyway.

Q: Is X800’s hardware able to do “traditional” trilinear or is the new algorithm completely hardwired (not that I would mind)?
A: Andy/Raja The X800 hardware is capable of all features of previous generations, and many more besides.

Q: Is http://www.ixbt.com/video2/images/r420xt/r420-anis0x.jpg bilinear or bri/trilinear as it is supposed to be? I heard it is possibly a bug in CoD causing it to set filtering to bilinear, rather than a really bad trilinear filtering method; is this true?
A: Andy/Raja This we believe is a test error; the X800 images appear to have been obtained using only a bilinear filter. We have been unable to reproduce this internally. Also, note that the game defaults to bilinear when a new card is installed, which may explain the tester’s error.

Q: Why did ATI say to the general public that they were using trilinear by default, when in fact it was something else? (The quality is OK, I agree, but you did deceive by claiming it to be trilinear.)
A: We understand that there was confusion due to the recent reports suggesting otherwise. We provide trilinear filtering in all cases where trilinear filtering was asked for. As has been demonstrated many times by several people, almost every piece of hardware has a different implementation of the LOD and filtering calculations. If we started calling all the existing filtering implementations by different names, we would end up with many names for trilinear.

Q: If bit-comparison difference images can highlight IQ differences, and surely there must be some, why do you say there are no IQ differences when these comparisons show otherwise?
A: The bit-comparison differences between implementations occur for many reasons. We constantly make improvements to our hardware algorithms. Bit comparisons just say the outputs are different, not necessarily that one is better than the other. We are always on the lookout for cases where we can find IQ problems with our algorithms. We can guarantee you that there will be bit-wise mismatches with our future-generation hardware too, and the future-generation hardware will be better. Our algorithms are exercised by the stringent MS WHQL tests for mipmap filtering and trilinear, and we pass all these tests. These tests do not look for exact bit matches but have a reasonable tolerance for algorithmic and numeric variance.

Q: Is this algorithm implemented in hardware? Who is analysing the texture maps: is it just the driver doing that, or is it the chip?
A: The image analysis is performed by the driver to choose the appropriate hardware algorithm. This allows us to continually improve the quality and performance with future drivers.

Q: If it’s so good, why has it remained hidden from the public and not been marketed as “ATI SmartFilter” or some such? Surely if it’s as good as you say (better quality, faster speed), ATI marketing should be crowing about it? One of the issues here is that it *looks* like ATI is trying to hide things, even if what you have is a genuine improvement for the customer.
A: The engineering team at ATI is constantly improving our drivers by finding ways to take better advantage of the hardware. These improvements happen across all the Catalyst releases. We might have missed an opportunity to attach a marketing buzzword to this optimization!

Q: Can you give a more detailed explanation as to why the use of coloured mipmaps shows the use of full trilinear, which doesn’t correspond to what seems to occur in a normal, real-world situation?
A: Coloured mipmaps naturally show full trilinear as our image quality analysis reveals that there could be visible differences in the image. It should be noted that trilinear was originally invented to smooth transitions between mip-levels, and in the original definition mip-levels should be filtered versions of each other, as coloured mip-levels clearly are not. Despite this, we understand that people often make use of hardware for purposes beyond that originally envisioned, so we try to make sure that everything always functions exactly as expected.
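
A hypothetical illustration of the kind of content check the answer describes: deciding whether one mip level is approximately a box-downsampled version of the level above it. Coloured debug mipmaps (solid, unrelated colours per level) clearly fail such a test. This sketch is an assumption about the general approach, not ATI's actual driver analysis; all names here are invented for illustration.

```python
def looks_box_filtered(upper, lower, tolerance=1e-3):
    """Return True if `lower` is approximately a 2x2 box-downsample
    of `upper` (both are 2D lists of floats, `upper` twice the size).
    A driver doing a check like this would fall back to legacy
    trilinear whenever the levels are not filtered versions of
    each other — which is exactly the coloured-mipmap case."""
    for y in range(len(lower)):
        for x in range(len(lower[0])):
            expected = (upper[2*y][2*x] + upper[2*y][2*x+1] +
                        upper[2*y+1][2*x] + upper[2*y+1][2*x+1]) / 4.0
            if abs(lower[y][x] - expected) > tolerance:
                return False
    return True

natural_upper = [[0.0, 1.0], [1.0, 0.0]]
natural_lower = [[0.5]]      # a true box-downsample of natural_upper
coloured_lower = [[1.0]]     # an unrelated solid "debug colour" level
looks_box_filtered(natural_upper, natural_lower)    # -> True
looks_box_filtered(natural_upper, coloured_lower)   # -> False
```

This is consistent with the answer's claim: coloured mipmaps trip the "visible differences" detection, so they always receive full trilinear, while natural filtered chains qualify for the optimized path.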

Q: From previous comments, there has been mention of this technique being used for a while (~12 months?) in the RV300 series of chips, but mostly with the R420. Can you tell us which cards, which Catalyst versions, and/or which games exist where we can see similar tendencies?
A: We have had new filtering algorithms in place since Cat 3.4 or so. Note that image quality has improved over various driver updates since then. Also, as noted earlier, the X800 provides better controls than earlier parts. It will be hard to find an exact match with our earlier hardware and drivers.

Q: Don’t you think such a tradeoff is inconceivable in a $500 graphics card?
A: We think that what we do is expected of all our cards, in particular the more expensive ones. Our users want the best looking results and the highest quality results. They want us to go and scratch our heads and come up with new ways to improve things. Users of ATI cards from last year want us to come out with new drivers that improve their performance and maintain the image quality. We have dedicated engineering teams that work hard to improve things. It’s an ongoing effort, exploring new algorithms, finding new ways to improve the end user experience, which is what all this is about. And we are listening too; if you don’t like what we offer, let us know and we will strive to improve things.

Q: Image quality is a relative term. The real question is: does the claimed “trilinear filtering” produce a byte-for-byte replica of “true trilinear filtering”? Whether or not the image quality is “the same” or “essentially” the same is irrelevant to this question.
A: Byte for byte compared to what? “True trilinear” is an approximation of what would be the correct filtering, a blending between two versions, one which is too blurry and one too sharp. An improved filter is not byte for byte identical to anything other than itself, but that doesn’t mean it isn’t a better approximation.

Q: Do you think that you can still compare benchmarks with other brands, even though you use a different approach (a non-equivalent technique)?
A: We’ve answered this, and yes, we feel we can compare ourselves to any brand, as we believe our quality and performance are higher. Perhaps at times we should be upset about people comparing us to lower-quality implementations.

Q: What’s the patent number and filing date of this algorithm?
A: This is in the patent pending process right now. So we will not put out the actual patent information at this time. Once approved, anyone can go read the patent.

Q: What performance boost does this give you, anyway?
A: It’s a very mild optimization at the default levels, on the order of a few percent. But this is a neat algorithm – we encourage you to take the texture slider all the way towards “performance” – we don’t think you’ll be able to see any image change until the very end, and the performance increases are significant.

Thanks for your time. We appreciate your persistence through the technical difficulties. To ensure that you can all read all the answers, we will post the transcript of the session at www.ATI.com/online/chat.
