A month after the release of the GTX 580, Nvidia is back, this time launching the GTX 570 enthusiast video card. The GTX 570 offers roughly a 25% performance increase over the GTX 470, and even a slight increase over the last-generation GTX 480.
The Nvidia GeForce GTX 570
Just a month ago, we saw the release of the Nvidia GeForce GTX 580, running the newly optimized and enhanced GF110 chip. This redesigned chip proved to be the fastest single GPU on the planet, allowing Nvidia to take the crown for the fastest and quietest video card on the market. With 512 CUDA cores and 16 PolyMorph engines, the GTX 580 pulled ahead of the GTX 480 by about 15-20%. When we pushed the card even further by overclocking, it could beat a pair of factory super-overclocked GTX 460 (GF104) cards, and was about 30-35% faster than the GTX 480. The Nvidia GeForce GTX 580 was designed for enthusiasts looking for the ultimate performance money can buy.
Today, Nvidia is releasing its latest and second top-of-the-line video card tailored for enthusiasts: the Nvidia GeForce GTX 570. The GTX 570 is built on the same new GF110 chip as its more powerful sibling, the GTX 580, and shares its support for FP16 texture filtering and improved Z-culling, though with 15 rather than 16 PolyMorph Engines.
But why are we mentioning that the GTX 580 is more powerful than the GTX 570? Because the GTX 570 has been cut down in both GPU resources and memory. 32 of its CUDA cores have been disabled, making it a 480-core GPU, and it carries 1280MB of GDDR5 memory on a 320-bit bus. The texture unit, ROP unit and streaming multiprocessor counts have been reduced as well: from 64 to 60 texture units, from 48 to 40 ROPs, and from 16 to 15 streaming multiprocessors. At first, this might make the card sound redundant next to the GTX 480s currently on the market, but with the release of the GTX 570, Nvidia will discontinue the GF100-based GTX 480.
So what can we expect from the GTX 570 if it is essentially a GTX 470 with the CUDA cores of a GTX 480? Looking back at the GTX 470’s specifications, that card had 448 CUDA cores (shader units) instead of the 480 the GTX 570 offers, along with 40 ROPs, a 607 MHz core clock, and an 837 MHz memory clock. The GTX 570, by contrast, runs a 732 MHz core clock and a 950 MHz memory clock. Immediately, we can see that the GTX 570 has a lot more performance up its sleeve than the older GTX 470, thanks to its 32 additional CUDA cores and its higher core and memory frequencies.
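Those paper specifications translate into theoretical memory bandwidth as follows. This is our own back-of-envelope sketch, assuming the quoted memory clocks are GDDR5 base clocks, so the effective data rate is four transfers per clock:

```python
def gddr5_bandwidth_gbs(bus_width_bits, memory_clock_mhz):
    """Theoretical bandwidth in GB/s: bus width in bytes times the
    effective GDDR5 transfer rate (4 transfers per clock)."""
    effective_mts = memory_clock_mhz * 4          # mega-transfers per second
    return bus_width_bits / 8 * effective_mts / 1000  # MB/s -> GB/s

gtx470 = gddr5_bandwidth_gbs(320, 837)  # ~133.9 GB/s
gtx570 = gddr5_bandwidth_gbs(320, 950)  # 152.0 GB/s
print(round(gtx470, 1), round(gtx570, 1))
```

On the same 320-bit bus, the higher memory clock alone is worth roughly 13% more bandwidth for the GTX 570.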
We also have a feeling that with these specs, the GTX 570 might be a bit faster than the old GTX 480. If so, it should have no trouble replacing the GTX 480, because the GTX 570 is anticipated to launch at around $349. Factory-overclocked cards will be available immediately at slightly higher prices, but stock cards should sell at around that mark. We have also heard that non-reference GTX 570 designs are coming, but we cannot disclose when those cards will become available.
As a result of the GPU redesign, the GF110-based cards also have a lower temperature threshold than their predecessor. Whereas the GTX 480’s GF100 chip had a temperature threshold of 105C, the GF110 in the GTX 580 and GTX 570 has a threshold of 97C.
The GTX 570 is also the same length as the GTX 480 and the GTX 580. Users should be able to run up to 3-way SLI with the GTX 570s.
Nvidia’s New Demos
Nvidia first released some exciting demos showcasing its tessellation capabilities with the GeForce GTX 580, and the demos have not changed for the GTX 570. Users can still enjoy checking out the features and performance of each 500 series video card in them. While these demos will also run on 400 series cards, optimal single-GPU performance requires a GeForce GTX 580/570. The first picture is from Endless City, a tessellated city landscape generated by the GPU. All the fine detail on the buildings comes from tessellated geometry rather than the bump maps we are accustomed to, and because the scene uses real, higher-polygon geometry instead of bump maps, all the lights can cast accurate shadows.
The second picture shows the Aliens vs. Triangles tessellation demo, in which users can modify the aliens in very fine detail. Once again, instead of relying on bump maps, the alien’s detail is modeled in high-polygon geometry. This allows extra options to be integrated, such as having the skin deform appropriately when something interacts with the alien, and it enables much higher quality rendering than was possible with bump maps in the past.
The GF110 Architecture – Improved / Optimized FERMI
The GTX 570 has not changed much from the GTX 580, and our GTX 580 review has a detailed explanation of all the improvements and optimizations the new GF110 went through. We mentioned the FP16 texture filtering and Z-cull on the first page; of all the architectural enhancements in the GTX 580/570, these were the two major ones. FP16 texture filtering helps with texture-intensive applications. The chart below from Nvidia shows how the architectural enhancements improved performance over the GTX 480. The extra performance from faster core and memory clocks, plus the 32 cores unlocked on the GTX 580, made that card a true 512 CUDA core GPU. The GTX 570 has 480 CUDA cores, but the effect should be similar when we consider that the GTX 470 had only 448.
With the new GF110 chip, PolyMorph and Raster Engines help with tessellation. While the PolyMorph engines improve tessellation performance in games, the Raster Engines handle the conversion of polygons to pixel fragments. With 16 PolyMorph Engines (15 on the GTX 570) and 512 CUDA cores (480 on the GTX 570), the cards can achieve a stunning 2 billion triangles per second. That is a tremendous number of polygons, something we would previously only see in Hollywood blockbuster movies, and now all of it can easily be rendered in real time on the GTX 580/570 GPUs. Nvidia’s new Endless City demo shows this off, rendering and playing back everything in real time.
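To put that 2-billion figure in perspective, a quick calculation of our own shows the per-frame geometry budget it would imply at common frame rates, assuming the peak rate could be sustained:

```python
TRIANGLES_PER_SECOND = 2_000_000_000  # Nvidia's quoted peak for the GF110

def triangles_per_frame(fps):
    # Peak geometry budget per frame if the throughput were sustained.
    return TRIANGLES_PER_SECOND // fps

print(triangles_per_frame(60))  # 33,333,333 triangles per frame at 60 FPS
print(triangles_per_frame(30))  # 66,666,666 triangles per frame at 30 FPS
```

Tens of millions of triangles per frame is film-render territory, which is exactly the comparison Nvidia is drawing.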
The Radeon HD series video cards still have a much harder time with tessellation-based benchmarks, which means that when games start incorporating extensive tessellation into their geometry, the Nvidia cards will have an advantage over their AMD counterparts. Some games already take advantage of tessellation, like H.A.W.X 2, and the Unigine Heaven 2.1 benchmark also tests tessellation capabilities. While the visual improvement from tessellation is very limited at the moment, we believe it will be taken much further in the future, making it possible to render characters, terrain, and objects far more believably than they are now.
For the GF110 design, Nvidia completely re-engineered the previous GF100 down to the transistor level, evaluating every block of the GPU. To achieve higher performance with lower power consumption, Nvidia modified a very large percentage of the transistors on the chip, using lower-leakage transistors on less timing-sensitive processing paths and higher-speed transistors on more critical ones. This is how Nvidia was able to enable the final 32 cores of the full Fermi architecture (GTX 580/570), adding another SM to the chip (GTX 580).
For many of Nvidia’s previous video cards, the GPU’s thermal protection features meant that the GPU would be downclocked at extreme temperatures, protecting the card from unwanted damage. However, with the advent of stress applications such as FurMark, MSI Kombustor, and OCCT, the latest video cards can be driven to dangerously high currents, potentially damaging components on the card. Nvidia integrated a new power monitoring feature into the GTX 580/570, which dynamically adjusts performance in certain stress applications if power draw exceeds the card’s specifications. Dedicated hardware circuitry monitors the current and voltage on each of the 12V rails in real time: the 6-pin and 8-pin connectors (6-pin and 6-pin on the GTX 570) and the PCI-Express edge connector.
Nvidia made improvements when developing the GTX 570 based on consumer feedback about the GTX 470, and the thermal characteristics of the GF110 chip are much better than those of the GF100. What we see on this chart is that the GTX 480 runs roughly 9-10 dBA louder than the GTX 580. Generally, a human perceives each 10 dBA increase as a doubling of loudness, so the GTX 580 runs much quieter than any high-end card Nvidia has released in the past few years.
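The "10 dBA sounds twice as loud" rule of thumb corresponds to a perceived-loudness ratio of 2^(Δ/10). A small sketch of ours, using that common psychoacoustic approximation, shows what the 9-10 dBA gap in the chart means:

```python
def perceived_loudness_ratio(delta_dba):
    """Rule-of-thumb psychoacoustic model: every +10 dBA roughly
    doubles perceived loudness, i.e. ratio = 2 ** (delta / 10)."""
    return 2 ** (delta_dba / 10)

print(perceived_loudness_ratio(10))           # 2.0 -> twice as loud
print(round(perceived_loudness_ratio(9), 2))  # ~1.87x at a 9 dBA gap
```

In other words, by this approximation the GTX 480 sounds close to twice as loud as the GTX 580 under load.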
Based on the tests we did in our labs, the GTX 580/570 does indeed run very quietly under high loads. We tested the thermal and acoustic improvements, and in our Silverstone TJ-10 chassis with some acoustic dampening on each side panel, the GTX 580/570 was totally inaudible during gaming; the other fans in the system were a bit louder than the card. When we ran FurMark, the fan sped up, but FurMark is not a real-life workload: it pulls more power, and generates more heat, than a real application would. If we push the fan on the GTX 580 to 100%, we can definitely hear it loud and clear, though the fan is normally limited to 85%. Depending on the card’s manufacturer, tweaked BIOSes may allow the fan to go all the way up to 100%. During our testing period, we played Metro 2033 for about an hour in a closed chassis with no side ventilation, and the fan speed only reached 58%, which kept the gaming environment very quiet.
The new cooling solution on the GTX 580 uses a special heatsink design built around what is called a vapor chamber. Nvidia explained that the vapor chamber on the GTX 570 is slightly different from the one on the GTX 580 because the card has less heat to deal with. Think of the vapor chamber as a heatpipe solution, except that instead of contacting the heatsink fins only in certain areas, it has 100% contact with every fin of the heatsink. This helps tremendously by spreading the heat across the entire heatsink.
The GTX 580/570 also has a new adaptive GPU fan control, and the card is designed for great cooling potential in SLI setups. The fan has been redesigned to generate a lower pitch and tone, which allows for lower acoustic noise. The back of the cover is designed to route the air towards the rear bracket, improving SLI temperature performance.
The vapor chamber is a sealed, fluid-filled chamber with thin copper walls. When the heatsink is placed on the GPU, the GPU quickly boils the liquid inside, and the resulting vapor spreads throughout the top of the chamber, transferring heat to the heatsink fins. There the vapor condenses, and the cooled liquid returns to the bottom of the chamber, allowing the whole process to restart. The hot heatsink fins themselves are cooled by the air being pushed through them.
Continue onto the next page, where we examine the Nvidia GeForce GTX 570 in more detail.
What is CUDA?
CUDA is NVIDIA’s parallel computing architecture that enables dramatic increases in computing performance by harnessing the power of the GPU (graphics processing unit).
With millions of CUDA-enabled GPUs sold to date, software developers, scientists and researchers are finding broad-ranging uses for CUDA, including image and video processing, computational biology and chemistry, fluid dynamics simulation, CT image reconstruction, seismic analysis, ray tracing and much more.
Computing is evolving from “central processing” on the CPU to “co-processing” on the CPU and GPU. To enable this new computing paradigm, NVIDIA invented the CUDA parallel computing architecture that is now shipping in GeForce, ION, Quadro, and Tesla GPUs, representing a significant installed base for application developers.
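The kind of work that benefits from this co-processing model is element-wise, data-parallel computation. As a conceptual sketch of ours (plain sequential Python standing in for what CUDA would execute as thousands of simultaneous threads), consider the classic SAXPY operation:

```python
def saxpy(a, x, y):
    """Compute y[i] = a*x[i] + y[i] for every element.
    On a CPU this loop runs serially; under CUDA each iteration
    becomes an independent thread, which is why data-parallel work
    of this shape maps so well onto hundreds of GPU cores."""
    return [a * xi + yi for xi, yi in zip(x, y)]

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # [12.0, 24.0, 36.0]
```

Because no element depends on any other, the GPU can process all of them at once, which is the essence of the "co-processing" paradigm described above.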
In the consumer market, nearly every major consumer video application has been, or will soon be, accelerated by CUDA, including products from Elemental Technologies, MotionDSP and LoiLo, Inc.
CUDA has been enthusiastically received in the area of scientific research. For example, CUDA now accelerates AMBER, a molecular dynamics simulation program used by more than 60,000 researchers in academia and pharmaceutical companies worldwide to accelerate new drug discovery.
In the financial market, Numerix and CompatibL announced CUDA support for a new counterparty risk application and achieved an 18X speedup. Numerix is used by nearly 400 financial institutions.
An indicator of CUDA adoption is the ramp of the Tesla GPU for GPU computing. There are now more than 700 GPU clusters installed around the world at Fortune 500 companies ranging from Schlumberger and Chevron in the energy sector to BNP Paribas in banking.
And with the recent launches of Microsoft Windows 7 and Apple Snow Leopard, GPU computing is going mainstream. In these new operating systems, the GPU will not only be the graphics processor, but also a general purpose parallel processor accessible to any application.
For information on CUDA and OpenCL, click here.
For information on CUDA and DirectX, click here.
For information on CUDA and Fortran, click here.
Some Games that use PhysX (Not all inclusive)
Batman: Arkham Asylum
Watch Arkham Asylum come to life with NVIDIA® PhysX™ technology! You’ll experience ultra-realistic effects such as pillars, tile, and statues that dynamically destruct with visual explosiveness. Debris and paper react to the environment and the force created as characters battle each other; smoke and fog will react and flow naturally to character movement. Immerse yourself in the realism of Batman Arkham Asylum with NVIDIA PhysX technology.
Darkest of Days
Darkest of Days is a historically based FPS in which gamers travel back and forth through time to experience history’s “darkest days”. The player uses period and future weapons while fighting through some of the most epic battles in history. The time-travel aspects of the game lead the player on missions where they at times need to fight on both sides of a war.
Sacred 2 – Fallen Angel
In Sacred 2 – Fallen Angel, you assume the role of a character and delve into a thrilling story full of side quests and secrets that you will have to unravel. Breathtaking combat arts and sophisticated spells are waiting to be learned. A multitude of weapons and items will be available, and you will choose which of your character’s attributes you will enhance with these items in order to create a unique and distinct hero.
Dark Void is a sci-fi action-adventure game delivering an adrenaline-fuelled blend of aerial and ground-pounding combat. Set in a parallel universe called “The Void,” players take on the role of Will, a pilot dropped into incredible circumstances within the mysterious Void. This unlikely hero soon finds himself swept into a desperate struggle for survival.
Cryostasis puts you in 1968, at the Arctic Circle in the Russian North. The main character, meteorologist Alexander Nesterov, is incidentally trapped inside the old nuclear icebreaker North Wind, frozen in the ice desert for decades. Nesterov’s mission is to investigate the mystery of the ship captain’s death – or, as it may well be, a murder.
In a city where information is heavily monitored, agile couriers called Runners transport sensitive data away from prying eyes. In this seemingly utopian paradise of Mirror’s Edge, a crime has been committed and now you are being hunted.
What is NVIDIA PhysX Technology?
NVIDIA® PhysX® is a powerful physics engine enabling real-time physics in leading edge PC games. PhysX software is widely adopted by over 150 games and is used by more than 10,000 developers. PhysX is optimized for hardware acceleration by massively parallel processors. GeForce GPUs with PhysX provide an exponential increase in physics processing power taking gaming physics to the next level.
What is physics for gaming and why is it important?
Physics is the next big thing in gaming. It’s all about how objects in your game move, interact, and react to the environment around them. Without physics, objects in many of today’s games just don’t act the way you’d want or expect them to in real life. Currently, most of the action is limited to pre-scripted or ‘canned’ animations triggered by in-game events like a gunshot striking a wall. Even the most powerful weapons can leave little more than a smudge on the thinnest of walls, and every opponent you take out falls in the same pre-determined fashion. Players are left with a game that looks fine but is missing the sense of realism necessary to make the experience truly immersive.
With NVIDIA PhysX technology, game worlds literally come to life: walls can be torn down, glass can be shattered, trees bend in the wind, and water flows with body and force. NVIDIA GeForce GPUs with PhysX deliver the computing horsepower necessary to enable true, advanced physics in the next generation of game titles making canned animation effects a thing of the past.
Which NVIDIA GeForce GPUs support PhysX?
The minimum requirement to support GPU-accelerated PhysX is a GeForce 8-series or later GPU with a minimum of 32 cores and a minimum of 256MB dedicated graphics memory. However, each PhysX application has its own GPU and memory recommendations. In general, 512MB of graphics memory is recommended unless you have a GPU that is dedicated to PhysX.
How does PhysX work with SLI and multi-GPU configurations?
When two, three, or four matched GPUs are working in SLI, PhysX runs on one GPU, while graphics rendering runs on all GPUs. The NVIDIA drivers optimize the available resources across all GPUs to balance PhysX computation and graphics rendering. Therefore users can expect much higher frame rates and a better overall experience with SLI.
A new configuration that’s now possible with PhysX is 2 non-matched (heterogeneous) GPUs. In this configuration, one GPU renders graphics (typically the more powerful GPU) while the second GPU is completely dedicated to PhysX. By offloading PhysX to a dedicated GPU, users will experience smoother gaming.
Finally, the two configurations above can be combined in one PC: SLI plus a dedicated PhysX GPU. As in the heterogeneous two-GPU case, graphics rendering takes place on the GPUs connected in SLI, while the non-matched GPU is dedicated to PhysX computation.
Why is a GPU good for physics processing?
The multithreaded PhysX engine was designed specifically for hardware acceleration in massively parallel environments. GPUs are the natural place to compute physics because, like graphics, physics processing is driven by thousands of parallel computations. Today, NVIDIA’s GPUs have as many as 512 cores, so they are well suited to take advantage of PhysX software. NVIDIA is committed to making the gaming experience exciting, dynamic, and vivid. The combination of graphics and physics impacts the way a virtual world looks and behaves.
DirectCompute Support on NVIDIA’s CUDA Architecture GPUs
Microsoft’s DirectCompute is a new GPU computing API that runs on NVIDIA’s current CUDA architecture under both Windows Vista and Windows 7. DirectCompute is supported on current DX10-class GPUs as well as DX11 GPUs. It allows developers to harness the massive parallel computing power of NVIDIA GPUs to create compelling computing applications in consumer and professional markets.
As part of the DirectCompute presentation at the Game Developers Conference (GDC) in March 2009 in San Francisco, CA, NVIDIA showed three demos running on a currently available NVIDIA GeForce GTX 280 GPU (see links below).
As a processor company, NVIDIA enthusiastically supports all languages and APIs that let developers access the parallel processing power of the GPU. In addition to DirectCompute and NVIDIA’s CUDA C extensions, other programming models are available, including OpenCL™. A Fortran language solution is also in development and is available in early access from The Portland Group.
NVIDIA has a long history of embracing and supporting standards, since a wider choice of languages improves the number and scope of applications that can exploit parallel computing on the GPU. With C and Fortran language support here today and OpenCL and DirectCompute available this year, GPU computing is now mainstream. NVIDIA is the only processor company to offer this breadth of development environments for the GPU.
OpenCL (Open Computing Language) is a new cross-vendor standard for heterogeneous computing that runs on the CUDA architecture. Using OpenCL, developers are able to harness the massive parallel computing power of NVIDIA GPUs to create compelling computing applications. As the OpenCL standard matures and is supported on processors from other vendors, NVIDIA will continue to provide the drivers, tools and training resources developers need to create GPU-accelerated applications.
In partnership with NVIDIA, OpenCL was submitted to the Khronos Group by Apple in the summer of 2008 with the goal of forging a cross platform environment for general purpose computing on GPUs. NVIDIA has chaired the industry working group that defines the OpenCL standard since its inception and shipped the world’s first conformant GPU implementation for both Windows and Linux in June 2009.
NVIDIA has been delivering OpenCL support in end-user production drivers since October 2009, supporting OpenCL on all 180,000,000+ CUDA architecture GPUs shipped since 2006.
NVIDIA’s Industry-leading support for OpenCL:
March – NVIDIA releases updated R195 drivers with the Khronos-approved ICD, enabling applications to use OpenCL on NVIDIA GPUs and other processors at the same time
January – NVIDIA releases updated R195 drivers, supporting developer-requested OpenCL extensions for Direct3D9/10/11 buffer sharing and loop unrolling
January – Khronos Group ratifies the ICD specification contributed by NVIDIA, enabling applications to use multiple OpenCL implementations concurrently
November – NVIDIA releases R195 drivers with support for optional features in the OpenCL v1.0 specification such as double precision math operations and OpenGL buffer sharing
October – NVIDIA hosts the GPU Technology Conference, providing OpenCL training for an additional 500+ developers
September – NVIDIA completes OpenCL training for over 1000 developers via free webinars
September – NVIDIA begins shipping OpenCL 1.0 conformant support in all end user (public) driver packages for Windows and Linux
September – NVIDIA releases the OpenCL Visual Profiler, the industry’s first hardware performance profiling tool for OpenCL applications
July – NVIDIA hosts first “Introduction to GPU Computing and OpenCL” and “Best Practices for OpenCL Programming, Advanced” webinars for developers
July – NVIDIA releases the NVIDIA OpenCL Best Practices Guide, packed with optimization techniques and guidelines for achieving fast, accurate results with OpenCL
July – NVIDIA contributes source code and specification for an Installable Client Driver (ICD) to the Khronos OpenCL Working Group, with the goal of enabling applications to use multiple OpenCL implementations concurrently on GPUs, CPUs and other types of processors
June – NVIDIA releases the industry’s first OpenCL 1.0 conformant drivers and developer SDK
April – NVIDIA releases industry first OpenCL 1.0 GPU drivers for Windows and Linux, accompanied by the 100+ page NVIDIA OpenCL Programming Guide, an OpenCL JumpStart Guide showing developers how to port existing code from CUDA C to OpenCL, and OpenCL developer forums
December – NVIDIA shows off the world’s first OpenCL GPU demonstration, running on an NVIDIA laptop GPU
June – Apple submits OpenCL proposal to the Khronos Group; the OpenCL Working Group is formed, with NVIDIA volunteering to chair it
December – NVIDIA Tesla product wins PC Magazine Technical Excellence Award
June – NVIDIA launches the Tesla C870, the first GPU designed for High Performance Computing
May – NVIDIA releases first CUDA architecture GPUs capable of running OpenCL in laptops & workstations
November – NVIDIA releases the first CUDA architecture GPU capable of running OpenCL
The GTX 570
We also noticed that the fan is larger on the GTX 570 than on the GTX 470; this is one of the fan design changes Nvidia advertised. With larger fans, it is usually easier to push more air through the card without resorting to higher RPM, which can cause motor noise. The full card length including the rear expansion bracket is roughly 11 inches, while the PCB itself is exactly 10.5 inches long. This means that to fit the GTX 570 into a case, users will need at least 10.6 inches of free space, though we always recommend a case with a bit more room for better air circulation and an easier fit. The height of the card matches standard video card height specifications. The cover of the GTX 570 has also been redesigned to allow better air circulation through the heatsink area of the card, further cooling the GPU.
The PCB design of the GTX 570 is essentially the same as the reference GTX 480 and GTX 580 design, though there have been slight changes to the components on the PCB. One of the most noticeable changes to the board is that the ventilation hole on the PCB is absent, which we mentioned a few paragraphs earlier. The newly redesigned fan and heatsink design is supposed to take into consideration the fact that some users will use SLI systems, so the GTX 570 has been fine tuned to make sure there won’t be any ventilation problems, even without the ventilation hole that we saw on the GTX 480.
The GTX 570 has two SLI connectors, which enable up to 3-way SLI with an appropriate motherboard. We noticed that instead of using latches to secure the plastic cover, as was done on the GTX 480, the GTX 570 uses screws. The whole cooler is also fastened with small star-head screws, for which drivers are really difficult to find, so changing the cooler will be very frustrating.
In these pictures, we can see each end of the GTX 570. The first picture shows two DVI ports and one HDMI port. These ports also carry audio output, which will only be useful if the monitor or TV has built-in speakers; while some might consider this a nice feature, most users will get better audio from their motherboard’s onboard audio or a dedicated sound card. The second picture shows a hole that allows air to be drawn in. This not only moves air all the way through the heatsink at the front, but also cools the components at the far end of the card, which would be difficult with just one fan.
The OS we use is Windows 7 Pro 64bit with all patches and updates applied. We also use the latest drivers available for the motherboard and any devices attached to the computer. We do not disable background tasks or tweak the OS or system in any way. We turn off drive indexing and daily defragging. We also turn off Prefetch and Superfetch. This is not an attempt to produce bigger benchmark numbers. Drive indexing and defragging can interfere with testing and produce confusing numbers. If a test were to be run while a drive was being indexed or defragged, and then the same test was later run when these processes were off, the two results would be contradictory and erroneous. As we cannot control when defragging and indexing occur precisely enough to guarantee that they won’t interfere with testing, we opt to disable the features entirely.
Prefetch tries to predict which files users will need at the next boot, caching the relevant files and storing them for later use. We want to learn how a program runs without any of its files cached, and disabling Prefetch means we do not have to clear its cache before every test run to get accurate numbers. Lastly, we disable Superfetch, which loads often-used programs into memory. It is one of the reasons Windows Vista occupies so much memory: Vista fills RAM in an attempt to predict what users will load. Having one test run with files cached and another with files uncached would produce inaccurate numbers, and since we cannot control Superfetch’s timing precisely, we turn it off. Because these four features can potentially interfere with benchmarking and are out of our control, we disable them. We do not disable anything else.
We ran each test a total of three times and reported the average of the three scores. Benchmark screenshots are of the median result. Anomalous results were discounted and the benchmarks rerun.
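The reduction we apply to each set of three runs can be sketched as follows (our own illustration of the procedure described above, with made-up sample scores):

```python
import statistics

def summarize_runs(scores):
    """Report the mean of the three runs; the published screenshot
    corresponds to the median run."""
    return {
        "reported_average": sum(scores) / len(scores),
        "screenshot_run": statistics.median(scores),
    }

print(summarize_runs([58.2, 60.1, 59.0]))  # average 59.1, screenshot from the 59.0 run
```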
Please note that due to new driver releases with performance improvements, we rebenched every card shown in the results section. The results here will be different than previous reviews due to the performance increases in drivers.
|Case||Silverstone Temjin TJ10|
|CPU||Intel Core i7 930 @ 3.8GHz|
|Motherboard||ASUS Rampage III Extreme ROG – LGA1366|
|Memory||OCZ DDR3-12800 1600MHz (8-8-8-24 1.65v) 12GB Triple-Channel Kit|
|CPU Cooler||Thermalright True Black 120 with 2x Zalman ZM-F3 FDB 120mm Fans|
|Hard Drives||4x Seagate Cheetah 600GB 10K 6Gb/s; 2x Western Digital RE3 1TB 7200RPM 3Gb/s|
|Video Cards||ASUS ENGTX580 1536MB; Nvidia GeForce GTX 580 1536MB; Nvidia GeForce GTX 570 1280MB; Galaxy GeForce GTX 480 1536MB; Palit GeForce GTX 460 Sonic Platinum 1GB in SLI; ASUS Radeon HD 6870; AMD Radeon HD 5870|
|Case Fans||2x Zalman ZM-F3 FDB 120mm (Top); 1x Zalman Shark’s Fin ZM-SF3 120mm (Back); 1x Silverstone 120mm (Front); 1x Zalman ZM-F3 FDB 120mm (Hard Drive Compartment); 1x Zalman ZM-F3 FDB 120mm (Side Ventilation for Video Cards and SAS RAID Controller)|
|Additional Cards||LSI 3ware SATA + SAS 9750-8i 6Gb/s RAID Card|
|Power Supply||Sapphire PURE 1250W Modular Power Supply|
Synthetic Benchmarks & Games
We will use the following applications to benchmark the performance of the Nvidia GeForce GTX 570 video card.
|Synthetic Benchmarks & Games|
|Unigine Heaven v.2.1|
|Mafia II – PhysX|
Crysis v. 1.21
Crysis is the most highly anticipated game to hit the market in the last several years. Crysis is based on the CryENGINE™ 2 developed by Crytek. The CryENGINE™ 2 offers real time editing, bump mapping, dynamic lights, network system, integrated physics system, shaders, shadows, and a dynamic music system, just to name a few of the state-of-the-art features that are incorporated into Crysis. As one might expect with this number of features, the game is extremely demanding of system resources, especially the GPU. We expect the highly anticipated Crysis 2 to succeed Crysis as one of the primary gaming benchmarks in the coming year.
Immediately, we can see the GTX 570 put up a very promising result, delivering 6% more FPS than the GTX 480. The result is even more impressive at the minimum frame rates, where the more efficient core yields 5 extra frames over the GTX 480. This makes the GTX 570 a good replacement for the GTX 480. Comparing the GTX 570 and the GTX 580, the GTX 570 performs at about 90% of the GTX 580 – not a bad showing considering it sells for much less than its big brother.
Increasing the resolution to 1920×1080, the GTX 570 still maintains about a 5% performance lead over the GTX 480. However, the GTX 480 takes a slight lead in minimum frame rate with AA and AF enabled, most likely thanks to the extra memory on the card.
Crysis Warhead is the standalone expansion to Crysis, featuring an updated, better-optimized CryENGINE™ 2. It was one of the most anticipated titles of 2008.
Crysis Warhead uses an updated CryENGINE™ 2 engine that is better optimized for the hardware. Here, the performance of the GTX 570 and GTX 480 is virtually identical. The GTX 580 has a clear 20% advantage over the GTX 570, and our GTX 460 SLI setup also shows a very good result here.
Once again, the GTX 480 has a slightly higher minimum frame rate than the GTX 570 at 1920×1080 with AA and AF enabled.
Unigine Heaven 2.1
Unigine Heaven is a benchmark program based on Unigine Corp’s latest engine, Unigine. The engine features DirectX 11, Hardware tessellation, DirectCompute, and Shader Model 5.0. All of these new technologies combined with the ability to run each card through the same exact test means this benchmark should be in our arsenal for a long time.
At higher resolutions and under extreme tessellation, we again see the GTX 570 performing about 5% faster than the GTX 480 but about 10% slower than the GTX 580. The AMD Radeon HD5000 and HD6000 series cards lag behind at the higher tessellation levels.
We used a 60 second Fraps run and recorded the Min/Avg/Max FPS rather than rely on the built-in utility for determining FPS. We started the benchmark, triggered Fraps, and let it run on stock settings for 60 seconds without making any adjustments or changing camera angles. We just let it run at default and had Fraps record the FPS and log them to a file for us.
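Turning those logged samples into the Min/Avg/Max numbers we report takes only a trivial bit of scripting. Here is a minimal sketch, assuming a simple Fraps-style log with one FPS sample per line and an optional header row (the actual Fraps CSV layout may differ slightly):

```python
def summarize_fps(lines):
    """Return (min, avg, max) FPS from an iterable of log lines."""
    samples = []
    for line in lines:
        line = line.strip()
        # Skip header rows and blank lines; keep numeric samples only.
        if not line or not line.replace(".", "", 1).isdigit():
            continue
        samples.append(float(line))
    return min(samples), sum(samples) / len(samples), max(samples)

# Hypothetical log contents; a real run would read the Fraps CSV file.
sample_log = ["FPS", "30", "60", "45"]
lo, avg, hi = summarize_fps(sample_log)
print(f"Min: {lo:.0f}  Avg: {avg:.1f}  Max: {hi:.0f}")  # Min: 30  Avg: 45.0  Max: 60
```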
Our next benchmark, Stone Giant, is a DirectX 11 tech demo built on the BitSquid Tech engine. Key features of the BitSquid Tech (PC version) include:
- Highly parallel, data oriented design
- Support for all new DX11 GPUs, including the NVIDIA GeForce GTX 400 Series and AMD Radeon 5000 series
- Compute Shader 5 based depth of field effects
- Dynamic level of detail through displacement map tessellation
- Stereoscopic 3D support for NVIDIA 3D Vision
“With advanced tessellation scenes, and high levels of geometry, Stone Giant will allow consumers to test the DX11-credentials of their new graphics cards,” said Tobias Persson, Founder and Senior Graphics Architect at BitSquid. “We believe that the great image fidelity seen in Stone Giant, made possible by the advanced features of DirectX 11, is something that we will come to expect in future games.”
“At Fatshark, we have been creating the art content seen in Stone Giant,” said Martin Wahlund, CEO of Fatshark. “It has been amazing to work with a bleeding edge engine, without the usual geometric limitations seen in current games”.
The performance of the GTX 480 and the GTX 570 are virtually identical here.
We used a 60 second Fraps run and recorded the Min/Avg/Max FPS rather than rely on the built in utility for determining FPS. We started the benchmark, triggered Fraps and let it run on stock settings for 60 seconds with the AutoPilot ON. We just let it run at default (1920×1200) and had Fraps record the FPS and log them to a file for us.
This is one of the benchmarks where the GTX 570 seems to fall behind the GTX 480. The difference, though, is only about 1 frame between the two cards, so they are virtually tied.
Metro 2033 is an action-oriented video game blending survival horror and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for the Xbox 360 and Microsoft Windows. In March 2009, 4A Games announced a partnership with Glukhovsky to collaborate on the game. The game was announced a few months later at the 2009 Games Convention in Leipzig; a first trailer came along with the announcement. When the game was announced, it had the subtitle “The Last Refuge,” but this subtitle is no longer being used.
The game is played from the perspective of a character named Artyom. The story takes place in post-apocalyptic Moscow, mostly inside the metro system where the player’s character was raised (he was born before the war, in an unharmed city). The player must occasionally go above ground on certain missions and scavenge for valuables.
The game’s locations reflect the dark atmosphere of real metro tunnels, albeit in a more sinister and bizarre fashion. Strange phenomena and noises are frequent, and mostly the player has to rely on their flashlight and quick thinking to find their way around in total darkness. Even more lethal is the surface, as it is severely irradiated and a gas mask must be worn at all times due to the toxic air. Water can often be contaminated as well, and short contacts can cause heavy damage to the player, or even kill outright.
Often, locations have an intricate layout, and the game lacks any form of map, leaving the player to find their objectives using only a compass – weapons cannot be used while the compass is out, leaving the player vulnerable to attack during navigation. The game also lacks a health meter, relying on an audible heartbeat and blood spatters on the screen to show the player how close he or she is to death. There is no on-screen indicator of how long the player has until the gas mask’s filters begin to fail, save for a wristwatch divided into three zones signaling how much the filter can endure, so players must check it whenever they wish to know how long they have until their oxygen runs out. Players must replace the filters, which are found throughout the game. The gas mask also indicates damage in the form of visible cracks, warning the player a new mask is needed. The game does feature traditional HUD elements, however, such as an ammunition indicator and a list of how many gas mask filters and adrenaline (health) shots remain.
Another important factor is ammunition management. As money lost its value in the game’s setting, cartridges are used as currency. There are two kinds of bullets that can be found: those of poor quality made by the metro-dwellers, and those made before the nuclear war. The ones made by the metro-dwellers are more common, but less effective against the dark denizens of the underground labyrinth. The pre-war ones, which are rare and highly powerful, are also necessary to purchase gear or items such as filters for the gas mask and med kits. Thus, the game involves careful resource management.
We left Metro 2033 on all high settings with Depth of Field on.
Metro 2033 is one of the most demanding games on the market right now, and it pushes the GTX 570 quite far. At 1680×1050, both the GTX 570 and GTX 480 are able to average above the 30 FPS needed for smooth gameplay; at the higher resolution, however, neither card is able to break the 30 FPS mark. Keep in mind the tests are done with 4xMSAA and 16xAF, so they are run with a fairly high level of eye candy. The GTX 570 again manages to outperform the GTX 480 by a small percentage.
The newest video benchmark from the gang at Futuremark. This utility is still a synthetic benchmark, but one that more closely reflects real world gaming performance. While it is not a perfect replacement for actual game benchmarks, it has its uses. We tested our cards at the ‘Performance’ setting.
Nothing out of the ordinary in 3DMark Vantage. The cards scaled fairly well in the GPU score, with the GTX 570 coming in ahead of the GTX 480 but behind the GTX 580.
Developed by Ubisoft, Tom Clancy’s H.A.W.X. 2 plunges fans into an explosive environment where they can become elite aerial soldiers in control of the world’s most technologically advanced aircraft. The game will appeal to a wide array of gamers, as players get the chance to control exceptional pilots trained to use cutting-edge technology in amazing aerial warfare missions.
HAWX 2 is yet another benchmark where the GTX 480 has a slight lead over the GTX 570. HAWX 2 is not particularly harsh on our graphics cards, and all of them are able to deliver solid performance of well over 30 FPS, even with 32x AA at 1920×1200.
To test for PhysX capabilities of the video cards, we benchmarked Mafia II with PhysX set at High.
What we can see in this example is that when PhysX was disabled in Mafia II, the two Palit GTX 460s were able to achieve up to 72.6FPS during the benchmark. Once PhysX was turned on and set to High, and we set the GTX 570 as a dedicated PhysX card, the GTX 570 was able to increase the score by roughly 20FPS just because it was dedicated to calculate only PhysX. Without having a dedicated PhysX card, our scores were around 35FPS. We measured the percentage that the PhysX card was working at, and it was only showing 55%-65%. The lower score results in Mafia II most likely come from imperfect support for PhysX, and also the extra geometry the video card has to render when extra objects are added to the scene for a better PhysX experience.
3D Performance Compared to Standard
We did a quick test in Mafia II to determine how the GeForce GTX 570 performs in 3D compared to standard 2D settings.
We expected 3D performance to drop by exactly half when we ran the benchmarks; however, in Mafia II the GTX 570 was able to achieve a higher score in 3D than expected. This could be because, when 3D is applied, the old PhysX calculations do not have to be recalculated, but are simply re-rendered on the main GPU (not the dedicated PhysX card). This seems to be the most reasonable explanation for the 3D results we see in Mafia II.
Unfortunately, we were unable to change the voltages on the GTX 570. While MSI’s Afterburner 2.1.0 Beta 4 allowed us to tweak the GTX 580’s GPU voltage, Afterburner does not yet support the GTX 570, so we had to rely on overclocking at stock voltages. The following stable core and memory frequencies were used for the results in the table below:
Core Clock: 850 MHz
Memory Clock: 2175 MHz
Here are some performance tests after the GTX 570 was overclocked:
| Video Cards | Unigine Heaven 2.1 / Crysis Warhead FPS, 1920×1200 (Extreme Tessellation for Unigine Heaven) |
| --- | --- |
| Nvidia GeForce GTX 580 | 44.5 / 39.46 |
| Nvidia GeForce GTX 580 OC (Stock Voltage) | 47.9 / 42.52 |
| Nvidia GeForce GTX 580 OC (Raised Voltage to 1235mV @ 933MHz GPU, 2275MHz Memory) | 52.6 / 45.13 |
| ASUS GeForce GTX 580 OC (Raised Voltage to 1300mV @ 959MHz GPU, 2348MHz Memory) | 54.0 / 46.78 |
| Nvidia GeForce GTX 570 | 39.2 / 33.23 |
| Nvidia GeForce GTX 570 OC (Stock Voltage) | 43.5 / 36.15 |
| Galaxy GeForce GTX 480 | 37.9 / 32.99 |
| Palit GeForce GTX 460 Sonic Platinum in SLI | 49.0 / 46.39 |
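As a quick sanity check on these numbers, the stock-voltage overclock buys the GTX 570 roughly a 9–11% improvement. The arithmetic is trivial (values taken from the table above):

```python
def gain_pct(stock_fps, oc_fps):
    """Percentage improvement of the overclocked result over stock."""
    return (oc_fps - stock_fps) / stock_fps * 100

# GTX 570 stock vs. GTX 570 OC (stock voltage), from the table above.
heaven = gain_pct(39.2, 43.5)     # Unigine Heaven 2.1
warhead = gain_pct(33.23, 36.15)  # Crysis Warhead
print(f"Heaven: +{heaven:.1f}%  Warhead: +{warhead:.1f}%")  # Heaven: +11.0%  Warhead: +8.8%
```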
To measure the temperature of the video card, we used MSI Afterburner and ran Metro 2033 for 10 minutes to find the Load temperatures for the video cards. The highest temperature was recorded. After playing for 10 minutes, Metro 2033 was turned off and we let the computer sit at the desktop for another 10 minutes before we measured the idle temperatures.
| Video Cards – Temperatures (Ambient 23C) | Idle | Load (Fan Speed) |
| --- | --- | --- |
| 2x Palit GTX 460 Sonic Platinum 1GB GDDR5 in SLI | 31C | 65C |
| Palit GTX 460 Sonic Platinum 1GB GDDR5 | 29C | 60C |
| Galaxy GeForce GTX 480 | 53C | 81C (73%) |
| Nvidia GeForce GTX 580 | 39C | 73C (66%) |
| ASUS GeForce GTX 580 | 38C | 73C (66%) |
| Nvidia GeForce GTX 570 | 39C | 81C (58%) |
The way the fan speed was automatically set up on the GTX 570 was quite interesting. We do know that temperatures in the 80s are still alright in most cases. While the GTX 580 was running at 73C, its fan was also running at a higher RPM. The GTX 570 kept its fan at 58%, which was very quiet to our ears. Of course with the lower fan speed, the heat was much higher too, but it was still within a reasonable range where the card would not get damaged.
The GTX 480, on the other hand, had the same temperature as the GTX 570, but was running at 15% higher fan speed. With the smaller fan on the GTX 480, the noise was also unbearable.
To get our power consumption numbers, we plugged in our Kill A Watt power measurement device and took the idle reading at the desktop during our temperature readings. We left the system at the desktop for about 15 minutes and took the idle reading. Then we ran Metro 2033 for a few minutes and recorded the highest power usage.
| Video Cards – Power Consumption | Idle | Load |
| --- | --- | --- |
| 2x Palit GTX 460 Sonic Platinum 1GB GDDR5 in SLI | 315W | 525W |
| Palit GTX 460 Sonic Platinum 1GB GDDR5 | 249W | 408W |
| Nvidia GTX 460 1GB | 237W | 379W |
| Galaxy GTX 480 | 248W | 439W |
| ASUS GeForce GTX 580 | 226W | 441W |
| Nvidia & ASUS GeForce GTX 580 in SLI | 310W | 660W |
| Nvidia GeForce GTX 580 | 225W | 439W |
| Nvidia GeForce GTX 580 OC (Stock Voltage) | 232W | 461W |
| Nvidia GeForce GTX 570 | 215W | 388W |
| Nvidia GeForce GTX 570 OC (Stock Voltage) | 222W | 408W |
| ASUS Radeon HD6870 | 235W | 375W |
| AMD Radeon HD5870 | 273W | 454W |
We can see that the GTX 570 drew the least power of all our cards at idle. Its power consumption during Metro 2033 was also much lower than the GTX 580’s. Overall, the performance-per-watt of the GTX 570 is just fantastic.
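To put a rough number on that performance-per-watt claim, we can pair each card’s Unigine Heaven 2.1 result with its Metro 2033 load draw from the tables in this review. Note this pairs numbers from two different tests, so it is a ballpark comparison of our own devising, not a standard metric:

```python
def fps_per_100w(fps, watts):
    """Heaven FPS delivered per 100W of load power draw."""
    return fps / watts * 100

# (Heaven 2.1 FPS at 1920x1200, Metro 2033 load watts) from our tables.
cards = {
    "GTX 570": (39.2, 388),
    "GTX 580": (44.5, 439),
    "GTX 480": (37.9, 439),
}

for name, (fps, watts) in cards.items():
    print(f"{name}: {fps_per_100w(fps, watts):.1f} FPS per 100W")
```

By this measure the GTX 570 essentially matches the GTX 580 (about 10.1 FPS per 100W each) while clearly beating the GTX 480 (about 8.6).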
The GeForce GTX 570 deserves a place in the spotlight alongside its sibling, the GTX 580. The Nvidia GeForce GTX 570 runs even quieter than the GTX 580, while also maintaining excellent power consumption and keeping heat output within reasonable limits. The new and improved GF110 architecture not only helped the GTX 580, but also the GTX 570, making it possible to get an average of 25% better performance than the GTX 470 offered. The GeForce GTX 570 was also able to keep up with, and in most cases outperform, the GeForce GTX 480. Because the GF100 chips, which include the GTX 480, will no longer be sold to manufacturers, the GTX 480 will be discontinued. For those looking to get another GTX 480 for SLI, now is the time to do it, because these cards will run out of stock quickly. This should not be bad news, however, because users can upgrade to a GTX 570 for about $80 less than a GTX 480. With a launch price of $349, the Nvidia GeForce GTX 570 is not a card people will want to miss.
For those thinking about going with two factory-overclocked GTX 460s in SLI, bear in mind that those cards run slightly louder than the GTX 570, while also using quite a bit more power than a single GTX 570. And for those looking for an affordable 3D gaming solution that maintains performance, a quiet environment, and a lower electricity bill, two GTX 570s would be an excellent option. Two GTX 570s should cost about $700, but this is still a better option than going with two GTX 580s, which would cost about $300 more.
So with all the new technology hitting the stands, I was slightly skeptical as to why this card deserved attention next to the GTX 580. Once I took a look at the numbers, though, I was converted. As someone looking for a new performance-level card to replace my aging GTS 250, I was pleasantly surprised by the results. This card essentially brings the performance of the new GF110 GPU to an affordable level for those who don’t want to run the most extreme systems. I was surprised by one design component: the star-shaped screws used to secure the cooler to the GPU (though the amount of frustration this caused our reviewer was slightly amusing). The unusual screws leave me wondering whether they will reduce the card’s compatibility with aftermarket coolers. Overall though, this is a card I am considering buying.