Breaking News

ASUS ENGTX580 Voltage Tweak – SLI

ASUS is one of the few manufacturers that had factory-overclocked video cards ready even during a hard launch like that of the GTX 580. ASUS' ENGTX580 (GTX 580) Voltage Tweak edition video card is clocked slightly higher than the reference GTX 580.

Introduction

The launch of the Nvidia GeForce GTX 580 came very fast, and while the card performed exceptionally well overall, some people were disappointed by its SLI performance. In some cases GTX 480s in SLI performed better than GTX 580s in SLI. Some of these problems were caused by the hard launch and the premature drivers that Nvidia released, but is the same still true after three weeks? When we first reviewed the Nvidia GTX 580, we were using the ForceWare 262.99 drivers, but Nvidia recently came out with the ForceWare 263.09 drivers, which could bring a performance increase over the older 262.99 driver. We'll go into more detail about that in the following pages.

When the GTX 580 launched, most manufacturers were not able to come out with their own overclocked versions of the GTX 580. ASUS was one of the few to have a factory-overclocked card at launch: the ASUS ENGTX580 Voltage Tweak. It is a bit ridiculous that the core speed on this card has only been raised from 772MHz to 782MHz, yet ASUS claims it produces about 50% more speed, higher performance, and greater satisfaction. We'll put that statement to the test in our overclocking section, and we'll also have numbers for several benchmarks.


For those that missed the launch of the GTX 580, there was plenty of information going around about the new GF110 core improvements and the new vapor chamber cooler design. ASUS follows the same design, which proved very successful on the Nvidia reference cards. With stock cooling, we were able to overclock the Nvidia GTX 580 from 772MHz to a stunning 933MHz on the GPU, and to 2550MHz on the memory, about 542MHz higher than the stock memory frequency. With the ASUS GTX 580, we are going to try our best to reach stable clocks with higher performance than what we achieved with the reference card. However, because ASUS uses the same reference design, we doubt there will be much of a difference between the two cards.

To summarize the GF110 chip, Nvidia raised the core count from 480 to 512 CUDA cores. The core and memory frequencies have been slightly raised as well, and new PolyMorph and Raster Engines have been added to help tessellation rendering and speed up the conversion of polygons to pixel fragments. We'll have a fuller GF110 recap on the next page.

The GF110 Architecture – Improved / Optimized FERMI

As we mentioned on the previous page, the GTX 580 has gone through a lot of architectural enhancements, and the two major changes were the improved FP16 texture filtering, which helps in texture-intensive applications, and the new tile formats that improve Z-cull efficiency. The chart below from Nvidia shows how the architectural enhancements improved performance over the GTX 480, along with the extra performance granted by the faster core and memory clocks and the extra 32 cores that were unlocked on the GTX 580, making it a true 512 CUDA core GPU.

There were earlier Nvidia GTX 480 samples with 512 cores, but their power consumption was also much higher. With the optimized GF110 chip, the GTX 580 can maintain the same power efficiency as the GTX 480 and still gain performance.

With the new GF110 chip, PolyMorph and Raster Engines have been added to help with tessellation. While the PolyMorph Engines improve tessellation performance in games, the extra Raster Engine speeds up the conversion of polygons to pixel fragments. With 16 PolyMorph Engines and 512 CUDA cores, the GTX 580 is able to achieve a stunning 2 billion triangles per second. That is a tremendous number of polygons, something we would previously only see in Hollywood blockbuster movies, and all of it can now be rendered in real-time on the GTX 580 GPU. Nvidia's new Endless City demo shows this off, rendering everything in real-time.

The Radeon HD series video cards still have a much harder time with tessellation-based benchmarks, which means that when games start incorporating extensive tessellation into their geometry, Nvidia cards will have an advantage over their AMD counterparts. Some games already take advantage of tessellation, like H.A.W.X II (coming out on 11/12/10), and the Unigine Heaven 2.1 benchmark also tests tessellation capabilities. While the visual improvement from tessellation is very limited at the moment, we believe tessellation will be taken much further in the future, making it possible to render characters, terrain, and objects far more believable than they are now.

For the GF110 design, Nvidia re-engineered the previous GF100 down to the transistor level; every block of the GPU had to be re-evaluated. To achieve higher performance with lower power consumption, Nvidia modified a very large percentage of the transistors on the chip, using lower-leakage transistors on less timing-sensitive processing paths and higher-speed transistors on the more critical ones. This is how Nvidia was able to enable the extra 32 cores, an additional SM, in the final Fermi architecture.

To compare the power consumption of a GTX 480 to that of a GTX 580, we overclocked our Galaxy GTX 480 as far as we could, trying to match the GTX 580's performance. While it was difficult to get there, we came within 2 FPS of the GTX 580. The shocking part was that the overclocked GTX 480 was drawing well over 100W more power. The performance-to-power-consumption ratio is definitely improved on the GTX 580's GF110 chip. The following chart compares the GTX 480 with the GTX 580 in overall performance per watt, showing that the GTX 580 can perform over 35% better than the GTX 480 in 3DMark Vantage.

For many of Nvidia's previous video cards, the GPU's thermal protection meant that the GPU would be downclocked at extreme temperatures, protecting the card from unwanted damage. However, with stress applications such as FurMark, MSI Kombustor, and OCCT, the latest video cards can reach dangerously high currents, potentially damaging components on the card. Nvidia integrated a new power monitoring feature into the GTX 580, which dynamically reduces performance in certain stress applications if power draw exceeds the card's specifications. Dedicated hardware circuitry monitors the current and voltage on each of the 12V rails in real time: the 6-pin connector, the 8-pin connector, and the PCI-Express edge connector.
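To make the monitoring logic concrete, here is a hypothetical Python sketch of per-rail sampling and a power-limit check. The rail names, sampling model, and function names are our illustration, not Nvidia's actual hardware logic.

```python
# Hypothetical sketch of per-rail power monitoring; all names and the
# sampling model are our illustration, not Nvidia's implementation.

def board_power(readings):
    """Sum power over the 12V rails from (volts, amps) samples."""
    return sum(volts * amps for volts, amps in readings.values())

def throttle_needed(readings, limit_watts=244.0):
    # Performance is reduced only when sampled board power
    # exceeds the card's specification (244W TDP for the GTX 580)
    return board_power(readings) > limit_watts

# One sample across the 6-pin, 8-pin, and PCI-Express edge connector rails
samples = {
    "6-pin": (12.0, 5.5),       # 66 W
    "8-pin": (12.0, 11.0),      # 132 W
    "pcie-slot": (12.0, 5.0),   # 60 W
}
print(board_power(samples))     # 258.0 -> throttle_needed(samples) is True
```

In a real game the sampled power stays below the limit, which is why the protection only kicks in under stress tests like FurMark and OCCT.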

Cooler Design

Nvidia made improvements to the GTX 580 based on consumer feedback about the GTX 480, and the thermal characteristics of the GF110 chip are also much better than those of the GF100. We'll go into more detail about the GTX 580 reference card's cooling solution on the following pages. What we see on this chart is that the GTX 480 is roughly 9-10 dBA louder than the GTX 580. A human generally perceives each 10 dBA increase as a doubling of loudness, so the GTX 580 runs much quieter than any high-end card Nvidia has released in the past few years.
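The 10 dBA rule of thumb translates into a simple formula: perceived loudness roughly doubles for every 10 dBA. A quick Python check (the function name is ours):

```python
def loudness_ratio(delta_dba):
    """Approximate perceived loudness ratio for a dBA difference,
    using the rule of thumb that +10 dBA sounds about twice as loud."""
    return 2.0 ** (delta_dba / 10.0)

# GTX 480 vs. GTX 580: roughly 9-10 dBA apart
print(loudness_ratio(10.0))            # 2.0  (twice as loud)
print(round(loudness_ratio(9.0), 2))   # 1.87
```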

Based on the tests in our labs, the GTX 580 does indeed run very quietly under high loads. We tested the card's thermal and acoustic improvements, and in our Silverstone TJ-10 chassis with some acoustic dampening on each side panel, the GTX 580 was totally inaudible during gaming; the other fans in the system were louder than the GTX 580. When we ran FurMark, the fan did speed up, but FurMark is not a real-life workload: it pulls more power out of the card and generates more heat than a real application would. Also, if we push the fan on the GTX 580 to 100%, we can definitely hear it loud and clear. During our testing period, we played Metro 2033 for about an hour in a closed chassis with no side ventilation, and the fan speed only reached 66%, which kept the gaming environment very quiet.

The new cooling solution on the GTX 580 uses a special heatsink design built around what is called a vapor chamber. Think of the vapor chamber as a heatpipe solution, but instead of contacting the heatsink fins only in certain areas, the vapor chamber makes 100% contact with every fin of the heatsink. This helps tremendously by spreading the heat across the entire heatsink block.

The GTX 580 also has a new adaptive GPU fan control, and the card is designed for great cooling potential in SLI setups. The fan has been redesigned to generate a lower pitch and tone, which allows for lower acoustic noise. The back of the cover is designed to route the air towards the rear bracket, improving SLI temperature performance.

The vapor chamber is a sealed, fluid-filled chamber with thin copper walls. When the heatsink sits on the GPU, heat from the GPU quickly boils the liquid inside the chamber, and it evaporates. The hot vapor spreads across the top of the chamber, transferring its heat to the heatsink fins, where it condenses; the cooled liquid then returns to the bottom of the chamber, and the whole process restarts. The heatsink fins themselves are cooled by the air being pushed through them.
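To get a feel for why evaporation moves heat so effectively, here is a toy calculation. It is purely our illustration, assuming a water-based working fluid with a latent heat of vaporization of about 2260 J/g (Nvidia does not specify the actual fluid):

```python
# Toy model: mass of working fluid the vapor chamber must cycle per
# second to carry away a given heat load. Assumes a water-based fluid
# (latent heat of vaporization ~2260 J/g); the real fluid is unspecified.
LATENT_HEAT_J_PER_G = 2260.0

def fluid_cycled_g_per_s(heat_watts):
    return heat_watts / LATENT_HEAT_J_PER_G

# At the GTX 580's 244W TDP, only about a tenth of a gram of fluid
# needs to evaporate and condense each second:
print(round(fluid_cycled_g_per_s(244.0), 3))  # 0.108
```

Because each gram of evaporating fluid absorbs so much energy, a tiny amount of circulating liquid can spread the GPU's entire heat output across the chamber.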

Continue onto the next page, where we examine the Nvidia GeForce GTX 580 in more detail.

Specifications & Features

The difference between the older GTX 480 (GF100) and the new GTX 580 (GF110) becomes visible once we take a look at the specs for the GTX 580. The number of CUDA cores has been raised from 480 to 512, and the Graphics Clock, Processor Clock, and Memory Clock frequencies have all been increased, letting the GTX 580 pull ahead of the GTX 480 quite a bit. The ASUS ENGTX580 Voltage Tweak version of the reference GTX 580 comes with the GPU core slightly overclocked from the standard 772MHz to 782MHz. This overclock is frankly laughable, because we highly doubt it will give even 1 FPS higher performance in today's latest games, like Metro 2033. Between the reference GTX 480 and GTX 580, the Graphics Clock has jumped from 700MHz to 772MHz, and the Processor Clock from 1401MHz to 1544MHz. The memory clock has gone from 1848MHz to 2004MHz, for an effective data rate of 4008MHz. Of course these numbers can be overclocked, and that's exactly what we are going to do with the ASUS ENGTX580 Voltage Tweak video card.
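The relationship between these memory numbers is easy to verify: GDDR5 transfers data four times per base clock, and multiplying the effective rate by the 384-bit bus width gives the card's memory bandwidth. A quick sketch (the function names are ours):

```python
def gddr5_effective_mhz(base_mhz):
    # GDDR5 is quad-pumped: 4 data transfers per base memory clock
    return base_mhz * 4

def bandwidth_gb_per_s(effective_mhz, bus_width_bits):
    # transfers per second * bytes moved per transfer
    return effective_mhz * 1e6 * (bus_width_bits / 8) / 1e9

eff = gddr5_effective_mhz(1002)        # 4008 MHz effective on the GTX 580
print(bandwidth_gb_per_s(eff, 384))    # 192.384 GB/s
```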

Specifications

Graphics Engine Nvidia GeForce GTX 580 (ASUS ENGTX580 Voltage Tweak)
Bus Standard PCI Express 2.0
Graphics Processing Clusters 4
Streaming Multiprocessors 16
CUDA Cores 512
Texture Units 64
ROP Units 48
Engine Clock (Graphics Clock) 782 MHz (772 MHz Standard)
Shader Clock 1564 MHz
Memory Clock 4008 MHz ( 1002 MHz GDDR5 )
Total Video Memory 1536MB GDDR5
RAMDAC 400 MHz
Memory Interface 384-bit
Texture Filtering Rate (Bilinear) 49.4 GigaTexels/sec
Fabrication Process 40nm
Transistor Count 3 Billion
Power Connectors 1 x 6-pin, 1 x 8-pin
Recommended Power Supply 600 Watts
Thermal Design Power (TDP) 244 Watts
Thermal Threshold 97C
Resolution D-Sub Max Resolution : 2048×1536
DVI Max Resolution : 2560×1600
Interface DVI Output : Yes x 2 (DVI-I)
HDMI Output : Yes x 1 (via Mini HDMI to HDMI adaptor x 1)
HDCP Support : Yes
Accessories 1 x Power cable
1 x Mini HDMI to HDMI adaptor
Software *Please follow the driver setup instruction to download SmartDoctor application on ASUS website prior to use
Dimensions 11" x 5"
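The texture filtering figure in the table follows directly from the clocks: each of the 64 texture units filters one bilinear texel per core clock. A quick check (the helper function is ours):

```python
def bilinear_fillrate_gtexels(core_mhz, texture_units):
    # one bilinear texel per texture unit per core clock
    return core_mhz * texture_units / 1000.0

# The spec sheet's 49.4 GigaTexels/sec matches the reference 772MHz
# clock; ASUS's 782MHz core nudges it to about 50.0
print(round(bilinear_fillrate_gtexels(772, 64), 1))  # 49.4
print(round(bilinear_fillrate_gtexels(782, 64), 1))  # 50.0
```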

As a result of the GPU redesign, the GeForce GTX 580 also has a lower temperature threshold than its predecessor. Whereas the 480’s GF100 chip had a temperature threshold of 105C, the 580’s GF110 has a threshold of 97C.

While the TDP of the GTX 580 has dropped to 244W (compared to the GF100's 250W), the actual power consumption that we measured in Metro 2033 has not changed at all. We also still see the standard 6-pin and 8-pin power connectors on the GTX 580, so the main power design has not changed much. It is important to understand that the temperature, and particularly the power consumption, were measured in real-life applications rather than stress tests such as FurMark or OCCT. FurMark and OCCT usually push cards way past their standard specs and, depending on the settings, can report values that the card would never reach in real-life situations.

The power protection circuits on the GTX 580 also prevent applications like FurMark and OCCT from drawing dangerous currents that could damage the video card in the long run. It has been reported that, because of this, the power draw and temperatures users see in FurMark and OCCT are significantly exaggerated compared to what a real-life application, such as a video game, would produce. On the other hand, for those looking to push their cards even further than they were designed for, the latest build of GPU-Z allows the GTX 580 to bypass the current restrictions that Nvidia set up.

The length of the card has not changed at all from the GTX 480, and while the official specs state that the card is designed for up to 3-way SLI, 4-way SLI motherboards are capable of running four Nvidia GeForce GTX 580s in 4-way SLI.

Features

Ready for Mind-bending Gaming Actions!

ASUS Exclusive Innovation

Voltage Tweak

Full throttle overclocking with exclusive ASUS Voltage Tweak via Smart Doctor – boosting 50%* more speed, performance and satisfaction!

Overclocked!

Factory overclocked to perform at 782MHz, higher than stock performance, resulting in higher frame rates in games.

ASUS Smart Doctor

Your intelligent hardware protection and powerful overclocking tool

ASUS Gamer OSD

Real-time overclocking, benchmarking and video capturing in any PC game!

Splendid™ Video Intelligence Technology

Optimizes colors in various entertainment scenarios with five special modes — standard, game, scenery, night view and theater.

   

Graphics GPU Features

Powered by NVIDIA® GeForce GTX 580
GeForce CUDA™ Technology

Unlocks the power of the GPU's processor cores to accelerate the most demanding system tasks

SLI Support

Multi-GPU technology for extreme performance mode

NVIDIA PhysX™ ready

Dynamic visual effects like blazing explosions, reactive debris, realistic water, and lifelike characters

NVIDIA® 3D Vision™

Immerse yourself in the 3D gaming world

Microsoft® DirectX® 11 Support

Bring new levels of visual realism to gaming on the PC and get top-notch performance

Microsoft® Windows® 7 Support

Enable PC users to enjoy an advanced computing experience and to do more with their PC

   

I/O Ports


What is CUDA?

CUDA is NVIDIA’s parallel computing architecture that enables dramatic increases in computing performance by harnessing the power of the GPU (graphics processing unit).

With millions of CUDA-enabled GPUs sold to date, software developers, scientists and researchers are finding broad-ranging uses for CUDA, including image and video processing, computational biology and chemistry, fluid dynamics simulation, CT image reconstruction, seismic analysis, ray tracing and much more.

Background

Computing is evolving from “central processing” on the CPU to “co-processing” on the CPU and GPU. To enable this new computing paradigm, NVIDIA invented the CUDA parallel computing architecture that is now shipping in GeForce, ION, Quadro, and Tesla GPUs, representing a significant installed base for application developers.

In the consumer market, nearly every major consumer video application has been, or will soon be, accelerated by CUDA, including products from Elemental Technologies, MotionDSP and LoiLo, Inc.

CUDA has been enthusiastically received in the area of scientific research. For example, CUDA now accelerates AMBER, a molecular dynamics simulation program used by more than 60,000 researchers in academia and pharmaceutical companies worldwide to accelerate new drug discovery.

In the financial market, Numerix and CompatibL announced CUDA support for a new counterparty risk application and achieved an 18X speedup. Numerix is used by nearly 400 financial institutions.

An indicator of CUDA adoption is the ramp of the Tesla GPU for GPU computing. There are now more than 700 GPU clusters installed around the world at Fortune 500 companies ranging from Schlumberger and Chevron in the energy sector to BNP Paribas in banking.

And with the recent launches of Microsoft Windows 7 and Apple Snow Leopard, GPU computing is going mainstream. In these new operating systems, the GPU will not only be the graphics processor, but also a general purpose parallel processor accessible to any application.

For information on CUDA and OpenCL, click here.

For information on CUDA and DirectX, click here.

For information on CUDA and Fortran, click here.

 

 

PhysX

Some Games that use PhysX (Not all inclusive)

Batman: Arkham Asylum
Watch Arkham Asylum come to life with NVIDIA® PhysX™ technology! You’ll experience ultra-realistic effects such as pillars, tile, and statues that dynamically destruct with visual explosiveness. Debris and paper react to the environment and the force created as characters battle each other; smoke and fog will react and flow naturally to character movement. Immerse yourself in the realism of Batman Arkham Asylum with NVIDIA PhysX technology.
Darkest of Days
Darkest of Days is a historically based FPS where gamers will travel back and forth through time to experience history’s “darkest days”. The player uses period and future weapons as they fight their way through some of the epic battles in history. The time travel aspects of the game, lead the player on missions where they at times need to fight on both sides of a war.
Sacred 2 – Fallen Angel
In Sacred 2 – Fallen Angel, you assume the role of a character and delve into a thrilling story full of side quests and secrets that you will have to unravel. Breathtaking combat arts and sophisticated spells are waiting to be learned. A multitude of weapons and items will be available, and you will choose which of your character's attributes you will enhance with these items in order to create a unique and distinct hero.
Dark Void
Dark Void is a sci-fi action-adventure game that combines an adrenaline-fuelled blend of aerial and ground-pounding combat. Set in a parallel universe called “The Void,” players take on the role of Will, a pilot dropped into incredible circumstances within the mysterious Void. This unlikely hero soon finds himself swept into a desperate struggle for survival.
Cryostasis
Cryostasis puts you in 1968 at the Arctic Circle, the Russian North Pole. The main character, meteorologist Alexander Nesterov, is incidentally caught inside the old nuclear icebreaker North Wind, frozen in the ice desert for decades. Nesterov's mission is to investigate the mystery of the ship captain's death – or, as it may well be, a murder.
Mirror's Edge
In a city where information is heavily monitored, agile couriers called Runners transport sensitive data away from prying eyes. In this seemingly utopian paradise of Mirror’s Edge, a crime has been committed and now you are being hunted.

What is NVIDIA PhysX Technology?
NVIDIA® PhysX® is a powerful physics engine enabling real-time physics in leading edge PC games. PhysX software is widely adopted by over 150 games and is used by more than 10,000 developers. PhysX is optimized for hardware acceleration by massively parallel processors. GeForce GPUs with PhysX provide an exponential increase in physics processing power taking gaming physics to the next level.

What is physics for gaming and why is it important?
Physics is the next big thing in gaming. It’s all about how objects in your game move, interact, and react to the environment around them. Without physics in many of today’s games, objects just don’t seem to act the way you’d want or expect them to in real life. Currently, most of the action is limited to pre-scripted or ‘canned’ animations triggered by in-game events like a gunshot striking a wall. Even the most powerful weapons can leave little more than a smudge on the thinnest of walls; and every opponent you take out, falls in the same pre-determined fashion. Players are left with a game that looks fine, but is missing the sense of realism necessary to make the experience truly immersive.

With NVIDIA PhysX technology, game worlds literally come to life: walls can be torn down, glass can be shattered, trees bend in the wind, and water flows with body and force. NVIDIA GeForce GPUs with PhysX deliver the computing horsepower necessary to enable true, advanced physics in the next generation of game titles making canned animation effects a thing of the past.

Which NVIDIA GeForce GPUs support PhysX?
The minimum requirement to support GPU-accelerated PhysX is a GeForce 8-series or later GPU with a minimum of 32 cores and a minimum of 256MB dedicated graphics memory. However, each PhysX application has its own GPU and memory recommendations. In general, 512MB of graphics memory is recommended unless you have a GPU that is dedicated to PhysX.
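Those requirements can be summarized in a couple of lines. This hypothetical helper mirrors the stated minimums and recommendation; the function names are ours and not part of any NVIDIA API:

```python
# Hypothetical capability check mirroring the stated PhysX requirements;
# these helpers are illustrative only, not part of any NVIDIA API.
def meets_physx_minimum(cuda_cores, vram_mb):
    # GeForce 8-series or later with >= 32 cores and >= 256MB memory
    return cuda_cores >= 32 and vram_mb >= 256

def meets_physx_recommended(vram_mb, dedicated_physx_gpu=False):
    # 512MB of graphics memory is recommended unless the GPU
    # is dedicated entirely to PhysX
    return dedicated_physx_gpu or vram_mb >= 512

print(meets_physx_minimum(512, 1536))   # True  (e.g. a GTX 580)
print(meets_physx_recommended(256))     # False unless dedicated to PhysX
```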

How does PhysX work with SLI and multi-GPU configurations?
When two, three, or four matched GPUs are working in SLI, PhysX runs on one GPU, while graphics rendering runs on all GPUs. The NVIDIA drivers optimize the available resources across all GPUs to balance PhysX computation and graphics rendering. Therefore users can expect much higher frame rates and a better overall experience with SLI.

A new configuration that’s now possible with PhysX is 2 non-matched (heterogeneous) GPUs. In this configuration, one GPU renders graphics (typically the more powerful GPU) while the second GPU is completely dedicated to PhysX. By offloading PhysX to a dedicated GPU, users will experience smoother gaming.

Finally, we can put the above two configurations into one PC: SLI plus a dedicated PhysX GPU. As in the two heterogeneous GPU case, graphics rendering takes place on the GPUs connected in SLI while the non-matched GPU is dedicated to PhysX computation.

Why is a GPU good for physics processing?
The multithreaded PhysX engine was designed specifically for hardware acceleration in massively parallel environments. GPUs are the natural place to compute physics calculations because, like graphics, physics processing is driven by thousands of parallel computations. Today, NVIDIA's GPUs have as many as 512 cores, so they are well-suited to take advantage of PhysX software. NVIDIA is committed to making the gaming experience exciting, dynamic, and vivid. The combination of graphics and physics impacts the way a virtual world looks and behaves.

 

Direct Compute

DirectCompute Support on NVIDIA’s CUDA Architecture GPUs

Microsoft's DirectCompute is a new GPU computing API that runs on NVIDIA's current CUDA architecture under both Windows Vista and Windows 7. DirectCompute is supported on current DX10-class and DX11-class GPUs. It allows developers to harness the massive parallel computing power of NVIDIA GPUs to create compelling computing applications in consumer and professional markets.

As part of the DirectCompute presentation at the Game Developers Conference (GDC) in March 2009 in San Francisco, CA, NVIDIA showed three demos running on a then-available NVIDIA GeForce GTX 280 GPU (see links below).

As a processor company, NVIDIA enthusiastically supports all languages and APIs that enable developers to access the parallel processing power of the GPU. In addition to DirectCompute and NVIDIA's CUDA C extensions, other programming models are available, including OpenCL™. A Fortran language solution is also in development and is available in early access from The Portland Group.

NVIDIA has a long history of embracing and supporting standards, since a wider choice of languages improves the number and scope of applications that can exploit parallel computing on the GPU. With C and Fortran language support here today and OpenCL and DirectCompute available this year, GPU computing is now mainstream. NVIDIA is the only processor company to offer this breadth of development environments for the GPU.

 

OpenCL

 

 

OpenCL (Open Computing Language) is a new cross-vendor standard for heterogeneous computing that runs on the CUDA architecture. Using OpenCL, developers will be able to harness the massive parallel computing power of NVIDIA GPUs to create compelling computing applications. As the OpenCL standard matures and is supported on processors from other vendors, NVIDIA will continue to provide the drivers, tools and training resources developers need to create GPU-accelerated applications.

In partnership with NVIDIA, OpenCL was submitted to the Khronos Group by Apple in the summer of 2008 with the goal of forging a cross platform environment for general purpose computing on GPUs. NVIDIA has chaired the industry working group that defines the OpenCL standard since its inception and shipped the world’s first conformant GPU implementation for both Windows and Linux in June 2009.

OpenCL for GPU Nbody Demo

NVIDIA has been delivering OpenCL support in end-user production drivers since October 2009, supporting OpenCL on all 180,000,000+ CUDA architecture GPUs shipped since 2006.

NVIDIA’s Industry-leading support for OpenCL:

2010

March – NVIDIA releases updated R195 drivers with the Khronos-approved ICD, enabling applications to use OpenCL on NVIDIA GPUs and other processors at the same time

January – NVIDIA releases updated R195 drivers, supporting developer-requested OpenCL extensions for Direct3D9/10/11 buffer sharing and loop unrolling 

January – Khronos Group ratifies the ICD specification contributed by NVIDIA, enabling applications to use multiple OpenCL implementations concurrently 

2009 

November – NVIDIA releases R195 drivers with support for optional features in the OpenCL v1.0 specification such as double precision math operations and OpenGL buffer sharing 

October – NVIDIA hosts the GPU Technology Conference, providing OpenCL training for an additional 500+ developers 

September – NVIDIA completes OpenCL training for over 1000 developers via free webinars 

September – NVIDIA begins shipping OpenCL 1.0 conformant support in all end user (public) driver packages for Windows and Linux 

September – NVIDIA releases the OpenCL Visual Profiler, the industry’s first hardware performance profiling tool for OpenCL applications 

July – NVIDIA hosts first “Introduction to GPU Computing and OpenCL” and “Best Practices for OpenCL Programming, Advanced” webinars for developers 

July – NVIDIA releases the NVIDIA OpenCL Best Practices Guide, packed with optimization techniques and guidelines for achieving fast, accurate results with OpenCL 

July – NVIDIA contributes source code and specification for an Installable Client Driver (ICD) to the Khronos OpenCL Working Group, with the goal of enabling applications to use multiple OpenCL implementations concurrently on GPUs, CPUs and other types of processors

June – NVIDIA releases the industry's first OpenCL 1.0 conformant drivers and developer SDK

April – NVIDIA releases industry first OpenCL 1.0 GPU drivers for Windows and Linux, accompanied by the 100+ page NVIDIA OpenCL Programming Guide, an OpenCL JumpStart Guide showing developers how to port existing code from CUDA C to OpenCL, and OpenCL developer forums

2008

December – NVIDIA shows off the world's first OpenCL GPU demonstration, running on an NVIDIA laptop GPU at SIGGRAPH Asia

June – Apple submits the OpenCL proposal to the Khronos Group; NVIDIA volunteers to chair the OpenCL Working Group as it is formed

2007 

December – NVIDIA Tesla product wins PC Magazine Technical Excellence Award 

June – NVIDIA launches first Tesla C870, the first GPU designed for High Performance Computing 

May – NVIDIA releases first CUDA architecture GPUs capable of running OpenCL in laptops & workstations

2006 

November – NVIDIA releases the first CUDA architecture GPU capable of running OpenCL

The ASUS ENGTX580

Let's start off with the packaging the ENGTX580 came in. A lot of people seem to be interested in not just the product, but the packaging as well, so we went ahead and took several pictures of it. We found it a bit boring that ASUS is still using the same box design with the armored horse and warrior. We have an ASUS GT240 video card lying around in its box, and it has the same design. Of course the specifications and features are updated for each video card, but we would have liked to see a custom design just for the GTX 580; identical packaging can make it harder for a buyer to decide between two cards when one is cheaper than the other.

When we removed the cover of the ENGTX580 box, the overall construction and sturdiness impressed us. Inside the box there were three sections. The larger section on the left is the accessory box, which includes the accessories shown below; we'll go into more detail about those next. The section right next to the accessory box was actually empty, so we're not quite sure why it is there.

Below the accessory box sits the ASUS ENGTX580 Voltage Tweak video card itself, protected by several inches of thick foam from damage during shipping or handling. The video card is wrapped in an anti-static bag to prevent static discharge during shipment.
 
The second picture shows the accessories that come with the ENGTX580. We removed the Driver CD from its paper cover to take a better picture of it. Starting from the left, there is a Mini HDMI to HDMI adaptor. Next to it is the well-known 8-pin to 2x 6-pin power adapter, which comes in handy if the user's power supply lacks a native 8-pin PCI-E connector: the user plugs two 6-pin PCI-E connectors into the adapter, and it serves as a single 8-pin PCI-E power cable. The logic is simple: one 6-pin PCI-E connector can provide up to 75W, while a single 8-pin PCI-E connector can provide up to 150W, so two 6-pin connectors provide 2x75W, the same amount of power an 8-pin connector would. Finally, there is a quick SpeedSetup guide covering GPU installation and other information, and the Driver CD.

The card uses the same design as the Nvidia reference card. At first glance it looks very similar to the GTX 480, but it is missing the heatpipes we saw on the GTX 480. As we mentioned earlier, Nvidia totally redesigned the cooling on the new GF110 lineup, and like other companies, ASUS is currently only allowed to work with the reference design. In the coming weeks and months we will see custom designs from each company, and we are eager to see how they modify the vapor chamber cooling for their own products. With the vapor chamber heatsink there is no more need for heatpipes; integrating heatpipes alongside it is possible, and it would certainly be interesting to see the performance of such a solution, but for now the vapor chamber is fine. It transfers heat to the heatsink fins effectively, so the fan can easily exhaust the heat out the back of the system. While overall the card has a very clean design, we are quite disappointed that the bracket still uses the dense ventilation hole pattern. A less dense bracket design, like the EVGA High Flow Bracket, would let heat escape more easily and reduce the turbulence caused by air being pushed against the dense fins of the ventilation holes. Our previous lab tests showed that the EVGA High Flow design dropped GTX 480 temperatures by around 3 degrees Celsius. Thankfully, because the GeForce GTX 580 has the same connectors as the GTX 480, the EVGA High Flow bracket can easily be used on the GTX 580 as well. Though ASUS has to follow Nvidia's guidelines on reference cards, we're hoping that their future custom designs will implement the EVGA High Flow design or something similar.
 
One difference to note is that this card does not have the fine graphical detail on the cover of the cooler like the Nvidia card had. We can also see the incorporation of the ASUS logo on the front and top of the video card, and the removal of the Nvidia logo from the fan.

Click Image For a Larger One
 
We also noticed that the fan is larger on the GTX 580 than on the GTX 480. We believe this is one of the new changes to the fan design that Nvidia was talking about. Usually with larger fans, it is easier to push more air through the card without using higher RPMs that could cause motor noise. The actual full card length with the back expansion slot bracket is roughly 11 inches, but the PCB is exactly 10.5 inches long. This means that to fit the GTX 580 into a case, users will need at least 10.6 inches of free space, but we always recommend having a case that has a bit more room for better air circulation and an easier fit. The height of the card matches standard video card height specifications, and since there is no heatpipe solution on the GTX 580, the overall card size with the cooler does not exceed the standard 4.5 inch height. The cover for the GTX 580 has also been redesigned to allow for better air circulation through the heatsink area of the card, further cooling the GPU.

The PCB design of the GTX 580 is essentially the same as the reference GTX 480 design. Of course we can see new components incorporated on the PCB, and also some components removed. One of the most noticeable changes to the board is that the ventilation hole on the PCB is absent. The newly redesigned fan and heatsink design is supposed to take into consideration the fact that some users will use SLI systems, so the GTX 580 has been fine tuned to make sure there won’t be any ventilation problems even without the ventilation hole that we saw on the GTX 480.

Also, as expected, the GTX 580 has 6-pin and 8-pin power connectors, which supply a maximum of 75W + 150W = 225W on top of the 75W the PCI-E slot itself provides, and two SLI connectors enable the user to run up to a 4-way SLI setup with the appropriate motherboard. We also noticed that instead of using latches to secure the plastic cover, as was done on the GTX 480, Nvidia decided to use screws to tighten the top cover of the GTX 580.

From this point on, we are using the Nvidia GTX 580 reference card pictures. However, the actual design of the cooler and the PCB with the components has not changed at all on the ASUS ENGTX580 Voltage Tweak video card.

 

Click Image For a Larger One
 
This is Nvidia’s best cooling design to date. The new vapor chamber cooling is designed to provide extra cooling to make sure the GPU does not overheat. The new GF110 GPU is also designed to withstand up to 97C temperatures before it is down-throttled. The GF100 was designed to withstand up to 105C before down throttling started. Just from looking at the reference board, we can also see that the GTX 580 does not have the capacitors in the middle of the card that we saw on the Galaxy GTX 480. Instead, they have been moved to the far edge of the PCB. Nvidia made improvements to the decoupling on the board to achieve higher clocks at a given voltage, which helps increase performance in a fixed power envelope.

 

Click Image For a Larger One
 
The vapor chamber is finally revealed. As we can see, it is very thin, but it is enough to allow the liquid to evaporate and cool down. It seems as though the thermal paste on the GPU is also better quality, to ensure that there is excellent contact between the GPU and the cooler’s base. The vapor chamber’s base is a copper base, with a smooth surface. While the base of the cooler is not mirror-finished, it is smooth enough to provide excellent contact with the GPU, especially with high-quality thermal paste in between.
 
The PCB cover can also be removed from the video card, which shows us that the memory, MOSFETs, and other components are also cooled through the cover of the video card. The cover is made of aluminum alloy, coated with black electrodeposit (this is designed to be electrically nonconductive). While the card was in operation, we could feel the cover transferring a fair amount of heat.

 

Click Image For a Larger One

This is the GF110 GPU. From first impressions, it looks like aftermarket coolers should easily be compatible with the GTX 580, as the PCB design is very similar to that of the GTX 480. We can also see that Nvidia once again uses Samsung memory chips on the PCB. The GTX 580 pairs six 64-bit memory controllers (384-bit in total) with 1536MB of GDDR5 memory.

Taking a look at an SLI setup

Click Image For a Larger One
 
These are some teaser pictures for those looking into running two GTX 580s in SLI. In this review we will be testing the performance of the Nvidia GTX 580 and the ASUS ENGTX580 Voltage Tweak video card configured in SLI.

Testing Methodology

The OS we use is Windows 7 Pro 64bit with all patches and updates applied. We also use the latest drivers available for the motherboard and any devices attached to the computer. We do not disable background tasks or tweak the OS or system in any way. We turn off drive indexing and daily defragging. We also turn off Prefetch and Superfetch. This is not an attempt to produce bigger benchmark numbers. Drive indexing and defragging can interfere with testing and produce confusing numbers. If a test were to be run while a drive was being indexed or defragged, and then the same test was later run when these processes were off, the two results would be contradictory and erroneous. As we cannot control when defragging and indexing occur precisely enough to guarantee that they won’t interfere with testing, we opt to disable the features entirely.

Prefetch tries to predict what users will load the next time they boot the machine by caching the relevant files and storing them for later use. We want to learn how the program runs without any of the files being cached, and disabling it means we do not have to clear the prefetch cache before each test run to get accurate numbers. Lastly, we disable Superfetch. Superfetch loads often-used programs into memory, and it is one of the reasons that Windows Vista occupies so much memory: Vista fills the memory in an attempt to predict what users will load. Having one test run with files cached and another with the files un-cached would result in inaccurate numbers, and again, since we can’t control its timing precisely, we turn it off. Because these four features can potentially interfere with benchmarking and are out of our control, we disable them. We do not disable anything else.
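For readers who want to replicate this setup, three of the four features can be disabled from an elevated command prompt on Windows 7 roughly as sketched below. This is only a sketch of one common approach: SysMain (Superfetch) and WSearch (indexing) are the standard Windows 7 service names, but verify them on your own system before making changes.

```shell
:: Run from an elevated command prompt (Windows 7).
:: Stop and disable Superfetch (service name: SysMain).
sc stop SysMain
sc config SysMain start= disabled

:: Stop and disable drive indexing (Windows Search service).
sc stop WSearch
sc config WSearch start= disabled

:: Disable Prefetch via the registry (0 = disabled).
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters" /v EnablePrefetcher /t REG_DWORD /d 0 /f
```

The daily defrag schedule is turned off separately, through the Disk Defragmenter schedule dialog.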

We ran each test a total of three times and report the average of the three scores. Benchmark screenshots are of the median result. Anomalous results were discounted and the benchmarks rerun.
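The averaging described above is straightforward; as an illustration (the FPS values below are hypothetical, not from our results):

```python
# Three hypothetical benchmark runs; we report the average,
# and the screenshot shown is the median run.
runs = [42.1, 40.8, 41.5]

average = sum(runs) / len(runs)
median = sorted(runs)[len(runs) // 2]  # middle value of the three

print(round(average, 2))  # 41.47
print(median)             # 41.5
```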

Please note that due to new driver releases with performance improvements, we re-benched every card shown in the results section. The results here will differ from those in previous reviews due to the performance increases in the drivers.

Test Rig

Case: Silverstone Temjin TJ10
CPU: Intel Core i7 930 @ 3.8GHz
Motherboard: ASUS Rampage III Extreme ROG – LGA1366
RAM: OCZ DDR3-12800 1600MHz (8-8-8-24 1.65v) 12GB Triple-Channel Kit
CPU Cooler: Thermalright True Black 120 with 2x Zalman ZM-F3 FDB 120mm Fans
Hard Drives: 4x Seagate Cheetah 600GB 10K 6Gb/s; 2x Western Digital RE3 1TB 7200RPM 3Gb/s; 1x Zalman N Series 128GB SSD
Optical: ASUS DVD-Burner
GPUs Tested: ASUS GeForce ENGTX580 1536MB – Voltage Tweak; Nvidia GeForce GTX 580 1536MB; Galaxy GeForce GTX 480 1536MB; Palit GeForce GTX460 Sonic Platinum 1GB in SLI; ASUS Radeon HD6870; AMD Radeon HD5870
Case Fans: 2x Zalman ZM-F3 FDB 120mm (Top); 1x Zalman Shark’s Fin ZM-SF3 120mm (Back); 1x Silverstone 120mm (Front); 1x Zalman ZM-F3 FDB 120mm (Hard Drive Compartment); 1x Zalman ZM-F3 FDB 120mm (Side Ventilation for Video Cards and RAID Card SAS Controller)
Additional Cards: LSI 3ware SATA + SAS 9750-8i 6Gb/s RAID Card
PSU: Sapphire PURE 1250W Modular Power Supply
Mouse: Logitech G5
Keyboard: Logitech G15

Synthetic Benchmarks & Games

We will use the following applications to benchmark the performance of the ASUS GeForce ENGTX580 video card.

3DMark Vantage
Metro 2033
Stone Giant
Unigine Heaven v.2.1
Crysis
Crysis Warhead
Endless City
HAWX 2

Crysis v. 1.21

Crysis is the most highly anticipated game to hit the market in the last several years. Crysis is based on the CryENGINE™ 2 developed by Crytek. The CryENGINE™ 2 offers real time editing, bump mapping, dynamic lights, network system, integrated physics system, shaders, shadows, and a dynamic music system, just to name a few of the state-of-the-art features that are incorporated into Crysis. As one might expect with this number of features, the game is extremely demanding of system resources, especially the GPU. We expect Crysis to be a primary gaming benchmark for many years to come.

The Settings we use for benchmarking Crysis
 
 

The Nvidia GeForce GTX 580 shows phenomenal gaming performance in Crysis at the 1680×1050 resolution. Never before has a single GPU been so powerful or achieved such scores at stock clocks. It is safe to say that the GTX 580 performs at a level at which Crysis could easily be enjoyed without any lag even at higher resolutions like 1920×1200 with AA. The ASUS ENGTX580 pulls just a tiny bit ahead of the Nvidia GTX 580.

 

As we can see in the last 1920×1200 test with 2x AA, the actual game performance is still within excellent gaming range, and though the user might not get the smoothest gameplay, it should be smooth enough for the game to be easily playable. With the GTX 480 falling behind by roughly 6FPS, users will no doubt have a better gaming experience with the GTX 580. Unfortunately, the GeForce GTX 580 is not able to outperform the Palit GTX 460 Sonic Platinum cards in SLI at stock settings, but once overclocked, the ENGTX580 was able to run up to 5FPS faster than the two GeForce GTX 460 Sonic Platinums in SLI. That’s an amazing speed gain.

CRYSIS WARHEAD

Crysis Warhead is the much anticipated standalone expansion to Crysis, featuring an updated CryENGINE™ 2 with better optimization. It was one of the most anticipated titles of 2008.

The Settings we use for benchmarking Crysis Warhead
 

Crysis Warhead has been optimized to run smoother even on slower video cards. We can also see that SLI performance jumps quite high compared to the single GTX 580, something we did not see much of in the Crysis benchmarks. The ENGTX580 and the Nvidia GTX 580 show a tremendous boost over the GTX 460s in SLI, up to 24FPS faster. For those looking into 3D gaming, it looks like even upgrading from a GTX 460 SLI setup to a GTX 580 SLI setup would gain a user a lot of performance.

Even at 1920×1200 with 2x AA, the performance is still around 40FPS for the GTX 580, which guarantees very smooth gameplay, considering that the minimum FPS on the GTX 580 stays above 30FPS. With the GTX 480, the FPS drops to 25.26, which will make gameplay a bit difficult in action-filled scenes. Based on testing so far, we notice that the AMD counterparts fall behind considerably. This makes the Nvidia GTX 580 the world’s fastest single-GPU video card. But that’s not all: when the ASUS ENGTX580 was overclocked, it gained about 7FPS, which allowed it to beat two GTX 460s in SLI. It is quite something to see a single GPU beat two GPUs in a benchmark.

Unigine Heaven 2.1

Unigine Heaven is a benchmark program based on Unigine Corp’s latest engine, Unigine. The engine features DirectX 11, Hardware tessellation, DirectCompute, and Shader Model 5.0. All of these new technologies combined with the ability to run each card through the same exact test means this benchmark should be in our arsenal for a long time.

Unigine Heaven shows us exactly what the GTX 580 is capable of. Since tessellation has been improved on the 580, it is no surprise that it has a great advantage over the other video cards. The extra clock frequency and CUDA cores also add to the overall performance of the GTX 580. When we push all the video cards with extreme tessellation, we can easily see that the Fermi cards have a great advantage over the AMD cards. With more games being developed with heavy use of tessellation, the Nvidia cards will have an advantage over the competition. We can already see here that just by overclocking the GTX 580, performance went up by almost 10FPS under Extreme Tessellation. While we expected a big gain in FPS when we overclocked the card, we did not expect one as high as we see in the graphs.

Stone Giant

We used a 60-second Fraps run and recorded the Min/Avg/Max FPS rather than relying on the built-in utility for determining FPS. We started the benchmark, triggered Fraps, and let it run on stock settings for 60 seconds without making any adjustments or changing camera angles. We just let it run at the defaults and had Fraps record the FPS and log them to a file for us.
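Reducing such a log to Min/Avg/Max is simple to do yourself. The sketch below assumes a plain text log with one per-second FPS value per line (the sample values are made up for illustration):

```python
# Compute Min/Avg/Max FPS from a per-second FPS log,
# assumed to be a plain text file with one value per line.
def fps_stats(lines):
    values = [float(line) for line in lines if line.strip()]
    return min(values), sum(values) / len(values), max(values)

# Hypothetical 60-second run, abbreviated to five samples:
sample = ["55", "58", "61", "57", "59"]
lo, avg, hi = fps_stats(sample)
print(lo, avg, hi)  # 55.0 58.0 61.0
```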

Key features of the BitSquid Tech (PC version) include:

  • Highly parallel, data oriented design
  • Support for all new DX11 GPUs, including the NVIDIA GeForce GTX 400 Series and AMD Radeon 5000 series
  • Compute Shader 5 based depth of field effects
  • Dynamic level of detail through displacement map tessellation
  • Stereoscopic 3D support for NVIDIA 3dVision

“With advanced tessellation scenes, and high levels of geometry, Stone Giant will allow consumers to test the DX11-credentials of their new graphics cards,” said Tobias Persson, Founder and Senior Graphics Architect at BitSquid. “We believe that the great image fidelity seen in Stone Giant, made possible by the advanced features of DirectX 11, is something that we will come to expect in future games.”

“At Fatshark, we have been creating the art content seen in Stone Giant,” said Martin Wahlund, CEO of Fatshark. “It has been amazing to work with a bleeding edge engine, without the usual geometric limitations seen in current games”.

In the Stone Giant benchmark, the GTX 580 challenged the GeForce GTX 460s at both resolutions, and the two results are very close. With 3D on, and running two GTX 580s in SLI, the performance of Stone Giant is amazing. After seeing it several times already, I am still fascinated by the 3D quality we get in Stone Giant.

Endless City

Endless City is one of the demos Nvidia released with the GTX 580 to show off its tessellation performance. We used a 60-second Fraps run and recorded the Min/Avg/Max FPS rather than relying on the built-in utility for determining FPS. We started the benchmark, triggered Fraps, and let it run on stock settings for 60 seconds with AutoPilot ON. We just let it run at the default resolution (1920×1200) and had Fraps record the FPS and log them to a file for us.

The GTX 580 is capable of rendering up to 2 billion tessellation-generated triangles in real time, and Endless City is designed to test exactly that. According to the benchmark, the GTX 580 passes the 30FPS mark, a good indication of real-time playback. The ASUS ENGTX580 only had a 0.1FPS gain over the stock GTX 580, which reflects its small 10MHz core increase; that bump is simply not enough to produce a larger performance difference.

Metro 2033

Metro 2033 is an action-oriented video game blending survival horror and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for the Xbox 360 and Microsoft Windows. In March 2009, 4A Games announced a partnership with Glukhovsky to collaborate on the game. The game was announced a few months later at the 2009 Games Convention in Leipzig; a first trailer came along with the announcement. When the game was announced, it had the subtitle “The Last Refuge,” but this subtitle is no longer being used.

The game is played from the perspective of a character named Artyom. The story takes place in post-apocalyptic Moscow, mostly inside the metro system where the player’s character was raised (he was born before the war, in an unharmed city). The player must occasionally go above ground on certain missions and scavenge for valuables.

The game’s locations reflect the dark atmosphere of real metro tunnels, albeit in a more sinister and bizarre fashion. Strange phenomena and noises are frequent, and mostly the player has to rely on their flashlight and quick thinking to find their way around in total darkness. Even more lethal is the surface, as it is severely irradiated and a gas mask must be worn at all times due to the toxic air. Water can often be contaminated as well, and short contacts can cause heavy damage to the player, or even kill outright.

Often, locations have an intricate layout, and the game lacks any form of map, leaving the player to find their objectives with only a compass – weapons cannot be used while the compass is out, leaving the player vulnerable to attack during navigation. The game also lacks a health meter, relying on an audible heart rate and blood spatters on the screen to show the player how close he or she is to death. There is no on-screen indicator to tell how long the player has until the gas mask’s filters begin to fail, save for a wristwatch divided into three zones signaling how much the filter can endure, so players must check it whenever they wish to know how long they have until their oxygen runs out. Players must replace the filters, which are found throughout the game. The gas mask also indicates damage in the form of visible cracks, warning the player that a new mask is needed. The game does feature traditional HUD elements, however, such as an ammunition indicator and a list of how many gas mask filters and adrenaline (health) shots remain.

Another important factor is ammunition management. As money lost its value in the game’s setting, cartridges are used as currency. There are two kinds of bullets that can be found: those of poor quality made by the metro-dwellers, and those made before the nuclear war. The ones made by the metro-dwellers are more common, but less effective against the dark denizens of the underground labyrinth. The pre-war ones, which are rare and highly powerful, are also necessary to purchase gear or items such as filters for the gas mask and med kits. Thus, the game involves careful resource management.

We left Metro 2033 on all very high settings with Depth of Field on.

Here comes the interesting part. As some of our readers might know already, Metro 2033 does not have exceptional SLI support, certainly not as good as some other games on the market. Yet the GTX 580s in SLI delivered double the performance of a single GTX 580, and we are not 100% sure why. It could be a slight error in the charts due to our using Fraps to measure the FPS of Metro 2033; Fraps may not yield perfect results, but it should still give our readers an idea of what to expect. In all the other benchmarks, the two GTX 460s in SLI were faster than a single GTX 580, but in Metro 2033, where SLI support is not the best, the single GTX 580 was able to take the crown. This is understandable: with poor SLI support, the overall SLI score will be lower, and as a more powerful single-GPU card with stronger tessellation performance, the GTX 580 comes out ahead. The beautiful thing about the GTX 580 was also the fact that the fan ran very quietly, making it enjoyable to play Metro 2033. Even though Palit designed their cards to be quiet, the two GTX 460s in SLI made twice the amount of noise, making it more enjoyable to play games on the GTX 580 instead of the SLI setup.

3DMark Vantage

For complete information on 3DMark Vantage Please follow this Link:

www.futuremark.com/benchmarks/3dmarkvantage/features/

The newest video benchmark from the gang at Futuremark. This utility is still a synthetic benchmark, but one that more closely reflects real world gaming performance. While it is not a perfect replacement for actual game benchmarks, it has its uses. We tested our cards at the ‘Performance’ setting.

In 3DMark Vantage, we see a fantastic improvement in overall GPU score. The leap from the GTX 480 to the GTX 580 is around 4600 points. Then, just by overclocking the ASUS ENGTX580 Voltage Tweak video card, we were able to gain another 4300 points, almost the same as the performance difference between the GTX 480 and the GTX 580… anybody thinking about a GTX 680? 🙂

HAWX 2

Tom Clancy’s H.A.W.X. 2 plunges fans into an explosive environment where they can become elite aerial soldiers in control of the world’s most technologically advanced aircraft. The game will appeal to a wide array of gamers as players will have the chance to control exceptional pilots trained to use cutting edge technology in amazing aerial warfare missions.

Developed by Ubisoft, H.A.W.X. 2 challenges you to become an elite aerial soldier in control of the world’s most technologically advanced aircraft. The aerial warfare missions enable you to take to the skies using cutting edge technology.

HAWX 2 did not show too much improvement in overall graphics, but the tessellation applied to the hills and terrain showed significant improvement. There were points where the high level of geometry combined with high-quality textures made the hills look quite realistic. However, it would have been more interesting to see tessellation utilized in more places than just the terrain.

By examining the scores on both graphs, it is clearly visible that the Nvidia GPUs, with their extra PolyMorph units, are able to perform better in tessellation-heavy applications than AMD’s GPUs. We can see this with the Palit GeForce GTX 460 and the HD6870. On the other hand, the two GTX 580 cards showed no performance difference in the 1920×1200 benchmark, and only a 1FPS difference at 1680×1050. Once again, two GTX 580s in SLI blew everything out of the water.

Overclocking

Overclocking the ASUS GTX 580 was a snap. There are not many applications that can overvolt the GTX 580 right now, but the ASUS SmartDoctor and MSI AfterBurner’s Beta version can overclock it without a problem. We were able to reach a stunning 959MHz on the GPU frequency, and 2348MHz on the Memory Frequency. At these speeds, we were able to beat the two factory super overclocked Palit GTX 460 Sonic Platinum cards without any problem.

Unfortunately, working with ASUS SmartDoctor was not as smooth as we would have liked. After using it for several minutes, we switched to the MSI Afterburner tool with the Logitech G15 LCD option so we could easily monitor temperatures, voltages, and other performance-related information.

Click Image For a Larger One
 

Here are some performance tests after the GTX 580 was overclocked:

Video Cards – FPS: Unigine Heaven 2.1 (Extreme Tessellation) / Crysis Warhead, 1920×1200
Nvidia GeForce GTX 580: 44.5FPS / 39.46FPS
Nvidia GeForce GTX 580 OC (Stock Voltage): 47.9FPS / 42.52FPS
Nvidia GeForce GTX 580 OC (1235mV, 933MHz GPU, 2275MHz Memory): 52.6FPS / 45.13FPS
ASUS GeForce GTX 580 OC (1300mV, 959MHz GPU, 2348MHz Memory): 54.0FPS / 46.78FPS
Galaxy GeForce GTX 480: 37.9FPS / 32.99FPS
Palit GeForce GTX 460 Sonic Platinum in SLI: 49.0FPS / 46.39FPS

What a performance difference with the ASUS ENGTX580 Voltage Tweak video card. We were absolutely shocked that we were able to achieve such speeds with this card without even breaking a sweat on the cooling. The card did run hot, around 95C at load even at 85% fan speed, but we were also running it at 1.3V. After taking the system out into our garden at night, when the temperature was around 10C, the GPU temperature dropped to 82C and stayed stable around there. This shows that with a water cooling setup, there is a lot of performance still to be gained from this card. We would also like to mention that the card was 100% stable: we were able to run benchmarks for over 20 minutes without stopping, and the card would just rock on.

It is also very interesting that we were able to achieve almost a 10FPS increase with the overclocked GTX 580 over the stock GTX 580. The performance difference between a GTX 480 and the ASUS ENGTX580 OC was almost 16FPS. This can definitely change the user’s gaming experience.

TEMPERATURES

To measure the temperature of the video card, we used MSI Afterburner and ran Metro 2033 for 10 minutes to find the Load temperatures for the video cards. The highest temperature was recorded. After playing for 10 minutes, Metro 2033 was turned off and we let the computer sit at the desktop for another 10 minutes before we measured the idle temperatures.

Video Cards – Temperatures (Ambient 23C): Idle / Load (Fan Speed)
2x Palit GTX 460 Sonic Platinum 1GB GDDR5 in SLI: 31C / 65C
Palit GTX 460 Sonic Platinum 1GB GDDR5: 29C / 60C
Galaxy GeForce GTX 480: 53C / 81C (73%)
Nvidia GeForce GTX 580: 39C / 73C (66%)
ASUS GeForce GTX 580: 38C / 73C (66%)

The ASUS GTX 580 shows a significant decrease in idle temperature compared to the GTX 480, and about a 1C difference between the Nvidia reference GTX 580 and the ASUS ENGTX580. The load temperature is 8C lower than that of the GTX 480, even though the fan speed on the GTX 580 was also lower. Overall, the cooling solution for the GTX 580 is very well designed, helped by the improvements and optimizations of the GF110 chip. We were expecting slightly higher temperatures on the ASUS card, but it seems the slight GPU core speed bump does not change the temperature significantly on the GF110 GPU.

POWER CONSUMPTION


To get our power consumption numbers, we plugged the system into our Kill A Watt power measurement device and took the idle reading at the desktop during our temperature readings. We left it at the desktop for about 15 minutes and took the idle reading, then ran Metro 2033 for a few minutes and recorded the highest power usage.

Video Cards – Power Consumption (Total System): Idle / Load
2x Palit GTX 460 Sonic Platinum 1GB GDDR5 in SLI: 315W / 525W
Palit GTX 460 Sonic Platinum 1GB GDDR5: 249W / 408W
Nvidia GTX 460 1GB: 237W / 379W
Galaxy GTX 480: 248W / 439W
ASUS GeForce GTX 580: 226W / 441W
Nvidia & ASUS GeForce GTX 580 in SLI: 310W / 660W
Nvidia GeForce GTX 580: 225W / 439W
Nvidia GeForce GTX 580 OC (Stock Voltage): 232W / 461W
ASUS Radeon HD6870: 235W / 375W
AMD Radeon HD5870: 273W / 454W

We can see that the ASUS GeForce GTX 580 has about the same power consumption as the reference Nvidia GeForce GTX 580, with a slightly higher wattage draw. This is expected, since the GPU core speed is raised by a tiny bit.

Conclusion

While we could go into much detail about the newly improved GF110 chip that the GTX 580 uses, we feel it is more important to highlight the features we actually liked about this card. Of course it is a nice improvement to have 512 CUDA cores, an extra PolyMorph Engine, and higher frequencies, but even more so, we believe the overclocking performance the ASUS ENGTX580 Voltage Tweak video card was capable of was just phenomenal. What makes it even more exciting is that the GeForce GTX 580 cards are the fastest single-GPU video cards on the planet. On the other hand, the ENGTX580 still seems to be on the hungry side when it comes to power consumption. While we were able to get extra performance from the GTX 580, the power consumption levels we measured were still about the same as the GTX 480’s. We made sure we measured power consumption under the same testing environment as the GTX 480, while playing Metro 2033, which gave us the most realistic power measurements.

As we have seen in the SLI tests we performed, a combination of two GTX 580s can yield slightly higher performance with the newer drivers than what we saw in the past. For users who would like to play games at high resolutions with all the eye candy on, including 3D, we recommend getting two GTX 580s for optimal performance.

Is the extra 10MHz GPU frequency increase on this card worth the slight price premium we see on the ASUS ENGTX580? Probably not; at the same time, there are other cards on the market that sell for less and include more accessories, even special bundles with a free download of 3DMark 11 once it becomes available later this year. We still believe, however, that the card has the potential to perform much better than what we have seen, and performance is really what matters on expensive video cards like the ENGTX580.

Reviewer’s Opinion:
The ASUS ENGTX580 is a very solid video card for overclocking. There was only one time when the whole system froze because of an overclocking failure, and that only happened when I went above 1000MHz on the GPU. I’m sure even more is possible with this card, especially with better cooling, and for an enthusiast in gaming and overclocking, the ASUS ENGTX580 is a perfect choice.

Editor’s Opinion:
My favorite thing about this card is that it has so much potential for overclocking. In today’s expanding market, I think one of the most important features of computer hardware, particularly a video card, is its potential for the future. It’s important to know how far it will go two or three years down the line, when it does need to be overclocked to keep up. From what I saw of the performance, this card overclocks brilliantly, and its cooling is great for this kind of use. I was somewhat surprised that ASUS only raised the clock speed by 10MHz. A bigger factory overclock would not have made the card itself overclock any better, but it surely would have increased the price. Perhaps this is a good middle ground, and one that can be built on further down the line.

 

OUR VERDICT: ASUS ENGTX580 Voltage Tweak
Performance 10
Value 9
Quality 9
Features 9
Innovation 9
We are using a new addition to our scoring system to provide additional feedback beyond a flat score. Please note that the final score isn’t an aggregate average of the new rating system.
Total 9.75
Pros:

  • Performance is fantastic, especially when the card is overclocked.
  • Quiet operation while maintaining reasonable temperatures during load.
  • Fastest single-GPU card on the market.
  • 3-year warranty.

Cons:

  • Temperatures and power consumption are still a bit on the high side.

 

Summary: The ASUS ENGTX580 Voltage Tweak video card has surprised us in quite a few areas, but especially in the overclocking field. We are happy to award the ASUS ENGTX580 a 9.75 out of 10 and the Bjorn3D Golden Bear Award.
