Today we find the GPU Flashback Archive delving into the not-so-distant past to focus on the NVIDIA 900 series of graphics cards, the first to use NVIDIA’s new Maxwell architecture, which had already seen the light of day in mobile GPU solutions, an indication of the direction the company was taking at the time. Let’s take a look at the cards that were launched as part of the 900 series, the improvements and changes that Maxwell brought, and some of the more memorable scores that have been posted on HWBOT.
The first question one may well have regarding the NVIDIA 900 series is simple - what happened to the 800 series? To answer the question fully, you must first look at the direction NVIDIA was moving in at the time: a push to expand its product offerings in order to compete in the quickly expanding mobile SoC market. The sudden ubiquity of Android-based smartphones around the globe was fuelled in part by the development of mobile SoCs from Qualcomm, Samsung, MediaTek, Marvell, Allwinner and others. The traditional feature phone was quickly being replaced by smartphones that now required improved multi-core CPU performance, HD display support and, importantly from NVIDIA’s perspective, graphics processing decent enough to actually play 3D games. Intel and NVIDIA were two companies with plenty of R&D and marketing budget that sought to enter a new market to help bolster revenues during an inevitable slowdown in desktop PC sales, traditionally a cash cow for both.
The GPU Flashback Archive series continues today with a recap of the NVIDIA GeForce 700 series, a refresh which heralded part two of the Kepler family of GPUs. We can also remember it as the time when NVIDIA launched its first ever GTX Titan card and, with it, a new pricing and retail strategy for truly high-end graphics card products. Let’s take a look at the new Kepler architecture GPUs, the cards that were popular with HWBOT members and some of the more memorable scores that have been posted since launch.
The 2011-2013 period saw NVIDIA implement a more regular cadence for its high-end product launches and refreshes: a new GPU architecture every two years, with new product lines arriving each year. That meant two product lines per architecture, with an improved version offered the second time out. This is what we saw with Fermi, an architecture whose potential was fully realized at the second attempt. With the GeForce 700 series, which arrived in earnest in May 2013 with both the GeForce GTX 780 and GTX 770, we have something different. The new cards used a much bigger version of the Kepler architecture than what we saw in the NVIDIA 600 series.
The GPU Flashback Archive arrives today at the NVIDIA 600 series that debuted in the spring of 2012. The new range of cards showcased a new graphics architecture design and the beginning of what we might describe as the Kepler era. Let’s take a peek at the changes that the new design heralded, as well as a close-up view of the GeForce GTX 680 card, historically the most popular 600 series card with HWBOT members. Before we look at some notable scores that were made with the GeForce GTX 680, let’s first kick off with an overview of the innovations that arrived with the new Kepler architecture.
If we cast our minds back to 2012 we can recall an era when NVIDIA and AMD were virtually neck and neck, with successive graphics card launches from each company swinging the performance crown back and forth. The arrival of Kepler in many ways represents the beginning of the end of the competitive duopoly that is clearly absent today. Kepler helped NVIDIA push ahead of AMD in terms of graphics processor design, creating a performance lead which AMD still finds insurmountable, despite the arrival of their latest Vega-based cards. Let’s take a look at Kepler in a little detail.
This week the GPU Flashback Archive sets its sights on the GeForce 500 series from NVIDIA. Arriving in late 2010, the 500 series was the second round of graphics cards based on the Fermi architecture, which had limped over the line in the previous generation, ostensibly due to fabrication and yield issues. The new flagship GTX 580 arrived with a more polished take on the Fermi design that helped NVIDIA combat the threat from AMD and their popular Radeon 5000 and 6000 series cards. As ever, let’s take a look at the new GPU, the new flagship card and a few of the outstanding scores that have been submitted to HWBOT.
To say that the NVIDIA 400 series graphics card launch was less than smooth would be a total understatement. The GF100 Fermi architecture GPU in fact arrived six months late with a significant number of cores hacked off. Blame was laid at the door of fabricator TSMC and a 40nm manufacturing process that clearly hadn’t been optimally adapted for NVIDIA’s Fermi, a monster chip boasting 3 billion transistors and a 529mm² die. While cards such as the GTX 480 had actually done well to make NVIDIA competitive in performance terms, the GTX 580 and its GF110 GPU were rather quickly shoved out the door just eight months later as a revised and improved version of the original.
This week in our GPU Flashback Archive series we cast our minds back to a very popular and well loved graphics card series, the GeForce 400 series. NVIDIA launched the GeForce 400 series in March 2010 armed with a new Fermi architecture that it hoped would help it compete with the successful AMD Radeon 5000 series. Let’s look at the new features that Fermi offered, the cards that were popular and the scores that were submitted to HWBOT in this era.
Compared to previous product launches from NVIDIA, the GeForce 400 series launch did not go as smoothly as hoped. September 2009 saw AMD come out with their Radeon 5000 series, which made a solid case against NVIDIA’s 200 series offerings. It would be January before NVIDIA really started wooing the tech media with tales of its forthcoming Fermi architecture lineup. It would be March 2010 before the tech media actually got their hands on the new cards, and several weeks after that before enthusiasts could actually buy one. This was not the typical NVIDIA launch. Reasons for the delay certainly seemed to lie with fabrication issues at TSMC, which was not providing the expected yields on its new 40nm process. This was a problem that particularly hurt NVIDIA because the new Fermi GPU, the GF100, was very large indeed. When the GeForce 400 series finally arrived in the form of the GeForce GTX 480 and GTX 470, by most calculations they were six months late.
Taipei, Taiwan (28 March 2018) – G.SKILL International Enterprise Co., Ltd., the world’s leading manufacturer of extreme performance memory and gaming peripherals, is excited to announce the achievement of an unprecedented DDR4-5000MHz memory speed in dual-channel. This major breakthrough is the world’s first instance of two DDR4 RGB memory modules breaking the DDR4-5000MHz barrier on air cooling alone, especially considering that this world-record-class speed was only achievable under extreme liquid nitrogen cooling just two years ago. This massive technological feat was achieved with high performance Samsung DDR4 B-die ICs, running on the MSI Z370I GAMING PRO CARBON AC motherboard and the Intel® Core™ i7-8700K processor.
World’s First Dual-Channel DDR4-5000MHz Achieved on Air-Cooling
This in-development memory speed marks the first time in history that a pair of air-cooled RGB memory modules has achieved the legendary speed of DDR4-5000MHz. While DDR4-5000MHz memory kits aren’t yet ready to hit retail stores, G.SKILL is taking major leaps in developing much faster memory speeds, demonstrating the brand’s unwavering dedication to continually pushing DDR4 memory performance to the absolute extreme.
“Previously, the 5GHz memory speed was only achievable with extreme overclocking and in single-channel. We’re excited to share that we’ve been able to achieve the 5GHz memory speed not only under air cooling, but also in dual-channel. This is a major milestone for us,” says Tequila Huang, Corporate Vice President, G.SKILL International. “We will make every effort to bring this specification to the consumer market, and bring the experience of extreme performance to users worldwide.”
Shown in the screenshot below, on a system with the MSI Z370I GAMING PRO CARBON AC motherboard and an Intel i7-8700K processor, CPU-Z displays a DDR4-4700MHz Trident Z RGB dual-channel memory kit being overclocked 300MHz past its original rated speed to reach DDR4-5000MHz.
Established in 1989 by PC hardware enthusiasts, G.SKILL specializes in high performance memory, SSD products, and gaming peripherals designed for PC gamers and enthusiasts around the world. Combining technical innovation and rock solid quality through our in-house testing lab and talented R&D team, G.SKILL continues to create record-breaking memory for each generation of hardware and hold the no. 1 brand title in overclocking memory.
Taipei, Taiwan, April 1st, 2018 – GIGABYTE TECHNOLOGY Co. Ltd., a leading manufacturer of motherboards and graphics cards, today announced another great competition for the overclocking community: the GIGABYTE AORUS April Extreme Clocking 2018, to be hosted this April on HWBOT.org. This challenge gives the community a chance to test their skills on GIGABYTE motherboards before the next series of motherboards comes around. By participating, overclockers have the chance to win GIGABYTE AORUS Z370 chipset motherboards!
Starting April 1st 2018, overclockers will have a chance to win some great prizes by submitting their best scores across the 4 stages. Most points wins, it’s as simple as that! Participants have two ways to win: either collect the most points to take one of the 3 big prizes, or simply submit scores to be eligible for the lucky draw.
Prize info for GIGABYTE AORUS April Extreme Clocking is listed below:
Overclock.net’s Freezer Burn overclocking competition starts this Sunday and runs for two months. It contains four stages, split between ambient cooling (min. 20 degrees Celsius) and extreme cooling. The 2,400 USD prize pot will be split among 20 winners. Awesome! Good luck to all participants. :)
Revision 7 of the HWBOT points calculation was wonky and messed up the rankings. Now that the bugs have been ironed out in rev7.1, a full recalculation of approx. 870,000 rankings is in progress. As the site is under quite some load from day-to-day submissions and visitors, this takes a long time. In the meantime, please be patient.
The full recalculation was started 48 hours ago. At a rate of one ranking every 2 seconds, approx. 90,000 have been recalculated so far.
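The figures above are easy to sanity-check with a quick back-of-the-envelope calculation using only the numbers quoted in the post:

```python
# Back-of-the-envelope check of the recalculation figures quoted above.
SECONDS_PER_RANKING = 2      # one ranking every 2 seconds
TOTAL_RANKINGS = 870_000     # approx. rankings to recalculate
ELAPSED_HOURS = 48           # time since the recalc started

# Rankings finished so far at the stated rate.
done_so_far = ELAPSED_HOURS * 3600 // SECONDS_PER_RANKING

# Expected duration of the full recalculation, in days.
total_days = TOTAL_RANKINGS * SECONDS_PER_RANKING / (24 * 3600)

print(done_so_far)           # 86400, in line with the ~90,000 quoted
print(round(total_days, 1))  # roughly 20.1 days at a steady rate
```

So at the current pace the whole recalculation would take around three weeks, which squares with the request for patience.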
[Press Release] Taipei, Taiwan (9th March 2018) – G.SKILL International Enterprise Co., Ltd., the world’s leading manufacturer of extreme performance memory and gaming peripherals, is announcing the online qualifier competition of its biggest annual overclocking competition G.SKILL OC World Cup 2018!
G.SKILL OC World Cup 2018 - G.SKILL OC World Cup overclocking competition consists of three rounds: Online Qualifier, Live Qualifier, and Grand Finals. The top 6 contestants from the Online Qualifier round will receive an invitation to the Live Qualifier round, as well as the chance to qualify for the Grand Final round, in the G.SKILL booth at Computex 2018 from June 5 to 8, 2018 in Taipei, Taiwan. The top two finishers from the Live Qualifier round will compete head-to-head on the fourth day of Computex 2018 for the grand prize.
Round 1: Online Qualifier - The Online Qualifier round will be held on HWBOT (hwbot.org) from March 13th to April 16, 2018, featuring 4 benchmarks: Highest DDR4 Frequency, Geekbench 3 Multi-Core Full Out, 3DMark11 with IGP, and SuperPi 32m Full Out. Contestants must use G.SKILL DDR4 memory, Intel Core i5-8600K processor, and Z370 chipset motherboards. For more event details and rules, please visit the competition page on HWBOT.
$20,000 USD Total Cash Prize & $10K for the Champion! - G.SKILL OC World Cup competition features the largest single cash prize in overclocking competition with USD $10K for the Grand Champion.
GIGABYTE has released a major revision of the AORUS Z370 Ultra Gaming motherboard. Revision 2.0 replaces the original’s 7-phase CPU VRM with a new 11-phase setup that uses stronger ferrite-core chokes that don’t whine when stressed. The new revision will be part of the prize pool for GIGABYTE’s HWBOT competitions this winter: the AORUS Winter OC Challenge, the currently running AORUS March Madness, and the soon-to-be-announced April competition too!
The latest version of GPU-Z is now available from the guys at TechPowerUp. GPU-Z version 2.8.0 adds support for several AMD Vega-based mobile GPUs and improves stability with AMD ‘Raven Ridge’ APUs. As ever, the new release includes a bunch of significant bug fixes and optimizations:
To begin with, we've addressed driver-crash issues seen when using GPU-Z on systems with an AMD "Raven Ridge" APU iGPU enabled. The new DXVA 2.0 Features page in the "Advanced" tab is a ready-reckoner for all the video formats your GPUs provide hardware acceleration for. We've also improved the accuracy of video memory usage readings on AMD Radeon GPUs, the rendering performance of the NVIDIA PerfCap sensor, and AMD power-limit readings in the "Advanced" tab.
Among the new GPUs supported are the Radeon RX 460 Mobile, RX 560 Mobile, RX 570 Mobile, RX 580 Mobile, and the RX 550 based on Baffin LE. Minor bug fixes include the NVIDIA PerfCap sensor drawing outside its area, inaccurate temperature readings on AMD "Vega", a "BIOS reading not supported" error popping up on certain motherboards, and the driver digital signature reading getting truncated on high-DPI displays. Grab GPU-Z v2.8.0 from the link below.
Here’s the full changelog for version 2.8.0:
- Fixed crashes and other issues on AMD Ryzen Raven Ridge APU
- Added DXVA 2.0 hardware decoder info to Advanced Tab
- "Disable sensor" menu item now properly called "Hide"
- Improved VRAM usage monitoring on AMD
- Improved rendering performance of NVIDIA PerfCap sensor
- Improved AMD power limit reporting in Advanced Panel
- "MemVendor" is now included in XML dump output
- Fixed NVIDIA PerfCap sensor drawing outside its area
- Fixed "BIOS reading not supported" error on NVIDIA, on some motherboards
- Fixed HBM memory type detection in Advanced Tab on Fury X
- Fixed temperature misreadings on Vega
- Fixed "Digital Signature" label getting truncated on some HiDPI screens
- Added support for RX 460 Mobile, RX 560 Mobile, RX 570 Mobile, RX 580 Mobile, RX 550 based on Baffin LE
A little while ago we noticed a flurry of very impressive 2D scores from US No.2 Splave. He used an octa-core Intel Core i7 7820X processor to take Global First Place in four of today’s multi-threaded 2D benchmarks: Cinebench R11.5, Cinebench R15, Geekbench 3 Multi-core and Intel XTU. Let’s have a peek at the rig used and try to figure out what configuration he used:
In the two Cinebench benchmarks we find Splave pushing his Core i7 7820X under liquid nitrogen to 6,128MHz, a very satisfactory +70.22% beyond the chip’s stock clock. According to the CPU-Z screenshot he configured the core voltage at 1.524 V. He also configured his DDR4 kit at 1,838MHz with 12-12-12-24 timings. His motherboard of choice was an ASRock X299 OC Formula. All of which helped him push the highest ever octa-core score in Cinebench R11.5 to 29.68 points, and in Cinebench R15 to 2,739 cb points. Both scores edge past the previous bests from Sofos1990 (Greece).
When benching Geekbench 3 Multi-core we find the CPU cores of the same rig pushed slightly more conservatively to 6,115MHz (+69.86%) to hit a new octa-core high score of 50,725 points. In the Intel XTU benchmark the new Global First Place in the octa-core rankings also belongs to Splave, with a score of 4,175 marks and his 'Skylake-X' architecture CPU clocked at 5,958MHz (+65.5%). Again, in both cases Sofos1990 loses out.
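The overclock percentages quoted above can be reproduced with a quick calculation - a minimal sketch, assuming the i7-7820X's stock base clock of 3.6GHz (3,600MHz):

```python
# Overclock gain relative to stock, as quoted in the article.
# Assumption: the Core i7-7820X's stock base clock of 3,600MHz.
STOCK_MHZ = 3600

def oc_gain(clock_mhz: int, stock_mhz: int = STOCK_MHZ) -> float:
    """Return the overclock as a percentage above the stock clock."""
    return round((clock_mhz / stock_mhz - 1) * 100, 2)

print(oc_gain(6128))  # 70.22 - the Cinebench runs
print(oc_gain(6115))  # 69.86 - Geekbench 3 Multi-core
print(oc_gain(5958))  # 65.5  - Intel XTU
```

All three values match the percentages reported for Splave's submissions, which confirms they were calculated against the chip's base clock rather than its boost clock.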