AMD claims it can offer the benefits of Nvidia's G-Sync

At CES this week, AMD made an unusual announcement about Nvidia's new G-Sync technology. According to the company's senior engineers, they can replicate many of the advantages of Nvidia's G-Sync tech through the use of what are called dynamic refresh rates. Multiple generations of AMD video cards have the ability to alter refresh rates on the fly, with the goal of saving power on mobile displays. Some panel makers offer support for this option, though the implementation isn't standardized. AMD engineers demoed their own implementation, dubbed FreeSync, on a laptop at the show.

AMD's windmill application, used for the FreeSync demo. Unfortunately, it's impossible to find video demos on YouTube that don't ruin the G-Sync or FreeSync effect.

Dynamic refresh rates would theoretically work like G-Sync by specifying how long the display remains blank on a frame-by-frame basis, providing for smoother overall motion. AMD has stated that the reason the feature didn't catch on was a lack of demand, but if gamers want to see G-Sync-like technology, AMD believes it can offer an equivalent. AMD also told Tech Report that it believes triple buffering can offer a solution to many of the same problems G-Sync addresses. AMD's theory as to why Nvidia built an expensive hardware solution for this problem is that Nvidia wasn't capable of supporting G-Sync in any other fashion.
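
To make the dynamic-refresh idea concrete, here is a minimal sketch of my own (not AMD's driver code) showing how a driver could stretch the blanking interval on a per-frame basis so the next refresh lands when the new frame is actually ready. The 144Hz ceiling and 30Hz floor are assumptions for illustration only.

    # Illustrative toy model of per-frame variable blanking ("dynamic refresh").
    # All names and the panel limits below are assumptions, not vendor code.
    MIN_REFRESH_INTERVAL_MS = 1000 / 144   # fastest the panel can refresh (assumed)
    MAX_REFRESH_INTERVAL_MS = 1000 / 30    # longest the panel can hold a frame (assumed)

    def blanking_extension_ms(frame_render_ms):
        """Extra blanking to add after the previous scan-out so the next
        refresh coincides with the new frame becoming available."""
        # Clamp to what the panel tolerates: no faster than its maximum
        # refresh rate, no slower than its minimum.
        target = min(max(frame_render_ms, MIN_REFRESH_INTERVAL_MS), MAX_REFRESH_INTERVAL_MS)
        return target - MIN_REFRESH_INTERVAL_MS

    for render_ms in (5.0, 12.0, 20.0):
        print(f"frame took {render_ms:4.1f} ms -> extend blanking by "
              f"{blanking_extension_ms(render_ms):5.2f} ms")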

Nvidia rebuts

Nvidia, unsurprisingly, has a different view of the situation. Tech Report spoke to Tom Petersen, who stated that the difference between a laptop and a desktop running a software equivalent to G-Sync is that laptop displays are typically connected using embedded DisplayPort or the older LVDS standard. Standalone monitors, in contrast, have their own internal scaling solutions, and these chips typically don't support a variable refresh rate.

I think Nvidia is probably being honest on that score. The G-Sync FPGA is fairly hefty, with 768MB of onboard memory and a limited number of compatible monitors. Nvidia has a long-standing interest in keeping its technology proprietary, but it also has reasons to extend G-Sync as widely as possible for as little up-front cost as possible. A G-Sync upgrade kit for $50 that fits any modern monitor would sell more units than a $100 or $150 kit that only fits a limited number of displays or that requires a new LCD purchase.

Nvidia's G-Sync includes a 768MB buffer combined with a custom FPGA.

It's entirely possible that both companies are telling the truth on this one. AMD may be able to implement a G-Sync-like technology on supported panels, and it could work with the manufacturers of scaler ASICs if G-Sync starts catching on for Nvidia. Nvidia, meanwhile, is probably telling the truth when it says it had to build its own hardware solution because the existing chips in desktop displays weren't doing the job.

Whether this works out to a significant halo for Nvidia in the long run will come down to price and time to market. In the past, Nvidia took the lead on computing initiatives like PhysX and CUDA, getting out in front on technical capability while industry-wide standards followed along at a slower pace. The impact on the consumer market has been mixed: PhysX definitely delivered some special effects that AMD didn't match, but CUDA's impact on the consumer space has been small (its HPC success is another story altogether).

The difference between these technologies and G-Sync is that monitors are fairly long-lived. Buy a G-Sync monitor today, and you'll have the benefits for five years or more. Some games benefit from G-Sync more than others, but once Nvidia smooths out the development pipeline, we should see a consistent stream of titles that run better in that mode. It's not like hardware PhysX, which was never supported by more than a handful of major games in any given year. In the long run, if panel makers start building variable refresh rates into their own displays, then the need for Nvidia-specific G-Sync technology may fade out, but that doesn't mean the company can't make a pretty penny off the concept while it lasts. And since it'll take time for panel manufacturers to adopt the capability, if they choose to do so, Nvidia has a definite window of opportunity on the technology.

Well, in this case, the opinion section is the impact of G-Sync on the monitor market, and I don't think we know enough yet to predict whether customers will or won't go for it. The benefit has to be significant, easy to demonstrate, and broad.

When it comes to the technical capabilities of the respective products, there's no reason to doubt AMD's claim that it can drive a monitor display using variable refresh rates from its GCN chips, and no reason to doubt NV's claim that it built its own scaler solution because existing chips in desktop monitors can't offer the capability. We know that desktop monitors have their own scalers, and if they could offer the capability, Nvidia wouldn't have needed its own chip.

V-sync on eliminates tearing but can introduce stutter and a tiny bit of input lag. V-sync off eliminates stutter but introduces tearing.

Monitors typically operate at a constant refresh rate, refreshing the frame presented by the video source every (for example) 1/60th of a second. With V-sync on, the video driver will wait to present a new frame until the monitor has finished drawing the last one. If the source renders the next frame extremely quickly, it will have to wait almost an entire refresh before showing that frame, which introduces a tiny bit of lag between user input and the visual feedback.

If frame render times vary while still falling within 1/60th of a second, the user may notice their input occasionally lagging slightly. Stutter comes in when the next frame isn't quite ready by the time the next refresh rolls around; the video card then presents the same frame for two refreshes, which is visible as a stutter.
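
Here's a toy simulation of that behavior, using made-up render times and a simplified model in which each frame starts rendering right after the previous flip. It isn't real driver code, just an illustration of why a slow frame produces a repeat (stutter) and a fast frame produces a small wait (input lag).

    # Toy model of V-sync on at a fixed 60 Hz; numbers are assumptions.
    import math

    REFRESH_MS = 1000 / 60                      # ~16.7 ms between refreshes
    render_times_ms = [10.0, 14.0, 18.0, 12.0]  # assumed per-frame GPU times

    flip_time = 0.0  # moment the previous frame went on screen (a refresh boundary)
    for i, render in enumerate(render_times_ms, start=1):
        ready = flip_time + render
        # With V-sync on, the new frame can only go up on a refresh boundary.
        next_flip = math.ceil(ready / REFRESH_MS) * REFRESH_MS
        repeats = round((next_flip - flip_time) / REFRESH_MS) - 1
        if repeats:
            print(f"frame {i}: missed a boundary -> previous frame repeated {repeats}x (stutter)")
        print(f"frame {i}: on screen at {next_flip:6.1f} ms, "
              f"sat ready for {next_flip - ready:4.1f} ms (input lag)")
        flip_time = next_flip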

With G-Sync and its equivalents, that last scenario would allow the monitor to wait up to an entire refresh cycle for the next frame to become available before refreshing. Because, in most cases, the frame only needs a small fraction of the next 1/60th of a second to finish rendering, the delay is much less noticeable than if it had to wait for an entire additional refresh.

Put another way, the variation between frame times under V-sync is normally either invisible (always rendering above 60 FPS) or clearly visible as ugly stuttering if the frame rate drops below 60 for even a single frame. With G-Sync, frames are displayed by the monitor as quickly as they can be rendered, up to the monitor's maximum refresh rate, which means the variation in the time between two frames will always be very small, even unnoticeable.
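
Running the same toy model against a variable-refresh ("G-Sync-like") panel, again with assumed numbers, shows the on-screen intervals tracking the render times instead of snapping to multiples of 16.7 ms:

    # Same toy model, variable refresh: flip as soon as the frame is ready,
    # limited only by an assumed 144 Hz panel ceiling.
    MIN_INTERVAL_MS = 1000 / 144                # assumed maximum refresh rate
    render_times_ms = [10.0, 14.0, 18.0, 12.0]  # same assumed render times as above

    flip_time = 0.0
    for i, render in enumerate(render_times_ms, start=1):
        ready = flip_time + render
        # Refresh when the frame is ready, but never faster than the panel allows.
        next_flip = max(ready, flip_time + MIN_INTERVAL_MS)
        print(f"frame {i}: on screen at {next_flip:5.1f} ms, "
              f"{next_flip - flip_time:4.1f} ms after the previous frame")
        flip_time = next_flip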

It's essentially the benefits of both V-sync on and off without the shortcomings of either. You'll be able to turn your settings up to operate in an optimal range without worrying as much about the frame rate. 120Hz monitors are looking quite attractive now, eh?

It never stated that NVIDIA invented PhysX. It simply stated that they took the initiative with it, meaning they pushed it harder than the competition and eventually bought the company, etc. So I'm not sure what that statement is doing in your retort. I used AMD for many, many years but gave away my pair of 7970s for a 690 for multiple reasons. Mostly, though, they have issues going above 60Hz effectively. I constantly got the crash. Never a problem with the NVIDIA card. But the frame delay in multi-card setups also got me. So my last few systems have been NVIDIA-focused until AMD cleans that up, which they seem to be doing on the multi-card issue anyhow.

Not sure they mean the same thing. They can keep the tech proprietary and still give it out, like Blu-ray. Sony gets a cut of every Blu-ray setup. NVIDIA could simply get a cut of every monitor that uses their setup (and this cut is tiny, but pays off in bulk big time). They could also license it to AMD for a fee. Whether AMD will pay for it is the question. And it actually does make a nice difference, so AMD might pick it up for their best cards, perhaps. That is the possibility while still keeping it proprietary.

There's nothing intrinsically wrong with proprietary standards. CUDA has done great things for NV in the HPC space. PhysX didn't really catch on as a major driver of the GPU market, but it offered its own advantages.

Rambus didn't turn itself into a hated caricature of a company because it had proprietary tech. It turned into a punching bag because it attended JEDEC meetings while secretly filing patents on the improvements discussed therein.

I did a great deal of research on that situation back in the day. Intel absolutely wanted to corner a new high-speed memory market for itself, but it made no secret of that fact. Even the preferential stock deal was the icing on the cake, not a secret revelation of intent. The irony was, the major DRAM manufacturers actively worked against Rambus and kept right on using its IP even after they knew about the patents. Rambus actively worked against the DRAM manufacturers, threatening terrible licensing deals if cases went to court. The only company that actually seemed to be working in good faith (albeit towards its own self-interest) was Intel.

Nvidia hasn't really done anything like this. And as for Sony, they persistently created great tech standards, then attempted to charge ludicrous amounts of money for them, or made them very user-hostile. PhysX isn't user-hostile. Neither is CUDA.

But how does it benefit the OEMs? I mean, every little cost adds up eventually, and it sounds like AMD's solution is going to be cheaper for them unless nVidia steps up their game and brings an integrated solution rather than an expansion slot (which I wouldn't be surprised to see happen in the future). I guess it's gonna be up to the early adopters to see if the option floats or sinks. But my bet is going to be with the all-but-free solution that AMD is offering.

I find it laughable whenever there's news about Rambus showing off new RAM technology that promises superior performance to what's out there on the market, but it always ends up that NO ONE wants to manufacture it. I mean, Nintendo has burned a lot of bridges, but the people at Rambus have burned the bridges, slaughtered the horses, poisoned the well, and spread salt all over the fields.

I don't think this helps or hurts the OEMs. It's like saying that 4K hurts OEMs because it's more expensive. If an OEM bets the farm on 4K, and 4K flops, then yeah, hey, 4K flops and that OEM goes out of business. That's why you don't bet the farm.

Nvidia built the ASIC to support this capability. If the cost of supporting that ASIC is relatively low, then the risk to OEMs in doing so will be correspondingly low. Sure, it's always possible that the OEMs will put in the effort and get no reward, but that could happen even with an open standard. If AMD works with Broadcom to create a TV ASIC that supports variable refresh, and then Broadcom doesn't sell the numbers it expected, that still hurts them, too.