Just to echo the other children of this comment, I'm pretty lost as to the reasoning here. I game at 1440p on a GTX 1080 from launch day in 2016, but I haven't upgraded in 6 years because there has been no reason to. Cost is essentially no object for me, as gaming is my primary hobby and I can easily spend 60hrs/week playing video games, so the return would be more than justified in my mind. I'm running my favorite games at 144fps, and the newer super-high-fidelity stuff I can run at 60fps, which is more than reasonable. I will say that I was excited for the 40XX lineup, but at these prices I'm more inclined to hunt for a cheaper 2080 or 3080 solely for RTX functionality, and even that is an enthusiast feature for someone like me who spends a huge amount of time gaming. I couldn't imagine giving up my access to old titles, console emulators, mods, itch.io experiments, and low-budget indie games on Steam to save a few hundred bucks in the short term by buying a console. YMMV if you're not paid well in a HCOL area, or if your location offers alternatives such as PC cafes.

All signs point to ARC's first outing being a swing and a miss, and Intel themselves didn't even have any plans to come anywhere close to competing at the top end: Intel's highest-end ARC card, the A770, was positioned against the RTX 3060. Intel is also missing every release date it set for itself, the drivers are a disaster, and rumors are that Intel is going to shutter the entire division and focus only on data center compute cards. So used cards are a factor, yes, but if this really is a 2x performance jump then they aren't going to be a factor here; none of the $1000+ GPU-buying market is going to care about saving $100-300 to get half the performance, after all. That just leaves AMD as competition, and only if RDNA 3 comes out a winner. The rumor mill is saying it will be, but AMD's graphics track record is spotty, to put it mildly. I'm sure Nvidia's internal plan is to just cut prices if RDNA 3 surprisingly competes at the top end instead of just the midrange, but in the meantime it is happy to take those fat, fat margins.

RDNA would have been as useful in scientific computing applications as it is in the gaming GPUs, and RDNA is also defined to potentially have fast double-precision operations, even if no such RDNA GPU has been designed yet. Moreover, CDNA is not a new architecture, but just a rebranding of GCN. CDNA 1 had few changes over the previous GCN variant, except for the addition of matrix operations with double the throughput of the vector operations, as NVIDIA had done before with its so-called "tensor" cores (e.g., the MI100 datasheet lists 23.1 FP32 vector TFLOPS against 46.1 FP32 matrix TFLOPS). CDNA 2 had more important changes, with double-precision operations becoming the main operations around which the compute units are structured, but the overall structure of the compute units has remained the same as in the first GCN GPUs from 2012. I suppose that the reason why AMD has continued with GCN for the datacenter GPUs is their weakness in software development. To this day, ROCm and the other AMD libraries and tools for GPU compute have good support only for GCN/CDNA GPUs, while support for RDNA GPUs was non-existent at first and is still very feeble. So I assume that they kept GCN, rebranded as CDNA, for datacenter applications because they were not ready to develop appropriate software tools for RDNA.
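The GCN/CDNA-versus-RDNA split is visible from HIP itself: every device reports a gfx target string, and ROCm's official support matrix is keyed to those targets. A minimal sketch; the gfx-to-architecture mapping in the comments is the commonly documented one, not something the API reports:

```cpp
#include <cstdio>
#include <hip/hip_runtime.h>

int main() {
    int count = 0;
    if (hipGetDeviceCount(&count) != hipSuccess || count == 0) {
        fprintf(stderr, "no ROCm-visible GPU found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        hipDeviceProp_t prop;
        if (hipGetDeviceProperties(&prop, i) != hipSuccess) continue;
        // gcnArchName is the compiler target, e.g. "gfx906" (GCN / Vega 20),
        // "gfx90a" (CDNA 2), or "gfx1030" (RDNA 2). The gfx9xx targets are
        // the ones ROCm has historically supported best.
        printf("device %d: %s (%s)\n", i, prop.name, prop.gcnArchName);
    }
    return 0;
}
```

Build with `hipcc`; the standalone `rocminfo` tool prints the same gfx target.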
> But why would game programmers care about shader core latency? I seriously don't understand.

What I can say is that the various improvements AMD made to RDNA did the following:
1. Barely increased TFLOPs - especially compared to CDNA, it is clear that RDNA has fewer FLOPs (rough numbers in the sketch below).
2. Despite #1, improved gaming performance dramatically.
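To put rough numbers on point 1: peak FP32 throughput is conventionally counted as one FMA (two floating-point ops) per shader ALU per clock. A sketch using public spec-sheet figures, with the GCN-based Radeon VII standing in for the CDNA lineage; the comparison is my illustration, not the commenter's:

```cpp
#include <cstdio>

// Peak FP32 throughput: shader_count * 2 ops (one FMA) * clock (GHz).
double peak_tflops(int shaders, double clock_ghz) {
    return 2.0 * shaders * clock_ghz / 1000.0;
}

int main() {
    // Boost-clock spec-sheet figures; sustained game clocks run lower.
    printf("Radeon VII (GCN, 2019):  %.1f TFLOPS\n", peak_tflops(3840, 1.750)); // ~13.4
    printf("RX 5700 XT (RDNA, 2019): %.1f TFLOPS\n", peak_tflops(2560, 1.905)); // ~9.8
    return 0;
}
```

Despite a roughly 27% FLOPs deficit on paper, the 5700 XT landed within a few percent of the Radeon VII in games, which is exactly the point of the list above: latency and utilization improvements, not raw TFLOPs, drove RDNA's gaming gains.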