From an optimization standpoint, there isn't a lot you can do about textures. You can compress them in memory, but they have to be decoded somewhere before they can be displayed, and the compression/decompression takes cycles as well; a 35MB 4K texture is a 35MB 4K texture, and there is no way to cut it up or dynamically stream it in faster than VRAM can serve it. The real problem comes when the frame has to stall while a texture is fetched. Post-processing and rendering effects, by contrast, have very little impact on VRAM at all. VRAM isn't needed to "optimize" anything; it quite literally exists as fast storage for texture, shader, and mesh data that shouldn't have to be streamed from system RAM or thrashed from storage. You don't need everything on ultra to kill 4GB on a card, either: just raising the texture and shader quality will do it. The only reason you're ever going to need 8GB of VRAM is if you're playing poorly optimized games, or if you're slamming all the graphics settings to Ultra.

As for the card itself: those results were run at High settings and a 1920x1080 screen resolution. On Arch Linux, the Radeon RX 580 8GB delivers a solid 70 FPS. System power consumption with the RX 580 does come in much higher than with the RX 480, although the MSI Radeon RX 580 ends up running much cooler than the reference AMD Radeon RX 480. CPU usage between the RX 480 and RX 580 in testing is very close. I think some reviewers have been fair in saying that it's better than integrated graphics across the board.
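For what it's worth, the "35MB 4K texture" figure lines up with a 3840x2160 surface at 4 bytes per texel (RGBA8). Here's a rough back-of-the-envelope sketch of how a texture's VRAM footprint adds up; the function name and numbers are my own illustration, not anything from a particular engine:

```python
def texture_vram_bytes(width, height, bytes_per_texel, mip_chain=True):
    """Estimate the VRAM footprint of a single 2D texture.

    A full mip chain adds roughly one third on top of the base level
    (the geometric series 1 + 1/4 + 1/16 + ... sums to 4/3).
    """
    base = width * height * bytes_per_texel
    return int(base * 4 / 3) if mip_chain else base

# A 3840x2160 RGBA8 texture (4 bytes/texel), no mips:
print(texture_vram_bytes(3840, 2160, 4, mip_chain=False))  # 33177600 bytes, ~31.6 MiB
# The same texture block-compressed at 1 byte/texel (BC7-style) with full mips:
print(texture_vram_bytes(3840, 2160, 1))  # 11059200 bytes, ~10.5 MiB
```

The second figure is why GPU block-compressed formats matter: the hardware samples them directly, so the smaller footprint is what actually sits in VRAM. Either way, a handful of uncompressed 4K textures is enough to put real pressure on a 4GB card.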