GAMING COMPUTER UPGRADES: INTEL AND NVIDIA, AMD AND… AMD

I’ve been busy with work and family stuff lately, so I didn’t write a blog post last week.  This one is on a pretty relevant topic:  gaming PC building and upgrades.  Now, I own several gaming desktops and laptops, and messing with and collecting gaming hardware, both consoles and PCs, is one of my hobbies.  In the last few years, as console gaming took more of a center role in the games industry, I stopped lusting after the “latest and greatest” in PC gaming hardware.  There were several reasons for this.  First, as console game development took center stage, PC games stopped “pushing the envelope”… instead of the days when a company like Origin Systems could put out a game like Wing Commander, which made many people buy new systems or upgrade their graphics cards just to play it, developers now make the console version first and treat the PC port as an afterthought.  That means that, at least until very recently, the PC has been getting either indie, low-budget games that, while fun, don’t really push the hardware, or quick-and-easy console ports that target only DirectX 9 and thus run blazingly fast on even middling gaming PCs.  The other reason was cost… a top-of-the-line graphics card can cost from $700 to $1,000 these days, and that’s not even taking into consideration the high cost of the latest and greatest Intel Core i7 or Xeon processors.  That’s hard to swallow in today’s economy, especially when the games don’t really require it.

So what I’ve been doing is targeting strictly the mainstream/middle of the road in gaming PC hardware.  This allows me to do a hardware upgrade/refresh every year, which keeps my gaming computers current, and it’s cheaper in the long run.  This year that means an Asus Intel Core i7 system with a GeForce GTX 660 GPU, and an HP AMD A10 system with a Radeon HD 7870 GPU, 16GB of RAM, and 2TB hard drives.  Both graphics cards are able to run the latest and greatest games at blazing speeds and higher-than-1080p resolutions, and support multiple monitors if you want that kind of setup.  In the case of the Asus system I went with the GeForce GTX 660 instead of the newer 760 simply because they didn’t have the newer card at CompUSA, and I didn’t want to order it online.  Go with the 760 if you can – it has more CUDA cores and a wider memory bus, and it’s simply the faster card.

I went with a 600W power supply for both computers.  Though modern GPU power requirements have gone down, especially in the middle-of-the-road cards, putting in anything lower than that is just asking for trouble.
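
For the curious, here’s the back-of-the-envelope wattage math behind that choice, as a minimal Python sketch.  The TDP figures are ballpark assumptions for this class of hardware, not measured numbers:

    # Rough PSU headroom estimate for a mid-range 2013 build.
    # All wattages below are ballpark assumptions, not measurements.
    gpu_tdp_w = 150    # GTX 660 / Radeon HD 7870 class card
    cpu_tdp_w = 100    # Core i7 / A10 desktop chip
    rest_w = 75        # motherboard, RAM, drives, fans, USB devices
    headroom = 0.8     # don't run a PSU past ~80% of its rating

    total_draw = gpu_tdp_w + cpu_tdp_w + rest_w            # 325 W
    min_psu = total_draw / headroom                        # ~406 W
    print("Estimated peak draw: %d W" % total_draw)
    print("Minimum sensible PSU rating: %.0f W" % min_psu)

A 600W unit leaves plenty of margin on top of that, which is exactly the point.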

Both systems came with Windows 8 installed, but I put the Windows 8.1 Preview on them instead (just because I like to live dangerously :P).  This caused some hiccups in the case of the Asus system, since it insisted on going into auto-repair mode every other boot cycle.  The problem disappeared after I uninstalled the drivers for the original graphics card that came with the machine and did a clean install of the newest drivers from the Nvidia website.

[Image: IMG_0062a]

In the case of the AMD system I had no trouble at all… everything just worked on the first try (maybe because it’s all made by AMD to work together in the first place?).

[Image: IMG_0061]

The end result of my hardware tinkering is that I now have two new gaming PCs connected to my old Viewsonic 28” monitor, and all my games look awesome running at full resolution with full details.  The Asus feels and is faster, because the Intel Core i7 processor trumps AMD’s best by a wide margin, but the AMD A10 system works just as well, and it costs a couple of hundred dollars less.

Now that I’m done with the PCs, I’m thinking of how I’m going to re-organize my entertainment center to make room for that Playstation 4 and Xbox One. 😉

GAME OF THE WEEK

The game of the week is Naughty Dog’s “The Last of Us” for the Playstation 3.  Playing through the game was quite an emotional ride, and it really brings a lot of feelings to the surface, especially if you’re a parent.  I wish Sony and Naughty Dog wouldn’t mess with it and would just leave the game as is, but I believe they’re making at least two sequels for the Playstation 4.  Damn good stuff.

MMO OF THE WEEK

The MMO of the week is “Final Fantasy XIV: A Realm Reborn”.  I know the game’s Early Access phase has been a total clusterfuck, but it’s still a beautiful, fun game… if you can play it. 😛

THIS WEEK’S RELEASES

Hmm, Killer Is Dead certainly snuck up on us, didn’t it?  Sweet Fuse is an otome dating sim for the PSP/PS Vita, and it looks to be lots of fun.


THE PS4 AND XBOX ONE ARE NOT NEARLY AS POWERFUL AS SONY AND MICROSOFT WOULD LIKE US TO BELIEVE

I recently came back from a two (2) week vacation with my son.  The games industry is typically on a hiatus of sorts during the Summer months, even more so in this console-transition year.  This means that gaming news is also sparse during this time.  However, you cannot have console launches without console fan-boy flame wars, and there were some tidbits reported in the last weeks that have stirred the fan-boy pot.  Specifically, Microsoft announced that it has increased the clock speed of the Xbox One’s GPU by 53MHz, to 853MHz (up from 800MHz).  The second tidbit came from an unofficial but trusted source, and has to do with how much memory these new consoles will allocate for their Operating System overhead and their media and social media/game streaming functions.  Digital Foundry and Eurogamer reported that game developers could only access 4.5GB of the PS4’s 8GB of GDDR5 memory.  This led to flame wars between Xbox One and PS4 fans as to which console has more memory available for actual games and gaming functions.  Xbox One fans contended that, since the Xbox One only reserves 3GB of its 8GB of DDR3 memory pool for the Operating System and whatnot, the larger amount of memory available to developers on the system made up for its less powerful GPU (the PS4’s GPU has 1.5x as many GCN Compute Units as the Xbox One’s).  Then QuakeCon happened, and John Carmack, ever the voice of reason in hardware-related matters, threw in his preliminary impression that both the Xbox One and PS4 were “about the same” in terms of graphical prowess.

To me this is the proverbial “tempest in a teapot”.  When it comes to the Playstation 4 and Xbox One, I don’t care about OS memory overhead, or a slightly more powerful GPU.  The GPU is fine on both machines — the AMD GCN (Radeon 8xxx) architecture is a known quantity by now.  The GPUs inside these new consoles are not cutting edge by any means, but they are definitely good enough.  What I really care about, and what worries me about these new consoles, is the AMD “Jaguar” CPU cores they both share.

[Image: Screen Shot 2013-05-23 at 12.22.12 AM]

As part of my usual “retail therapy” after my Summer vacation, I recently picked up a 2013 Google Nexus 7 (which will be the subject of a future post) and an Acer Aspire V5-122 11-inch touchscreen notebook (I can’t call it an Ultrabook, since that’s an Intel trademark, but it’s the same form factor).  What’s notable about this inexpensive little notebook, besides its low price for a touchscreen-equipped model, is that it’s based on AMD’s Jaguar APU technology, probably the first product on the market to use it.  I’ve been playing around with it over the last couple of days, enough to get a handle on what the Jaguar processor cores can and cannot do.  My take on the architecture powering the Xbox One and the Playstation 4?  The short version:  the GPU is good enough, blazingly fast for what these consoles will require of it, gaming-wise.  Unfortunately, the CPU cores are pitifully weak.

[Image: IMG_0057]

In my testing, I put the Acer Aspire V5-122 through the usual tasks a user puts a computer through… word processing, spreadsheets, gaming, web browsing, email and social media, and video playback.  Throughout it all, the computer felt slower than my other machines (which run on processors ranging from a Core i3 up to the Core i7 in my fastest gaming desktop).  I could feel it struggle under the load when I ran World of Warcraft alongside Firefox (with a maximum of three (3) tabs open) and then switched to the Windows 8 Modern interface to access the Windows 8 Store.  Windows 8 apps took longer to open.  Copying files took a lot longer than on an AMD Piledriver-based desktop, not to mention a Core i5 or Core i7 machine.  It could play back a 1080p video file just fine, thanks to the fast GPU, but the Jaguar CPU cores were maxed out in doing so.  This machine could not match my slowest gaming rig, much less the newest Intel Haswell or even AMD’s own Steamroller A10 APU models.  This is not the microprocessor you would want in a next-generation gaming machine.
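
If you want to see that last part for yourself, the quickest way is to log per-core load while a video plays.  Here’s a minimal Python sketch of how I’d eyeball it (it assumes the third-party psutil package is installed; this is a sanity check, not a rigorous benchmark):

    # Sample per-core CPU load once a second for ~30 seconds.
    # Start a 1080p video, then run this alongside it.
    import psutil  # third-party package: pip install psutil

    for _ in range(30):
        per_core = psutil.cpu_percent(interval=1.0, percpu=True)
        print(" ".join("%5.1f%%" % load for load in per_core))
        # On the Jaguar-based Aspire these figures hovered near 100%
        # during playback; on a Core i5 class machine they stay far lower.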

I can understand why this particular chip was chosen for the Xbox One and Playstation 4:  it’s all about cost.  Neither Microsoft nor Sony is in a position to spend billions on R&D for custom console chipsets anymore, and neither can afford to lose even more billions selling gaming hardware at a loss.  Microsoft and Sony will not lose money on these machines.  But the tradeoffs are dangerous.  Unlike the Xbox 360 and the PS3, which were the most powerful gaming hardware ever released at their time, these machines cannot match the performance of the average current gaming PC.  AMD cannot do miracles with what Microsoft and Sony were willing to spend:  these Jaguar cores have less than one fifth the performance of an Intel Core i7, and less than a quarter the performance of an Intel Core i5.  They’re not even half as fast as an AMD FX processor.  That’s the reason there are eight (8) of them in the new consoles.  Unless game developers get really good at multi-threading, there’s no way they’ll get good performance out of these CPU cores.  And we don’t know how many of them will actually be available for running games.
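
To put some rough numbers on that, here’s the math as a quick Python sketch.  The performance ratio is my own estimate from above, not a benchmark, and the 60% figure is just an illustrative assumption:

    # Back-of-the-envelope scaling math for the Jaguar CPU cores.
    jaguar_vs_i7_core = 1.0 / 5.0   # rough estimate: one Jaguar core
                                    # vs. one Core i7 core
    cores = 8                       # total Jaguar cores per console

    # Best case: perfectly parallel code across all eight cores.
    ideal = cores * jaguar_vs_i7_core
    print("Ideal 8-way scaling: %.1fx a single i7 core" % ideal)  # 1.6x

    # Amdahl's law: if only 60% of a game's work parallelizes,
    # the speedup over one Jaguar core collapses fast.
    p = 0.6
    speedup = 1.0 / ((1.0 - p) + p / cores)
    print("Speedup at 60%% parallel code: %.2fx" % speedup)       # ~2.11x

Even in the unrealistically perfect case you land at about 1.6x a single i7 core, and Amdahl’s law guarantees real games won’t get anywhere near that without serious multi-threading work.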

There’s another worrisome detail about the AMD Jaguar processor, one that hasn’t been previously discussed:  Jaguar CPU cores come in clusters, or modules, of four cores each.  The Playstation 4 and Xbox One will have two (2) of these modules each, for a total of eight (8) CPU cores.  But communication between the two modules is not, er, ideal.  Expect games to run on only one module, and thus have four (4) cores available, with the remaining four (4) cores used for the OS and other non-game tasks.
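
Here’s a hypothetical sketch of what that split could look like, written against a Linux-style affinity API.  The core numbering and the idea that a game would pin itself this way are my assumptions for illustration; neither Sony nor Microsoft has published how their schedulers will work:

    # Hypothetical "one module for the game" split (Linux-only call).
    # Assumes cores 0-3 are the first Jaguar cluster and 4-7 the second.
    import os

    GAME_CORES = {0, 1, 2, 3}             # keep all game threads on one module
    os.sched_setaffinity(0, GAME_CORES)   # pid 0 = the calling process

    print("Game restricted to cores:", sorted(os.sched_getaffinity(0)))

Keeping every game thread inside a single cluster means none of them ever pay the cross-module latency penalty, which is presumably the arrangement the console OS would enforce.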

What about next-gen physics, game A.I., and the like?  Sure, part of those computations could be offloaded to the Playstation 4’s and Xbox One’s GPUs, but there’s not much room to grow there… those GPUs are already middle-of-the-road when compared with current gaming PC models.  Developers wanting to push the envelope in those areas had better stick with making their games for PCs.

A prime example is Sony’s own Planetside 2.  Yesterday, the devs for the Sony Online Entertainment title expressed their frustration at the difficulties they were facing in getting their game to run at acceptable speeds on Playstation 4 hardware.  They are essentially rewriting their game to optimize it for multithreaded performance (and, incidentally, said that AMD gaming PC owners will notice a nice performance boost when the PC version gets the updated code in a patch).

The Playstation 3 and Xbox 360 had plenty of headroom when they were released in 2005/2006.  Each year games looked better and better, as game developers learned the intricacies of their architectures and implemented new techniques and programming tricks to take advantage of them.  “The Last of Us” is easily the most graphically impressive game I’ve ever seen, and it’s running on seven (7) year old PowerPC-based hardware.  The Playstation 4 and Xbox One are having trouble running current PC ports right out of the gate.  Food for thought… and worry, there.