The Prelude

As Intel got into the chipset business, it quickly found itself faced with an interesting problem. As the number of supported IO interfaces increased (back then we were talking about things like AGP and the FSB), the size of the North Bridge die had to increase in order to accommodate all of the external-facing IO. Eventually Intel ended up in a situation where IO dictated a minimum die area for the chipset, but the actual controllers driving that IO didn't need all of that area. Intel effectively had some free space on its North Bridge die to do whatever it wanted with. In the late 90s Micron saw this problem and contemplated throwing some L3 cache onto its North Bridges. Intel's solution was to give graphics away for free.

The budget for Intel graphics was always whatever free space remained once all of the other necessary controllers in the North Bridge were accounted for. As a result, Intel's integrated graphics was never particularly good. Intel didn't care about graphics; it just had some free space on a necessary piece of silicon and decided to do something with it. High-performance GPUs need lots of transistors, something Intel would never give its graphics architects; they got only the bare minimum. It also didn't make sense to focus on things like driver optimizations and image quality. Investing in people and infrastructure to support something you're giving away for free never made a lot of sense.

Intel hired some very passionate graphics engineers, who always petitioned Intel management to give them more die area to work with, but the answer always came back no. Intel was a pure-blooded CPU company, and the GPU industry wasn't interesting enough at the time. Intel's GPU leadership needed another approach.

A few years ago they got that break. Once again, it had to do with IO demands on chipset die area. Intel's chipsets were always built on an n-1 or n-2 process. If Intel was building a 45nm CPU, the chipset would be built on 65nm or 90nm. This waterfall effect allowed Intel to get more mileage out of its older fabs, which made the accountants at Intel quite happy, as those $2 to $3B buildings are painfully useless once obsolete. As the PC industry grew, so did shipments of Intel chipsets. Each Intel CPU sold needed at least one other Intel chip built on a previous-generation node. Interface widths as well as the number of IOs required on chipsets continued to increase, driving chipset die area up once again. This time, however, the problem wasn't as easy to deal with as handing the graphics team some free space. Looking at demand for Intel chipsets and the increasing die area, it became clear that one of two things had to happen: Intel would either have to build more fabs on older process nodes to keep up with demand, or it would have to integrate parts of the chipset into the CPU.

Not wanting to invest in older fab technology, Intel management green-lit the second option: to move the Graphics and Memory Controller Hub onto the CPU die. All that would remain off-die would be a lightweight IO controller for things like SATA and USB. PCIe, the memory controller, and graphics would all move onto the CPU package, and then eventually share the same die with the CPU cores.

Pure economics and an unwillingness to invest in older fabs made the GPU a first-class citizen in Intel silicon, but Intel management still didn't have the motivation to dedicate more die area to the GPU. That encouragement would come externally, from Apple.

Looking at the past few years of Apple products, you'll recognize one common thread: Apple as a company values GPU performance. While Apple was a small customer, its GPU desires didn't really matter to Intel, but as Apple grew, so did its influence within Intel. With every microprocessor generation, Intel talks to its major customers and uses their input to help shape the designs. There's no sense in building silicon that no one wants to buy, so Intel engages its customers and rolls their feedback into its designs. Apple eventually got to the point where it was buying enough high-margin Intel silicon to influence Intel's roadmap. That's how we got Intel's HD 3000. And that's how we got here.

Comments

  • virgult - Saturday, August 31, 2013 - link

    Nvidia Kepler plays Crysis 3 well, but it sucks insanely hard at compute and rendering.
  • Eric S - Wednesday, July 3, 2013 - link

    It appears to do compute better than graphics (and ECC memory is a plus for compute). That is exactly what pros will be looking for. Apple doesn't cater to the gaming market with these machines, even if they should play most games fine. A dedicated gaming machine would be built much differently than this.
  • jasonelmore - Sunday, June 2, 2013 - link

    This. I don't know about anyone else, but I'm not dropping 2 grand, or $2700 with upgrades, on a 15-incher that does not have dedicated graphics.

    Another problem I see is that the 13" Retina only uses dual cores, and if they did use this quad with GT3e silicon, the price of the 13" will go up at least $150, since the i7s and i5s the 13" currently uses are sub-$300 parts.

    The only solution I see is Apple offering it as a build-to-order/max-upgrade option, and even then they risk segmentation across the product line.
  • fteoath64 - Monday, June 3, 2013 - link

    "can't sell a $2000 laptop without a dedicated GFX". Absolutely true, especially when the GT3e is still a little slower than the 650M. So the 750M tweaked a few mhz higher will do nicely for the rMBP. The 13 incher will get a boost with the GT3e CPU. So a slight upgrade to lower power cpu maybe worthwhile to some. Improvement to 1080p eyesight camera would be a given for the new rMBP.
  • Eric S - Wednesday, July 3, 2013 - link

    You can drop discrete graphics when that $2000+ laptop is using built-in graphics with the same price premium and transistor count as the discrete chip. I'm almost positive the discrete GPU will go away. I have a feeling that Apple had a say in the optimizations and stressed OpenCL performance. That is probably what they will highlight when they announce a new MacBook Pro.
  • xtc-604 - Saturday, June 8, 2013 - link

    I really hope that Apple continues to treat the rMBP 15 as a flagship. Giving it an iGPU only would be a deal breaker for many professionals, at least in Haswell's current form. Until Intel can make an iGPU that at least matches or exceeds dedicated GPU performance at high resolutions, it is still a no-go for me.
  • Eric S - Wednesday, July 3, 2013 - link

    Why is that a deal breaker? The Iris 5200 is better than a discrete chip for compute (OpenCL). If you are doing 3D rendering, video editing, Photoshop, bioinformatics, etc., that is what you should care about. It also has ECC memory, unlike a discrete chip, so you know your output is correct. How fast it can texture triangles is less important. It still has plenty of power in that area for any pro app. This is not designed to be a gaming machine. Not sure why anyone would be surprised it may not be optimized for that.
  • Eric S - Monday, July 1, 2013 - link

    You never know, but I doubt it. They will have trouble with the ports on the side if they make it smaller. I think it is more likely the space saving will go to additional battery. They may be able to get similar battery life increases to the Air with the extra space.
  • mikeztm - Tuesday, June 4, 2013 - link

    Notice that the 13" 2012 rMBP is a little thicker than the 15" version. A quad core in the 13 inch may have been planned from the very beginning.
  • axien86 - Saturday, June 1, 2013 - link

    Look at the overheating issues that come with i5/i7 Razer notebooks; the same kind of heating was noticed at their Haswell notebook press event several days ago.

    If Apple decides to use these Haswell chips, which put out heat in a concentrated area inside very thin enclosures, you are essentially computing over a mini bake oven.
