The Prelude

As Intel got into the chipset business, it quickly found itself faced with an interesting problem. As the number of supported IO interfaces increased (back then we were talking about things like AGP and the FSB), the size of the North Bridge die had to increase to accommodate all of the external-facing IO. Eventually Intel ended up in a situation where IO dictated a minimum die area for the chipset, but the controllers driving that IO didn't need all of it. Intel effectively had free space on its North Bridge die to do whatever it wanted with. In the late 90s Micron saw the same problem and contemplated throwing some L3 cache onto its North Bridges. Intel's solution was to give graphics away for free.

The budget for Intel graphics was always whatever free space remained once all of the other necessary controllers in the North Bridge were accounted for. As a result, Intel's integrated graphics was never particularly good. Intel didn't care about graphics; it just had some free space on a necessary piece of silicon and decided to do something with it. High-performance GPUs need lots of transistors, something Intel would never give its graphics architects - they only got the bare minimum. It also didn't make sense to focus on things like driver optimization and image quality; investing in people and infrastructure to support something you're giving away for free never made much sense.

Intel hired some very passionate graphics engineers, who repeatedly petitioned management for more die area to work with, but the answer always came back no. Intel was a pure-blooded CPU company, and the GPU market wasn't interesting enough at the time. Intel's GPU leadership needed another approach.

A few years ago they got that break. Once again, it had to do with IO demands on chipset die area. Intel's chipsets were always built on an n-1 or n-2 process: if Intel was building a 45nm CPU, the chipset would be built on 65nm or 90nm. This waterfall effect helped Intel get more mileage out of its older fabs, which made the accountants at Intel quite happy, as those $2 - 3B buildings are painfully useless once obsolete. As the PC industry grew, so did shipments of Intel chipsets. Each Intel CPU sold needed at least one other Intel chip built on a previous-generation node. Interface widths as well as the number of IOs required on chipsets continued to increase, driving chipset die area up once again. This time, however, the problem wasn't as easy to deal with as giving the graphics guys more die area to work with. Looking at demand for Intel chipsets and the increasing die area, it became clear that one of two things had to happen: Intel would either have to build more fabs on older process nodes to keep up with demand, or it would have to integrate parts of the chipset into the CPU.

Not wanting to invest in older fab technology, Intel management green-lit the second option: to move the Graphics and Memory Controller Hub onto the CPU die. All that would remain off-die would be a lightweight IO controller for things like SATA and USB. PCIe, the memory controller, and graphics would all move onto the CPU package, and then eventually share the same die with the CPU cores.

Pure economics and an unwillingness to invest in older fabs made the GPU a first-class citizen in Intel silicon, but Intel management still didn't have the motivation to dedicate more die area to graphics. That encouragement would come externally, from Apple.

Looking at the past few years of Apple products, you'll recognize one common thread: Apple as a company values GPU performance. When Apple was a small customer of Intel's, its GPU desires didn't really matter, but as Apple grew, so did its influence within Intel. With every microprocessor generation, Intel talks to its major customers and uses their input to help shape its designs. There's no sense in building silicon that no one wants to buy, so Intel engages its customers and rolls their feedback into silicon. Apple eventually got to the point where it was buying enough high-margin Intel silicon to influence Intel's roadmap. That's how we got Intel's HD 3000. And that's how we got here.

Comments

  • jasonelmore - Sunday, June 2, 2013 - link

    Looking at the prices, this will raise the price or lower the margins of the 13" Retina MacBook Pro by about $150 each.
  • mschira - Sunday, June 2, 2013 - link

    Yeah, laptops benefit most - good for them.
    But what about the workstation?
    So Intel stopped being a CPU company and turned into a mediocre GPU company? (It can't even beat last year's GT 650M.)
    I would applaud the rise in GPU performance if they had not completely forgotten the CPU.
    M.
  • n13L5 - Monday, June 3, 2013 - link

    You're exactly right.

    13" ultrabook buyers who need it the most get little to nothing out of this.

    And desktop users don't need or want GT3e, and it uses system RAM. They're better off buying a graphics card instead of upgrading to Haswell on the desktop.
  • glugglug - Tuesday, June 4, 2013 - link

    While I agree this misses "where it would benefit most", I disagree on just *where* that is.

    I guess Intel agrees with Microsoft's implicit decision that Media Center is dead. Real-time HQ Quick Sync would be perfect for transcoding anything extenders couldn't handle, and it would also make scanning for and skipping commercials incredibly efficient.
  • n13L5 - Tuesday, June 11, 2013 - link

    Core i5-4350U - Iris 5000 - 15W - 1.5 GHz
    Core i7-4550U - Iris 5000 - 15W - 1.5 GHz
    Core i7-4650U - Iris 5000 - 15W - 1.7 GHz

    These should work. The 4650U is available in the Sony Duo 13 as we speak, though at a hefty price tag of $1,969.
  • Eric S - Monday, July 1, 2013 - link

    The last 13" looks like they were prepping it for a Fusion Drive, then changed their mind, leaving extra space in the enclosure. I think it is due for an internal redesign that could allow for a higher-wattage processor.

    I think the big deal is the OpenCL performance paired with ECC memory for the GPU. The Nvidia discrete GPU uses non-ECC GDDR. This will be a big deal for users of Adobe products. Among other things, it solves the issue of running the Adobe Mercury Engine on non-ECC memory and the resulting single-byte errors in the output. Those errors are not a big deal for games, but they may not be ideal for professional rendering and scientific applications. This is basically a mobile AMD FireGL or Nvidia Quadro card. Now we just need OpenCL support for the currently CUDA-based Mercury Engines in After Effects and Premiere. I have a feeling that is coming, or Adobe will also lose Mercury Engine compatibility with the new Mac Pro.
  • tviceman - Saturday, June 1, 2013 - link

    Impressive iGPU performance, but I knew Intel was absolutely full of sh!t when it claimed performance equal to or better than the GT 650M. It's not really even close; it's typically behind by 30-50% across the board.
  • Krysto - Saturday, June 1, 2013 - link

    When isn't Intel full of shit? Always take the improvements they claim and cut them in half, and you'll be a lot closer to reality.
  • xtc-604 - Saturday, June 8, 2013 - link

    Lol...you think that's bad? Look at Apple's claims. "over 200 new improvements in Mountain Lion"
  • piroroadkill - Saturday, June 1, 2013 - link

    sh<exclamation point>t? What are we? 9?
