138 Comments

  • Spoelie - Thursday, July 16, 2015 - link

    Will Intel actually flesh out the Broadwell desktop lineup anytime soon? Two SKUs is not really an earth-shattering lineup.

    On a somewhat related note, when will Carrizo move from vaporware to something you can actually buy?
  • A5 - Thursday, July 16, 2015 - link

    Probably never, especially if Skylake is actually coming in August/September like rumors say.
  • nevcairiel - Thursday, July 16, 2015 - link

    I think Intel never really planned to have Broadwell offer full coverage of the Desktop segment, only a few intermediate models until Skylake offers the full range again.
  • retrospooty - Friday, July 17, 2015 - link

    Intel can afford to milk it. They are so far ahead of AMD they can release nothing at all for 2 years and still be ahead.
  • Samus - Saturday, July 18, 2015 - link

    While we know AMD won't have a manufacturing AND architecture breakthrough at the same time like they did with copper interconnects and the "K8" offering 64-bit x86 and an on-die memory controller, they can still beat Intel at the architecture game.

    There is nothing earth shattering about the IPC improvements since Nehalem/Westmere. Intel, over the last 7 years, has only improved IPC 27%. In fact, clock for clock, Haswell and Broadwell are virtually identical. Almost nobody with a decent Sandy Bridge/Ivy Bridge PC has any reason to upgrade unless they are looking for a slightly lower power bill.

    AMD obviously won't beat Intel at the manufacturing end, but Intel's recent architectures reek of laziness. It's as if they are so busy focusing on minor IGP improvements that they are making IPC an afterthought. And sure, you can use the counter-argument that there is no "need" for IPC improvements since CPUs are "so fast", but intentionally holding back innovation because there is no need for it is a ridiculous concept.

    Someone will always find a need for more processing power. Many AAA games are actually CPU limited now.
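
    Taking the 27% figure above at face value, that works out to roughly 1.27^(1/7) ≈ 1.035, i.e. about 3.5% of IPC gained per year, compounded, since Nehalem.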
  • LordanSS - Sunday, July 19, 2015 - link

    You are correct.

    I am an owner of a 3770k, and it still performs brilliantly for the tasks I use it for (mostly gaming, and some image treatment and 3D modelling).

    The one thing Haswell/Broadwell would improve for me is video encoding, as the AVX2 extensions actually give a reasonable boost to the current build of x264 (I don't use x265 yet, for many reasons). (Rough sketch of that kind of SIMD work below.)

    I'm waiting for the "nominal" IPC increase over my current build to reach 20% so I can jump to a new machine. Not sure if Skylake will make it that far yet; perhaps Kaby Lake. Time will tell.
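
    To illustrate the kind of integer SIMD work a video encoder leans on - a rough sketch only, not x264's actual code, assuming a compiler with AVX2 support (e.g. -mavx2) and two hypothetical 32-byte pixel rows a and b:

        #include <immintrin.h>
        #include <cstdint>

        // Sum of absolute differences over 32 pixels at once; vpsadbw
        // (_mm256_sad_epu8) produces four 64-bit partial sums per call.
        uint64_t sad32(const uint8_t* a, const uint8_t* b) {
            __m256i va  = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(a));
            __m256i vb  = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(b));
            __m256i sad = _mm256_sad_epu8(va, vb);
            uint64_t parts[4];
            _mm256_storeu_si256(reinterpret_cast<__m256i*>(parts), sad);
            return parts[0] + parts[1] + parts[2] + parts[3];
        }

    A scalar loop would touch those 32 pixels one at a time, which is roughly where the encoding speedup comes from.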
  • JonnyDough - Friday, July 24, 2015 - link

    No, they're just poorly coded. Code isn't very optimized these days.
  • troubleshootertony - Monday, July 27, 2015 - link

    100% agree! Since hardware started really ramping up (dual-core, etc.), software went to "oh well, that runs well enough, ship it!" Then the customer asks why it runs so poorly, and the software company says "oh, you need to upgrade your hardware"... and the cycle continues... and continues and continues...
  • troubleshootertony - Monday, July 27, 2015 - link

    Moore's law is stuck in a triangle of hell...

    1st point of the triangle = HARDWARE
    2nd point of the triangle = SOFTWARE
    3rd point of the triangle = the complacent, ignorant consumer
  • easp - Tuesday, August 18, 2015 - link

    Um, that didn't start with dual-core. In fact, dual-core really marks the beginning of the decline for hardware. Not that dual and quad core chips aren't useful, but they started throwing transistors at extra cores, even extra cores that only saw partial utilization, because single-core performance had reached a point of steeply diminishing returns.
  • Strunf - Monday, July 27, 2015 - link

    No AAA game is CPU-limited today unless you play at low resolution; people are moving toward 4K, and that will only make games even more GPU-limited.
  • ImSpartacus - Thursday, July 16, 2015 - link

    Years ago we had rumors from SemiAccurate that Broadwell would basically not exist in desktop form.

    Intel made an exception and released the Crystalwell parts, but I think we can all agree that they are awkwardly named & positioned based on Intel's other lineup. They feel like a "tacked on" product because they probably were.

    And honestly, I'm ok with the "ticks" being mobile-only since desktops don't benefit from those much. I mean, was anyone screaming from the hilltops that they got to put ivy bridge in their gaming machine instead of sandy bridge? Not really.
  • name99 - Thursday, July 16, 2015 - link

    What you say makes logical sense but Brett's point remains. Switching to a two-year cycle for desktop CPU updates is an abrupt (and I am guessing unannounced and unexpected) change for the OEMs, which rather screws them over.
    If Intel had been honest with them about this say 18 months ago (when I assume they had internal knowledge of the 14nm difficulties) they could have helped them transition. Instead Intel has basically hung them out to dry --- if they go bankrupt that's not Intel's problem.

    Short term that's a viable strategy, but longer term, it probably means even more absolute resistance to ever using an Intel part when you don't have to (phones, tablets, wearables, IoT), because once you're tied to the Intel machine, you've lost control of your destiny.
    And it likewise has implications for MS, since they won't see those annual sales. Basically it does even more to convert the PC industry into a white goods industry.
  • hughlle - Thursday, July 16, 2015 - link

    Could go both ways. Only offering OEMs a new CPU every 2 years instead of 1 could force them to actually become competitive and start designing and implementing new features or designs to differentiate and sell their products, instead of just relying on a new CPU to do it for them.

    It also goes both ways with regard to owner requirements. Software has stagnated (it used to be the case that I had to upgrade my computer to play X; now I can happily play anything on the market with years-old mid-range hardware without an issue), so even if Intel were to give the OEMs a new CPU each year, there would be no reason to upgrade each year. Phones seem to have it right: there is just no reason to upgrade your phone every year, and even every 2 years is a bit silly to me.
  • extide - Thursday, July 16, 2015 - link

    Did you even read this article? Its main point was that Intel is still coming out with new products each year for OEMs. The Haswell Refresh filled in for Broadwell on the desktop, then we actually do get Broadwell on mobile, then we get Skylake, and then Kaby Lake fills in while Cannonlake (and its requisite 10nm process) is being developed.
  • name99 - Thursday, July 16, 2015 - link

    If the upgrades are not compelling, it changes nothing for the OEMs.
    If the Haswell refresh is any indication, they will NOT be compelling. A 100MHz bump in base and turbo frequency is not going to light any purchasing fires.
  • ImSpartacus - Thursday, July 16, 2015 - link

    If they need to be compelling, then have any of the recent post-Sandy Bridge desktop CPUs been terribly compelling?

    I honestly don't think that they need to be compelling. They just need a new name. I mean, look at the rebranding that occurs in the GPU space. It's awful, but the OEMs demand it.
  • Nagorak - Friday, July 17, 2015 - link

    At this point I think people only buy a new computer when their old one has gotten "slow". Probably more than half the time that's because it has a bunch of malware installed on it. Or something inane like the motherboard battery needs to be replaced, so it's tossed because the owners don't understand it's a $5 fix.

    CPUs pretty much aren't compelling at all anymore, and the needs of the general public in this space have long since been surpassed. Even a ten-year-old machine would probably be fine for 95% of people.
  • masouth - Monday, August 3, 2015 - link

    I'm not sure about a 10 year old machine but it's pretty darn close. I would drop that about a year or so to the 8-9 year range since that's when the Core 2 Duo launched. I think even most gamers would be surprised what they can play with a C2D.
  • Skraeling - Thursday, August 13, 2015 - link

    My desktop is, I think, ~6 years old and is only now running into issues playing games, simply because it has a DX10 rather than a DX11 card. If I upgraded just that? It would probably be OK, honestly.

    A Q9450 and DDR2, FFS. It runs things like Cities: Skylines shockingly well.

    I do need to do a whole system refresh though, just to get up to current-gen hardware: DDR4, DX11/12 GPUs, and a more recent socket instead of LGA 775.

    I'm pretty much waiting for something to fail so I can justify it, sadly.
  • emn13 - Friday, July 17, 2015 - link

    There's been shockingly little progress since sandy bridge in normal desktop workloads.

    http://anandtech.com/bench/product/287?vs=1260

    Anything that's not using new extensions is likely to run only slightly faster - not enough to be particularly compelling. Personally I don't think that speed differences of around 25% are noticeable if you're using a system normally (productively), as opposed to trying to focus on its performance.
  • Morawka - Thursday, July 16, 2015 - link

    The 4790K saw a 400MHz increase in base and turbo clocks. That's pretty good for just a refresh - the first stock Intel CPU to clock at 4.0GHz.
  • nikaldro - Friday, July 17, 2015 - link

    C'mon now. The 4790K was clearly an exception.
  • ImSpartacus - Thursday, July 16, 2015 - link

    If the OEMs don't have someone on payroll who can read SemiAccurate and make the most basic of predictions, then they deserve what's coming to them.

    I'm a random guy from the internet and I'm totally not surprised that Broadwell is, by and large, MIA on the desktop. I have zero qualifications and relevant experience.

    And even that aside, people are constantly howling that Moore's law is dead or dying. The tech world is really risky compared to most industries.

    I don't feel sorry for the Dells of the world.
  • psyq321 - Friday, July 17, 2015 - link

    I am quite sure that the key OEMs were well aware of the delays one way or the other. Even much less important companies that collaborate with Intel know about parts many months and, sometimes, years before these pieces of info leak to the public, let alone key OEMs like Apple, Dell, etc.

    Also, it would be rather foolish to bet your business on a single CPU vendor's ability to always be on time when it comes to process innovation. This has already been seen many times in the GPU space - companies adapt, there will be rebrands, maybe some features will be added in minor refreshes and the CPU SKU will get a shiny new product number - and business will go on as usual.
  • DanNeely - Thursday, July 16, 2015 - link

    May's leaked roadmap had a few Broadwell desktop chips launching in 2015 Q2 (I think these were just the ones with the big IGP and the on-package DRAM), with the majority of product lines going directly to Skylake in Q3.

    With the continuing 14nm struggles this isn't a surprise; Intel has said since last summer that all the fab problems would cut into Broadwell's lifespan, but not delay Skylake from its planned launch date.

    http://www.techspot.com/news/60572-leaked-intel-de...
  • extide - Thursday, July 16, 2015 - link

    There are no 'continuing' 14nm problems. 14nm is done and a working process. It won't be long before Intel starts shipping 600+ mm^2 second-gen Xeon Phi dies out to key customers (if they haven't already started).

    Skylake is basically launching when it was supposed to, which absorbs most of the 14nm delay and just chops off a bunch of Broadwell's shelf life - which is frankly good for all of us.
  • lhsbrandon - Thursday, July 16, 2015 - link

    HP has Carrizo in at least 2 laptop models that I have seen.
  • MrCommunistGen - Thursday, July 16, 2015 - link

    I had to look pretty hard, but I found a Carrizo laptop:
    http://store.hp.com/us/en/pdp/Laptops/hp-pavilion-...

    Sadly, asside from the Carrizo chip (A10-8700P) most of the specs are decidedly quite 2011:
    15.6" 1366x768 display
    8GB Single Channel RAM
    750GB 5400RPM HDD
    100Mbps Ethernet
    802.11 bgn 1x1 Wi-Fi
  • MrCommunistGen - Thursday, July 16, 2015 - link

    gah typo

    asside -> aside
  • niva - Thursday, July 16, 2015 - link

    This is one of the aspects of AMD systems I don't understand. Why can't we find a normal laptop with an AMD chip inside of it? 1366x768 screen in 2015 is a throwaway.
  • frenchy_2001 - Thursday, July 16, 2015 - link

    To be honest, even Intel laptops have a *lot* of 1366x768 screens.
    I have been looking at 15.6" laptops, and in the $400-$700 range you can find *anything*: a majority of low-res TN screens, a big chunk of 1080p screens, and even some high-density screens.

    IPS screens happen, but they are far from a given.

    Basically, the laptop market is slowly evolving out of the 2000s, while any low-end tablet will have a relatively nice screen by now.

    Same for SSD storage.

    Computing is a *very* price-sensitive market and each dollar saved counts. SSDs, resolution and IPS cost money.
  • Margalus - Thursday, July 16, 2015 - link

    There are lots of laptops out there with better screens. You just have to step up to a higher price range. Big, high-res IPS panels cost a lot more money than small, low-res TN panels.

    And since most of these AMD chips are cheap chips, they go into cheap computers to keep the masses happy - just like most of Intel's cheap chips go into cheap computers too.
  • silverblue - Friday, July 17, 2015 - link

    It's only a $500 laptop as well, albeit after a hefty price cut. The A10-8700P is nearly as fast as an Athlon X4 860K in single-threaded tasks on PassMark, clocked at 3.2GHz turbo compared to 4.0GHz (1502 vs. 1599), though it has a base clock of only 1.8GHz. Still, this puts it in a good position versus comparable Intel CPUs, and it is close to the FX-7600P.

    https://www.cpubenchmark.net/laptop.html

    I believe the target price point of systems with these new APUs was $400 - $700, so it'd be interesting to see what is in the machines at the top end of that price bracket. The $630 RRP of this one was very much overpriced, I would say.
  • WinterCharm - Saturday, July 18, 2015 - link

    At this point they're just getting ready for the skylake launch. Broadwell is late, and dead in the water, IMO.
  • ingwe - Thursday, July 16, 2015 - link

    This doesn't surprise me. I really think that we need to move to a new paradigm soon. Simply increasing the number of transistors on Si seems to be near its limit.
  • mortimerr - Thursday, July 16, 2015 - link

    I agree. Although a reasonable assumption is that it would cost so much to develop, fab, and integrate an entirely new paradigm that simply adding transistors and reducing their size is just easier.
    If it ain't broke, don't fix it... until there are literally no other options.
  • tviceman - Thursday, July 16, 2015 - link

    Hey Intel,

    Open up your fabs to (indirect) competition, like Apple, Qualcomm, or Nvidia. Money brought in by keeping your most advanced nodes fully operational equals more money to fund R&D of future nodes. Win-win. Screw your contra-revenue strategy that is simply throwing money at a lost cause.

    Sincerely yours,

    Random internet dude
  • A5 - Thursday, July 16, 2015 - link

    These delays aren't due to a lack of funding. There are some serious engineering problems to be solved for these new nodes, and sometimes it just takes a long time to get to an acceptable solution.
  • name99 - Thursday, July 16, 2015 - link

    Psychologically, Intel defines itself as "the x86 company" not as "the best fab company on earth".

    IMHO this will be their downfall, but that's the way it is. (Hence the whole Atom mess [has the Atom line generated a profit in its entire life since launch?] and such insane products as Quark.)
  • Krysto - Thursday, July 16, 2015 - link

    Meh. Samsung was already only 6 months behind Intel on 14nm, and Intel's new 6-month delay will probably put them more or less on par.
  • Michael Bay - Thursday, July 16, 2015 - link

    In their dreams, maybe. Show me something comparable to i5 on their node by complexity.
  • Flunk - Thursday, July 16, 2015 - link

    They're actually experimenting with that; they're fabbing stuff for Rockchip and Spreadtrum right now.
  • Ammaross - Thursday, July 16, 2015 - link

    If Apple wanted a fab, surely they could use their trove of cash to outright buy/build one (or ten even). They know it's a money pit for their purposes and only someone with Intel's volume can make money off the venture. Why do you think IBM is selling/sold theirs?
  • frenchy_2001 - Thursday, July 16, 2015 - link

    Worse, IBM had to not only give their fabs away, but even *PAY* money for GloFo to take them...

    This should tell you how much of a drain fabs are.
  • MrSpadge - Thursday, July 16, 2015 - link

    Well, if the rumors are true, Intel could easily improve on Skylake by enabling AVX-512 for regular CPUs rather than only for Xeons. And TSX for all CPUs.

    "No other industry is tasked with breaking the laws of physics every two years"
    That's one seriously liberal usage of the phrase "breaking the laws". The laws stay the same; one "just" engineers workarounds for the limitations they result in.
  • nevcairiel - Thursday, July 16, 2015 - link

    Honestly, AVX-512 is an extremely complex and convoluted instruction set. It's unfortunately not a straight continuation of AVX/AVX2 but something else entirely, and I doubt many desktop applications would ever get much use out of it.
  • Refuge - Thursday, July 16, 2015 - link

    Agreed. We still don't have a use for more than 2 cores 80% of the time, or more than 4 cores 90% of the time.

    And I thought TSX was supposed to make multi-threaded programming easy enough to go mainstream.
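
    For reference, the RTM side of TSX boils down to a handful of intrinsics. A minimal sketch, not production code - the shared counter and the spin-flag fallback are hypothetical, and it assumes a compiler with RTM support (e.g. -mrtm on GCC/Clang):

        #include <immintrin.h>
        #include <atomic>

        long counter = 0;                    // hypothetical shared data
        std::atomic<bool> lock_held{false};  // fallback lock; transactions must observe it

        void increment() {
            unsigned status = _xbegin();                     // try a hardware transaction
            if (status == _XBEGIN_STARTED) {
                if (lock_held.load(std::memory_order_relaxed))
                    _xabort(0xff);                           // lock is held: give up
                ++counter;                                   // speculative, no lock taken
                _xend();                                     // commit
                return;
            }
            // Transactions can always abort, so a non-transactional fallback is mandatory.
            while (lock_held.exchange(true, std::memory_order_acquire)) { /* spin */ }
            ++counter;
            lock_held.store(false, std::memory_order_release);
        }

    The mandatory fallback path is a big part of why TSX never made threading "easy" - you still have to write (and reason about) the locking code.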
  • extide - Thursday, July 16, 2015 - link

    The thing about multithreading is that not all 'problems' translate to being done in parallel very well. Some things just have to be done single-threaded, and that's all there is to it. It's not necessarily a problem with developers anymore; some things simply are not parallelizable.
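
    That limit is usually quantified with Amdahl's law: if a fraction p of the work can be parallelized across n cores, the best-case speedup is 1 / ((1 - p) + p/n). With 80% parallel code on 8 cores, for example, that caps out at 1 / (0.2 + 0.1) ≈ 3.3x, and no core count ever gets you past 1 / 0.2 = 5x.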
  • MrSpadge - Thursday, July 16, 2015 - link

    That's exactly why wider vector units (which potentially double the throughput of each core) could help more than additional cores. And TSX... well, it allows you to parallelize things that previously were not gaining speed. But if you expect it to make that kind of programming mainstream, you can only be disappointed. Still, without the hardware support, the software is guaranteed not to be developed.
  • jjj - Thursday, July 16, 2015 - link

    "Intel’s latest delay ends up being part of a larger trend in semiconductor manufacturing, which has seen the most recent nodes stick around much longer than before. "

    Maybe you can argue that, but 28nm seems more like a one-time thing than a trend.
    And of course you don't point out the obvious: TSMC and Samsung are racing, both trying really hard to get to each node first. That's the current trend - the foundries are pushing harder than ever while Intel is slowing down.

    One also has to wonder if this decision is in any way related to AMD Zen's performance. If Intel knows Zen is not good enough, the delay might be for financial reasons; a longer lifetime for the process has a big impact on costs, and given that both units and die sizes have been declining for them in the last few years, they could be at a stage where it was this or margins.

    TSMC had their Q2 results call today (I was waiting for a transcript rather than waste an hour listening to the audio, so I have yet to see what they had to say); chances are they provided some updates on their process plans. They were planning to introduce some EUV at a later stage on 10nm.

    Very, very curious now about TSMC's 16FFC sizes. It's aimed at IoT and budget SoCs and it won't arrive all that soon, but maybe some budget SoCs will be using it before Intel goes 10nm, and that would be... a bit strange.
  • nevcairiel - Thursday, July 16, 2015 - link

    A smaller node is generally more power efficient, which would help Intel in many more markets than just the areas where Zen might be competition.

    I wouldn't say Intel is "slowing down". They are one node ahead of everyone else, after all, and the other fabs had equal problems getting their nodes going - see 20nm not working out, and 14/16nm FinFETs taking longer to implement as well. At best the other fabs can use this delay to get closer to Intel again, but the real question then is how long 10nm will take them to implement.
  • jjj - Thursday, July 16, 2015 - link

    Intel has no competition. Zen would be the only one, and actually a huge threat in high-performance desktop and server at first and later on in everything else, if it's good. Intel is selling 4 cores with a pointless GPU, while AMD could offer a lot more cores, and 4 cores would be far cheaper. Without AMD, Intel can just keep offering the same garbage they've been offering in consumer for a while. In mobile they aren't relevant, and we have no details on mobile plans yet; they could go 10nm there sooner than in PC.
    As for being a node ahead - not really. Size-wise they are very little ahead, and size is not by any means everything. 14/16nm is here or almost here. Both Samsung and TSMC claim 10nm volume production in late 2016, and both are doing their best to do it sooner if possible (yet to see if TSMC provided any updates on 10nm today). Of course there can always be delays; we don't know many details on what each has at 10nm, or how fast the foundries will ramp volume. So there's no way to tell who's going to have the best process at 10nm, but Intel is positioned to be the third to arrive.
  • Adding-Color - Thursday, July 16, 2015 - link

    Intel's 14nm is more dense than Samsung's/GF's 14nm. Samsung's 14nm basically gives you the leakage of a true 14nm process, but not the size/density advantage.
    For more info see last table "density comparison" here:
    https://www.semiwiki.com/forum/content/3884-who-wi...

    Another advantage of Intel is that they are in the 2nd/3rd generation of FinFet (others just 1st/2nd gen), so they have more experience in this regard as well.
  • mdriftmeyer - Thursday, July 16, 2015 - link

    AMD's FX chips are currently on a 32nm process node. Going to GloFo/Samsung's 14nm will be a huge change for the upcoming Zen and for APUs across the board.
  • BrokenCrayons - Monday, July 20, 2015 - link

    Since the vast majority of computers sold contain only an integrated graphics solution, something that's been the case for a very long time in the computing industry, I'd contend that most buyers would argue that Intel's GPU isn't pointless at all. In fact, even among gamers who use Steam, Intel's HD 4000 was the most common single GPU model.

    With all of the power of their marketing analysis, I'd say Intel knows best what is and isn't pointless to include in their processor packages. Thus, iGPUs are included.
  • Nagorak - Friday, July 17, 2015 - link

    How can you say Intel is not slowing down? Mark my words, 10nm is going to cause them even more trouble than 14nm did. And 7nm - I highly doubt we'll see it in 2019.

    I don't think the other fabs are going to catch up, but rather that everyone is going to struggle.
  • jardows2 - Thursday, July 16, 2015 - link

    Maybe it's time for some companies to start looking more seriously at some of the "miracle" technologies, such as the quantum computing we've been hearing about for the last 15-20 years?
  • ImSpartacus - Thursday, July 16, 2015 - link

    With Intel's r&d budget, I'd be absolutely astounded if they aren't looking into all of that crazy stuff. Intel has been doing this stuff for too long to get blindsided.
  • Notmyusualid - Thursday, July 16, 2015 - link

    My thoughts exactly.

    Oh how I'd pay to be a fly on their R&D wall for a day...
  • Alketi - Thursday, July 16, 2015 - link

    Quantum computing isn't the answer, as it'll be targeted at math problems that gain an advantage by trying all possibilities at once. It won't run normal software.

    But, yes, new technologies or approaches are needed, because Moore's Law is about to hit a brick wall. We're quickly headed toward single-digit numbers of electrons between the source and drain, which won't be sustainable and obviously has a finite limit.
  • EdgeOfDetroit - Thursday, July 16, 2015 - link

    I never understood the logic of going down to 14nm from 22nm. That's a much bigger percentage decrease than going from 28nm to 22nm. At the time I thought "well, if they can do it, more power to them". But now it's clear they actually couldn't do it. They should have gone to something like 17nm to keep the percentage decrease from being completely crazy. I guess we're probably getting 10nm sooner than we otherwise would, but it has delayed everything else in the meantime.
  • zepi - Thursday, July 16, 2015 - link

    Intel never did 28nm, they had 32nm process by their definitions before.
  • nevcairiel - Thursday, July 16, 2015 - link

    The use of FinFETs made the extra shrink possible, so, why not? :)
  • nandnandnand - Thursday, July 16, 2015 - link

    No. 28nm is a half node for wimps.

    The real shift was from 32nm to 22nm to 14nm, which is about the same proportional decrease each time.

    Intel is still in the lead and the 3rd generation of 14nm will give them time to make architectural improvements. Like those 28nm AMD and NVIDIA GPUs that have been around forever. Moore's law may be dead but sitting and waiting for extreme ultraviolet lithography before taking on 10nm, 7nm and less is a good strategy.
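
    The proportional-decrease point roughly checks out: 22/32 ≈ 0.69 and 14/22 ≈ 0.64 linearly, which squared gives ideal area scaling of about 0.47x and 0.41x per step - comparable jumps, and both bigger than TSMC's 28nm-to-20nm step (20/28 ≈ 0.71). Actual density gains differ, since the node names no longer map to any single dimension.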
  • Notmyusualid - Thursday, July 16, 2015 - link

    +1
  • extide - Thursday, July 16, 2015 - link

    Nobody went from 28 -> 22

    Intel 45 -> 32 -> 22 -> 14 -> 10
    TSMC 50 -> 40 -> 28 -> 20 -> 16FF (20nm BEOL) -> 16FF+ (16nm BEOL)
  • 3DoubleD - Thursday, July 16, 2015 - link

    It is because the numbers 22 and 14 are somewhat arbitrary. There are in fact no actual dimensions that correspond to these numbers anymore. The node measurements are a historical nomenclature that has lost all real meaning; the actual improvements in density are not directly reflected by the node number. It is just a name.

    The delays are inevitable given the increasing complexity of lithography, etching, and metal layers. That doesn't even account for the increasing trouble of modeling the behavior of channels that contain only a few dopant atoms, and all the problems that come along with short channels.

    Progress does not end here though, the harder shrinking silicon becomes, the sooner Intel and others will shift to the next paradigm of ICs, which according to the ITRS roadmap will likely be InGaAs channels and then perhaps InAs-Si tunnel FETs. Should be an exciting decade to come.
  • Nagorak - Friday, July 17, 2015 - link

    "Should be an exciting decade to come."

    Well this decade sure has been boring. I'm honestly not expecting more from the next either. Even if Intel is still shrinking nodes, CPU performance has been almost stagnant for 5 years now. The die shrinks aren't helping unlock much performance at this point and on the desktop side you'll never save enough power to recoup the cost of buying a new more energy efficient processor.
  • Death666Angel - Thursday, July 16, 2015 - link

    Intel never had a 28nm node. It went from 45nm -> 32nm -> 22nm -> 14nm.
  • hakime - Thursday, July 16, 2015 - link

    "The biggest front runner in turn is of course Intel, who has for many years now been at the forefront of semiconductor development, and by-and-large the bellwether for the semiconductor fabrication industry as a whole"

    This is so much bullshit, and so typical of the bias towards Intel that AnandTech has had basically since this site started. Intel has surely contributed a lot to semiconductor development, but calling it the bellwether of this industry is plain flat wrong.

    Not trying to defend anyone here, but IBM has made just as big a contribution to the industry. I can name a few examples right away: copper interconnects, Silicon-On-Insulator, graphene chips, big contributions to the development of III-V semiconductors, the first carbon nanotube transistor and of course the recent announcement of their 7nm chip.

    So let's not say stupid things, thanks.
  • Kristian Vättö - Thursday, July 16, 2015 - link

    While IBM has undoubtedly made big contributions, most items on your list are technologies that at least currently only exist in R&D labs. It's one thing to develop a new technology and perhaps show a prototype of it; making it viable for mass production is where the real complexity is. Frankly, Intel has been at the forefront of that for several years now, although it will be interesting to see if Samsung and IBM can catch up.
  • krumme - Thursday, July 16, 2015 - link

    "Rocks..."
    When the Anandtech boss participate in the writing for especially Intel or to a lesser degree NV there is always this spinning of the words, that at least for me leaves me with a feeling a little bit like a PR piece. Only Ryan and Anand can/could twist the words like that. Just a bit and very subtle. I am pretty sure that when you read it Kristian you know what i am talking about.

    There is plenty good, valid reasons to slow down development. Why do we have to hear all this - in my ears - bs i dont understand. Why not explain why its slowing down and the reasons for it, technical and economic. Thats what i expect from AT. Not this.
  • Kristian Vättö - Thursday, July 16, 2015 - link

    We have actually explained the slowdown in lithography shrinks. Well, not directly under that title, but if you read the full article linked below you will understand why the developments are now taking longer. In short, to get down to 10nm economically, it's likely that a new lithography technology needs to be used, which is a huge step for the industry because ArF lasers have been used for well over a decade now. The issue is that there's no one technology that is truly ready to carry the torch yet.

    http://www.anandtech.com/show/8223/an-introduction...
  • krumme - Friday, July 17, 2015 - link

    Yeah, that is a blast of an article, I know.
    Most of it is over my head, but I prefer it to this fairy-tale picture that Moore's law is still very much there - when it clearly isn't.
    What is Intel's interest in upholding that story, actually? Why this hell-bent protecting of this stupid, irrelevant "law", or symbol? It's not like their profit is dependent on it any more.
  • tarlinian - Thursday, July 16, 2015 - link

    Exactly one of those has turned out to be practical: copper interconnect. If you ask anyone in the industry, Intel has led the way for the entire last decade while IBM led its happy followers down the dead end path of SOI and biaxially strained channels.
  • dealcorn - Thursday, July 16, 2015 - link

    Does Best Buy offer any product with Silicon On Insulator, graphene chips, or III-V semiconductors?
  • Zizy - Thursday, July 16, 2015 - link

    Possibly some old AMD gear with SOI. Unlikely to see IBM gear on sale there.
    III-V is getting ready and you can find a chip here or there - not at Best Buy, of course.
    Graphene is far, far away; even photonics is closer, as is the neural-chip stuff. Heck, even quantum computing is closer than graphene.
  • MrSpadge - Thursday, July 16, 2015 - link

    Of course: any system with an AMD CPU that's not an APU uses SOI. Didn't help them all that much, though.

    And III-V: almost all LEDs which are not OLEDs are using III-V semiconductors. Some are probably using II-VI, but research-wise that's all the same non-CMOS playing ground. They're also used in high-performance power electronics. Not sure in which Best Buy products they are, but chances aren't too bad.

    And the Quantum Computer: first models are commercially available, just not at Best Buy and definitely not for everyone.
  • 3DoubleD - Thursday, July 16, 2015 - link

    Don't forget that almost all 3G and LTE radios have been III-V. Only very recently has Si CMOS been able to drive these radios. So lots of III-V products available at Best Buy.
  • Oxford Guy - Thursday, July 16, 2015 - link

    AMD FX chips are SOI
  • xthetenth - Thursday, July 16, 2015 - link

    You might want to look up what bellwether actually means. It doesn't mean the cause of change, it means the first to show change. Intel's the first one on nodes, they're the bellwether, there ya go.
  • Deelron - Thursday, July 16, 2015 - link

    It actually means one, not the. IBM and Intel can both be bellwethers.

    ": someone or something that leads others or shows what will happen in the future"
  • Michael Bay - Thursday, July 16, 2015 - link

    So, typical muh ibm wail with a seasoning of muh graphene.

    Fabs sold at a loss tell a different story, as usual.
  • Alketi - Thursday, July 16, 2015 - link

    The real failure with the delay of 10nm is not desktop or laptop, but mobile.

    Intel's 14nm process doesn't seem to be gaining any traction in mobile. 10nm would have allowed their power envelope to comfortably fit within a mobile device, and thus compete in that enormous marketspace -- and still maintain the capability of running x86 software. i.e. the phone that runs Android/iOS, then clips into a docking station and boots to desktop Windows/OSX. Your phone is your laptop.

    It's looking like that disruption won't happen now till ~2020.
  • frozentundra123456 - Thursday, July 16, 2015 - link

    Yeah, the worst part of 14nm is not that it was delayed, but that when it finally arrived, it was not a compelling improvement over 22nm. By the time 10nm arrives, the tablet market will be saturated (it's already getting there) and Apple/ARM will be that much more firmly entrenched in phones. Sometimes I think Intel should just say "screw it" to tablets/phones and concentrate on what they do best: servers and laptops/ultrabooks.
  • extide - Thursday, July 16, 2015 - link

    What are you talking about? 14nm is a huge improvement over 22nm, ESPECIALLY in mobile/low-power situations! See how 14nm Y-class parts are keeping up with 22nm U-class parts. Sure, the 14nm Atom kinda sucks, but that's because it's Atom.
  • Michael Bay - Thursday, July 16, 2015 - link

    What planet are you from? Here on Earth there are a lot of tablets on this particular node, many more than 22nm could ever achieve.
  • Alketi - Thursday, July 16, 2015 - link

    Greetings from planet Earth. Welcome!

    We're not talking about ARM tablets, we're talking about i686 processors.

    Visit us again on your next trip through the solar system. Thanks.
  • medi03 - Thursday, July 16, 2015 - link

    Enormous market, eh? Anand once stated that with current ARM margins, even if Intel got most of the market share, the profit would still be laughable for them.

    They are there because they are scared by ARM's popularity, not because they really see they could make money in that area - the same reason Microsoft decided to rival Sony in the console market.
  • FH123 - Thursday, July 16, 2015 - link

    Ever since nVidia released such kick-ass products on the same 28nm process - products that combined increased performance with low(er) power consumption - I've been wondering how they managed that. In case it's not obvious, I'm talking about the GTX 970/980 from last year. Could it be a good thing to stick with the same process for, say, two tocks per tick? Maybe working with a mature node for longer has its advantages?
  • extide - Thursday, July 16, 2015 - link

    Yeah, I was thinking the same thing; both AMD and nVidia have gotten a LOT of mileage out of the 28nm process, so I wouldn't be surprised if we saw Intel do something similar with 14nm.
  • Oxford Guy - Thursday, July 16, 2015 - link

    The 980, yes. The 970 is still a Trojan Horse with its 28 GB/s + XOR nonsense.
  • Krysto - Thursday, July 16, 2015 - link

    I wonder if Samsung or TSMC will launch 10nm chips before Intel. That would be hilarious.
  • krumme - Thursday, July 16, 2015 - link

    Why is that?
    You think Intel should rush 10nm when DCG has no competition and mobile is just the usual $1B loss?
    It makes no sense to be on the cutting edge when there are no customers for it.
    Intel has cut capex and is still cutting it.
    More is to come. Big time.
  • boeush - Thursday, July 16, 2015 - link

    Here's a radical idea for Intel to revive PC part sales: how about dropping the iGPU crap, and instead putting out 8-core pure CPUs with jacked up FPU resources and performance, larger caches, better interconnects and MMUs, and >20 PCIe lanes (how about 40?) Without the iGPU, the CPU could be driven to higher voltages/frequencies within the same overall power envelope too. It would be a performance leap over the last few generations. Then maybe all the Sandy Bridge holdouts would finally have a reason to upgrade...
  • boeush - Thursday, July 16, 2015 - link

    BTW, to all those who keep repeating the party line that most applications don't scale well with more cores: how about considering overall system performance instead of single-app scaling? When you have 50 processes running concurrently across your OS (including a dozen or two applications or Chrome tabs open on your desktop), do you think performance would be better when they all share/swap the computing and cache resources of the same 2 or 4 cores, or when they each run on a wholly dedicated core of their own? Hmmmm, ehhh, there's a head-scratcher for you...
  • zeealpal - Thursday, July 16, 2015 - link

    Because 90% of those processes will be using less than 5% of a core's resources, so why have 50 cores where 48 are at < 5% and 2 are at 50-100%?

    To fit a larger number of cores (50, as per your example) in a CPU, each core would have to run slower and be a weaker, Atom-style core.

    I do agree that an iGPU-less 8-core CPU would be nice :)
  • boeush - Thursday, July 16, 2015 - link

    The point being that those 1 or 2 applications that run at 100% utilization on a single core can have that core to themselves while the rest of the system doesn't even notice. That maximizes the performance of both the CPU hogs and everything else in the system. One core per process was a theoretical extreme to illustrate the point - more cores ARE better for the system overall, even when individual apps don't know what to do with them.
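
    On Linux you can already approximate that by hand with CPU affinity. A minimal sketch, assuming a Linux box and g++ (which defines _GNU_SOURCE for the CPU_* macros); the choice of core 3 and of pinning the calling process is arbitrary:

        #include <sched.h>   // sched_setaffinity, cpu_set_t (Linux-specific)
        #include <cstdio>

        int main() {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(3, &set);  // allow this process to run only on logical CPU 3
            if (sched_setaffinity(0, sizeof(set), &set) != 0) {  // 0 = calling process
                perror("sched_setaffinity");
                return 1;
            }
            // ... run the CPU-hungry work here; the scheduler keeps it on CPU 3 ...
            return 0;
        }

    The scheduler already does a decent job of spreading hogs across idle cores on its own, though, which is part of the counter-argument.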
  • Gigaplex - Sunday, July 19, 2015 - link

    Those 1 or 2 applications would have the whole core to themselves, but then you're wasting the hardware resources (by a factor of 25-50) to reduce the minimal overhead of thread preemption. You're better off with a CPU architecture with fewer cores that are significantly beefed up with those extra transistor resources for that type of workload.
  • boeush - Sunday, July 19, 2015 - link

    'Wasting' is a value judgement based on usage. Having extra cores in reserve for when you actually need them - that is, to run applications actually capable of scaling over them, or to run many heavy single-core power viruses simultaneously - is a nice capability to have in the back pocket. Finding yourself in occasional need yet lacking such resources can be an exercise in head-on-brick frustration.

    That said, in my view the real abject waste presently evident is the iGPU taking up more than half of typical die area and TDP while almost guaranteed to remain unused in desktop builds (that include discrete GPUs). Moreover, all the circuitry and complexity for race-to-sleep, multiple clock domains and power backplanes, power monitoring and management, etc. - are totally useless on *desktops*. When typical users regularly outfit their machines with 200+ W GPUs (and more than one), why are their CPUs limited to sub-150 or even 100 W? Why not a 300 W CPU that is 2x+ faster?

    Lastly, with Intel we've witnessed hardly any change in absolute performance over a shift from 32 nm to 14. By Moore's Law alone current CPUs should be at least 4x (that's 300%) faster than Sandy Bridge. By all means, beef them up (while getting rid of unnecessary power management overhead) - but even then, by virtue of process shrink, you'd still have plenty of space left over to add more cores, more cache, and wider fabric.

    All that in total is my argument for bringing relevance (and demand) back to the desktop market. IOW make the PC into a personal supercomputer rather than a glorified laptop in a large and heavy box, with a giant GPU attached
  • jeffkibuule - Thursday, July 16, 2015 - link

    You need to keep in mind that Intel's chips with Hyper-Threading already effectively allow 8 simultaneous hardware threads to run in parallel, without the OS needing to do anything special. In fact, I'm pretty sure they are exposed to the OS as 8 cores. They each have their own set of registers, stack pointer, frame pointer, etc., and the things that aren't duplicated aren't used often enough to warrant being on each core (floating-point math is not what a CPU spends most of its time doing).
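
    You can see that from user code: the standard thread API reports logical processors, so a 4-core/8-thread part typically shows up as 8. A trivial sketch (the printed count obviously depends on the machine):

        #include <iostream>
        #include <thread>

        int main() {
            // Reports logical processors (hardware threads), not physical cores,
            // so a 4-core Hyper-Threaded chip typically prints 8.
            std::cout << std::thread::hardware_concurrency() << " hardware threads\n";
            return 0;
        }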
  • boeush - Friday, July 17, 2015 - link

    Yeah, and the fact that the caches are constantly being thrashed by the co-located, competing "threads" doesn't impact performance one bit. And the ALU/FPU and load/store contention are totally inconsequential. I mean, there's no reason at all why some applications actually drop in performance when Hyper-Threading is enabled.

    Look, I'm all for optimizing overall resource utilization, but it's not a substitute for raw performance through more actual hardware.

    My answer to the cannibalization of the desktop market by mobile is simple enough: bring back the differentiation! Desktops have much larger space/heat/power budgets. Correspondingly, a desktop CPU that consumes 30 times the power of a mobile one should be able to offer performance that is at least 10 times greater.
  • boeush - Friday, July 17, 2015 - link

    Also, regarding floating point math... There might be a reason not enough applications (and especially games) are FP-heavy: it's because general CPU performance on FP code sucks. So they stay away from it. It's chicken vs. egg - if CPUs with lots of cores and exceptional FP performance became the norm, suddenly you might notice a whole bunch of software (especially games) eating it up and begging for seconds. Any sort of physics-driven simulation; anything involving neural networks; anything involving ray-tracing or 3D geometry - all would benefit immensely from any boost to FP resources. Granted, office workers, movie pirates, and web surfers couldn't care less -- but they aren't the whole world, and they aren't the halo product audience.
  • psyq321 - Friday, July 17, 2015 - link

    Actually, ever since the Pentium II or so, floating-point performance on Intel CPUs has not been handicapped compared to integer performance.

    In fact, Sandy Bridge and Ivy Bridge SKUs could do floating-point calculations faster, due to the fact that first-generation AVX only had floating-point ops.
  • boeush - Friday, July 17, 2015 - link

    Vector extensions are cool and everything, but they are not universally applicable and they are hard to optimize for. I'm not saying let's get rid of AVX, SSE, etc. - but in addition to those, why not boost the regular FPU to 128 bits, clock it higher or unroll/parallelize it further to make it work faster, and have a pair of them per core for better hyperthreading? Yeah, it would all take serious chip real estate and power budget increases - but having gotten rid of the monkey on the back of the CPU that is the iGPU, and considering the advantages of 14nm process over previous generations, it should all be doable with proper effort and a change in design priorities (pushing performance over power efficiency - which makes perfect sense for desktops but not mobile or data center use cases.)
  • Oxford Guy - Thursday, July 16, 2015 - link

    You mean AMD FX? That was what AMD did back in 2012 or whatever. Big cache, lots of threads, no GPU, more PCI-e lanes.
  • boeush - Thursday, July 16, 2015 - link

    Exactly so, or otherwise what Intel already does with their Xeon line. Instead of reserving real performance for the enterprise market only while trying to sell mobile architectures to desktop customers, why not have a distinct desktop architecture for desktops, a separate one (with the iGPU and the power optimizations) for mobile, and a third one for the enterprise (even if it's only the desktop chips with enterprise-specific features enabled)? Surely that would help revitalize the desktop market interest/demand, don't you think?
  • boeush - Thursday, July 16, 2015 - link

    Sorry for the multi-post; I wish I could edit the previous one to add this - we all celebrated when Intel's mobile performance narrowed the gap with desktop. But perhaps we should have been mourning instead. It signalled the end of disparate mobile/desktop architectures - the beginning of the era of unified design. So now desktop chips are held hostage to mobile constraints. I have no doubt whatsoever that a modern architecture on a modern process, designed solely with the desktop in mind, would have far superior performance (at much higher power - but who cares?) than the current mongrel generations of CPU designs.
  • Gigaplex - Sunday, July 19, 2015 - link

    You just described the current enthusiast chips on the LGA 2011 socket (or equivalent for the given generation). The mass market is more interested in the iGPU chips. If nobody wanted them, nobody would be buying them.
  • boeush - Sunday, July 19, 2015 - link

    I think you're conflating interest with product positioning. The "enthusiast" offerings cost an arm and a leg because they are limited runs and high-margin products. Were they mass-produced and (relatively) cheap, hell yeah there'd be much wider interest.

    Additionally, even those enthusiast parts presently feature the same mobile-optimized core designs - not ones designed from the ground up with performance as the first priority. Intel's present design rule, that any feature costing an extra 1% in power must yield at least 2% more performance, is fine for mobile and data center products but is antithetical to desktop/workstation products. Which is why the latter is a market in decline currently - it's being cannibalized by mobile, and no surprise: it's a self-fulfilling prophecy by design.
  • AnnonymousCoward - Tuesday, July 21, 2015 - link

    Agreed. Cut the iGPU nonsense.
  • txjr88 - Thursday, July 16, 2015 - link

    Regarding the iGPU, I wonder if they could use the GPU space and put down copper layers as a heatsink/heat conductor. I realize the chip size has to stay the same for heat surface area reasons, but why not have heat conductors either layered or embedded into the actual die space? Intel needs to think outside the box and go for what end users really want, not the penny-pinching customers who are essentially middlemen these days. Speed sells; iGPUs, not so much.
  • boeush - Thursday, July 16, 2015 - link

    As far as heat-spreading layers go, I think graphene is the material of choice. Forget about making circuits/transistors with it for the near future; use it for its unmatched heat conductance instead. With a graphene/silicon layered sandwich construction, it may even become possible to layer multiple CPUs on top of each other (in a 3D stack) for huge core counts per die. That would make Xeon Phi look like a toy. How would you like 128 Skylake Y cores on a single chip?
  • txjr88 - Friday, July 17, 2015 - link

    My thought was to use existing technology now. Graphene brings EPA health issues/concerns and material compatibility/handling issues, while copper is in use now with existing processes/machines.
  • boeush - Friday, July 17, 2015 - link

    ??? What health issues? You do realize that any time you use a pencil on paper, you are smearing graphene flakes over cellulose fibers? As far as material compatibility and handling, I thought IBM demonstrated graphene film growth on silicon carbide, like 5 years ago? Not knocking the practicality of copper, but to get good heat pipe performance you'd need to lay it on pretty thick. Not so with graphene. Hell, even spraying on a layer of carbon nanotubes would probably make for a better heat pipe....
  • txjr88 - Monday, July 20, 2015 - link

    Go to the EPA website and learn. In making chips, you don't spray stuff on and glop it on by hand. The existing machines/processes are extremely fine-tuned, and any new material is a huge deal.
  • boeush - Tuesday, July 21, 2015 - link

    Learn what? If graphene had any appreciable toxicity, we'd have to urgently reclassify all graphite-lead pencils as toxic waste. As for material application, spraying, spin coating, CVD, annealing - whatever it takes. But no, not by hand...
  • Aspiring Techie - Friday, July 17, 2015 - link

    The only problem with that would be heat dissipation. Each core would have to be clocked extremely low so that not much heat is generated. The crippled single core performance would most likely outweigh the gain in parallelization.
  • boeush - Friday, July 17, 2015 - link

    Perhaps, to a degree... But the whole idea of inserting heat spreader layers between cores is to suck the heat away toward the perimeter of the 'sandwich', where it would presumably be transferred to some sort of a heatsink/cooler. Can't argue real numbers since nobody (that I know of) had tried it before, and I'm not skilled/knowledgeable/equipped enough to realistically simulate it - but it would be worth a try IMHO.
  • boeush - Friday, July 17, 2015 - link

    Of course, it would also require some pretty amazing yields on any given process: if you're going to stack 8 or 16 or whatever CPUs on top of each other, you'd want to make sure that few if any of them are duds...
  • boeush - Friday, July 17, 2015 - link

    Of course, the ultimate upshot of this is an evolution of circuit design itself from 2D to 3D - building a CPU core in a cube instead of on a square. The density/proximity of components, and the greatly shortened (on average) connections between them, would facilitate much higher-clocked operation of the processor as a whole... It would also utilize the substrate more efficiently (closer packing, and less space wasted on wiring). But that's more of an aspirational long-term vision thing...
  • txjr88 - Monday, July 20, 2015 - link

    What if you used "spare" conductor ball grids traced to a motherboard heat pipe to help pull heat away? I do not know the pinouts of existing CPU sockets with regard to iGPU-related or other possible spare pins. Perhaps a few PCIe lanes could be sacrificed for heat output. A faster clock in exchange for fewer PCIe lanes or other I/O seems like a good trade-off to me. It obviously poses the problem of a custom motherboard, but if the clock increases are significant, it could be worth pursuing.
  • JNo - Thursday, July 16, 2015 - link

    "No other industry is tasked with breaking the laws of physics every two years"

    As far as I'm aware, the semi-conductor industry hasn't broken the laws of physics even once....
  • Oxford Guy - Thursday, July 16, 2015 - link

    Shhh... you'll cut into the hype.
  • krumme - Friday, July 17, 2015 - link

    Intel breaks the laws of physics every two years.
    This year is an exception because they hit some rocks.
  • Aspiring Techie - Friday, July 17, 2015 - link

    That's correct. No industry can break the laws of physics. They can only work around them.
  • abrowne1993 - Friday, July 17, 2015 - link

    Please tell me I'm not the only one who thought of the Kesha song while reading the title.
  • Wolfpup - Friday, July 17, 2015 - link

    It's a good thing 22 and 28nm have turned out to be fantastic processes, and supported strong CPUs and GPUs!

    Really, this died after just 3 cycles, as there wasn't a proper update across the board after Nehalem.

    It makes me wonder what's going to happen long term, but at least for now we've still got a ton of awesome technology to look forward to!

    I'm really impressed by how well Nvidia and AMD have dealt with this too. They started with incredibly strong products at 28 nm, and have managed to make even better products, without it even seeming like they were holding something back. They just got that much more efficient.
  • Gigaplex - Sunday, July 19, 2015 - link

    No proper update after Nehalem? Sandy Bridge was a pretty significant release.
  • jameskatt - Saturday, July 18, 2015 - link

    QUOTE: Intel’s traditional development model for processors over the last decade has been the company’s famous tick-tock model – releasing processors built on an existing architecture and a new manufacturing node (tick), and then following that up with a new architecture built on the then-mature manufacturing node (tock), and repeating the cycle all over again ...

    RESPONSE: Depending on a new manufacturing node to speed up processors was previously the EASY AND LAZY FIX that avoided having to develop new architectures to improve CPU performance. And it has become much more difficult to develop smaller nodes, simply because of the laws of physics.

    I believe Intel - with all its resources - should simply always be developing new architectures, just like Apple is doing with its processors.

    Now that new nodes are delayed, it behooves Intel to get off its intellectual butt and come up with new architectures to attract new purchases of its CPUs.

    I for one would be happier if Intel further improved on its embedded GPUs.

    I would be happy to have more PCIe lanes available to the i7 series, like those seen in the Xeon processors.

    I would be happy to see faster multi-core architectures. It is appalling to see how Intel processors become slower at single-core tasks the more cores they have.

    Come on, Intel. Get off your lazy butt.
  • Oxford Guy - Saturday, July 18, 2015 - link

    This could be a good thing for consumers. Once AMD's Zen is on around the same process size as Intel's stuff, and Intel no longer quickly gains the node advantage it pretty much always has, AMD might be able to be more competitive.

    A competitive AMD means less inflated pricing from Intel.
  • parlinone - Sunday, July 19, 2015 - link

    Intel probably decided to cut back on expenditure precisely because they expect Zen to be a competitive part. So they need to be more price-competitive while maintaining margins. This means less expense for R&D and a longer ROI.
    Moore's law faltered with the arrival of FinFETs; it applies to 2D silicon, for which expenses decrease with smaller features. Now improvement needs to come from better/smaller 2.5D/3D layouts and exotic materials.
    The increased complexity of lithography itself also means that for Intel to move to another, smaller node in a cost-efficient way it needs EUV, which hasn't fully matured yet.
    ASML and Samsung are in the race for Apple's huge next-gen SoC carrot, so it seems they will bypass EUV and go for triple or quadruple patterning for their first-gen 10nm process.

    Intel, in effect, has conceded defeat to the ARM foundries and is readjusting to compete with AMD on price.
  • parlinone - Sunday, July 19, 2015 - link

    ASML should read TSMC of course...
  • Oxford Guy - Tuesday, July 21, 2015 - link

    The ancient FX-8350 is apparently doing pretty well in The Witcher 3. Now that games are using a bunch of threads, AMD's CMT design seems to be becoming relevant for gaming. Of course, now that that's happening, AMD is reportedly abandoning CMT in favor of a me-too product that copies Intel's Hyper-Threading. Oh well...
  • Phartindust - Sunday, July 19, 2015 - link

    This should make things interesting next year when Zen comes online at 14nm. It's been a while since AMD and Intel were on the same process node.
  • systemBuilder - Sunday, October 4, 2015 - link

    The new strategy is called, "Tick-Tock-Toe".
  • yeeeeman - Saturday, March 14, 2020 - link

    Well, well, well, this was the starting point. Nobody could have imagined that after Skylake, Intel would ship at least 4-5 more products based on the same process.
