You might be clueless too! There weren't any "4k rendering" benchmarks in that link - but there were 4k encoding benchmarks.
And as for that encoding performance you are apparently referencing, it is definitely using fixed-function encoders - it's not the CPU performance that Geekbench tests measure (and I want to stress that cross-platform Geekbench isn't 1:1 scoring - you'll never find AnandTech comparing different CPU architectures with Geekbench, as it even uses fixed-function resources like AES in its crypto tests). And the speeds the laptops show definitely point to a CPU encoder being used. A fixed-function encoder will barely hit the CPU, while CPU encoding will max those cores at 100%. CPU encoding is higher quality at the cost of heat and speed.
Recently Adobe updated Premiere to support Intel's fixed-function encoder (called Quick Sync) - read here http://www.dvxuser.com/V6/showthread.php?362263-Ad... post #8 - and Rush may not have gotten that update yet, or the benchmark site referenced didn't update their program https://www.laptopmag.com/reviews/laptops/new-ipad... but I managed to find a benchmark for Quick Sync in Premiere https://forums.creativecow.net/thread/335/101459 - and Intel's Quick Sync fixed-function hardware is all relatively the same afaik, so the desktop CPU has less of an impact. It gives a 1:20 min 4K -> 1080p conversion at 91 sec w/ CPU and 45 sec w/ fixed function. Scale that up to 12 min (x9) and we get 13:39 w/ CPU (it's a nice CPU, an i7-8700K) while the fixed-function encoder gets 6:45. It'll probably scale pretty linearly. So 6:45 vs 7:47 with fixed-function encoding - which isn't comparing CPUs at all at this point but rather their fixed-function encoders!
So the iPad has some nice hardware, sure, but it's not outperforming Intel's brand new MB Pro 13" by leaps and bounds. They'll probably be about the same speed with fixed function encoding and the MB Pro 13" will win in a non-encoder setting thanks to its increased TDP.
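If you want to sanity-check that scaling arithmetic, a tiny sketch; the 91 s / 45 s figures and the linear-scaling assumption come from the comment above, not from new measurements:

```cpp
#include <cstdio>

int main() {
    // Figures quoted above: a 1:20 (80 s) clip converts in 91 s on the CPU
    // path and 45 s via Quick Sync. A 12 min clip is 9x longer; linear
    // scaling with clip length is the comment's assumption, not a measurement.
    const double cpu_s = 91.0, qsv_s = 45.0, scale = 12.0 * 60.0 / 80.0;
    const int cpu = int(cpu_s * scale), qsv = int(qsv_s * scale);
    printf("CPU encode:        %d s (~%d:%02d)\n", cpu, cpu / 60, cpu % 60);
    printf("Quick Sync encode: %d s (~%d:%02d)\n", qsv, qsv / 60, qsv % 60);
}
```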
Okay.. So in short, the A12X is "about the same" in CPU performance as Intel's actively cooled, CPU-specific and twice-as-power-hungry chip, while also having a 1+ TFLOPS GPU, 4G modem and advanced ISP on the same die.
Overall, if that is what you call "nice", then Intel's hardware is what? Trash.
Let's compare the Intel i7-8500Y and the Apple A12X. The i7-8500Y is a dual-core 5W 14nm notebook/tablet processor. The A12X is an octa-core 7nm tablet processor with unknown power usage. The 8500Y uses the x86-64 instruction set, while the A12X uses ARMv8. They have very few benchmarks in common, which introduces a notable amount of uncertainty.
Let's start with Geekbench 4.1/4.2 single-threaded: the 8500Y scored 4885 and the A12X 4993. The A12X leads by 2%, which is within the margin of error. Same benchmark, but multithreaded: the 8500Y scored 8225 and the A12X 17866. The A12X demolishes the dual-core with 117% higher performance, clearly because the 4-core cluster in the A12X has double the core count of the dual-core 8500Y. Next up we have Mozilla Kraken 1.1, showing browser performance: the 8500Y scores 1822 ms and the A12X 609 ms. The A12X took 67% less time to complete the task, which amounts to a 199% increase in performance. Octane V2 is another browser benchmark: the 8500Y scores 24567 and the A12X 45080. The A12X bests the Intel CPU by 83%. 3DMark has two versions of Ice Storm Physics and unfortunately our processors use different versions; they use the same resolution, however. The 8500Y scores 25064 in standard physics and the A12X 39393 in unlimited physics. The A12X takes a 57% lead.
It's hard to establish system performance with such a limited number of benchmarks. Geekbench and 3DMark are synthetic, and the two others show browser performance. The processors are equal in ST, but the A12X's higher core count allows it to double the 8500Y's MT score. The A12X outpaces the 8500Y in 3DMark, and it is clearly superior in browser performance. Apple's A12 drops closer to the Intel chip in synthetics, but performs similarly to its larger sibling in web benchmarks. Winner: A12X
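For anyone who wants to reproduce those percentage deltas, a quick sketch; the scores are copied from the comment, and Kraken reports a time, so it gets inverted:

```cpp
#include <cstdio>

// Relative performance of B vs A for a higher-is-better score, in percent.
double speedup(double a, double b) { return (b / a - 1.0) * 100.0; }

int main() {
    // Scores quoted in the comment above (i7-8500Y vs A12X).
    printf("Geekbench ST:   %+.1f%%\n", speedup(4885, 4993));
    printf("Geekbench MT:   %+.1f%%\n", speedup(8225, 17866));
    // Kraken reports time taken, so invert it: performance = 1 / time.
    printf("Kraken 1.1:     %+.1f%%\n", speedup(1.0 / 1822.0, 1.0 / 609.0));
    printf("Octane V2:      %+.1f%%\n", speedup(24567, 45080));
    printf("3DMark physics: %+.1f%%\n", speedup(25064, 39393));
}
```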
Overall, the 855 was thought to be head and shoulders above the Kirin, but it seems it will be on the same level at best. I'm typing this from my already heavily used Mate 20 Pro, so if the US weren't nuking Huawei worldwide right now, the Kirin would certainly push ahead, which I hope it will do, since it seems more competitive price-wise. Huawei bypassed the power issue with larger batteries, but to be honest, the Kirin doesn't seem to be that hungry anyway. For me, the 855 is a letdown; I was hoping for more. But it seems my Mate 20 Pro will be relevant for longer than I thought, so it's not too bad news, I guess. Thank you, Andrei, for the in-depth review!
I really doubt we’ll see battery life improve much with this generation. Hint - 5G. Maybe that’s why 855 focuses on overall efficiency, and the GPU gains are modest. Let’s hope I’m wrong.
Yeah, that's the big wrench in the works... Hopefully there are at least *some* flagships without 5G! Though I doubt I'll be looking for an upgrade from my Pixel 3 this year or next.
Wow that was really impressive, said no one. Apple is like 2 to 3 years ahead and QC isn't gaining any ground. 5G is a big nothingburger. If my Note4 ever burns out I'mma get a fruit-phone next time. Unrivaled performance, better security, LTS for the OS, etc. Rather have an equivalent Pixel but Google isn't even trying from the look of it.
@Andrei: Thanks, always appreciate the level of detail in your reviews. Question: QC's concept of 1xbigFast + 3 bigNotsofast + 4 little cores might be especially suited to execute, for example, JavaScript during web browsing, as JS is single-threaded. Did you observe that to be the case?
To be more specific: I am also a bit underwhelmed by the 855's performance in the web-browsing benchmarks. How much weight does JavaScript performance have in those benchmarks? If JS performance is a significant component of the web benchmarks, the single fast big-core layout would indeed be a bit of a dud.
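As background to the question: the way a single-threaded workload ends up on the one fast core is ordinary thread placement. A minimal Linux-only sketch, assuming a hypothetical CPU index for the prime core (the real mapping is SoC- and kernel-specific):

```cpp
// Linux-only sketch: pin the current thread to one CPU. Build with:
//   g++ -O2 -pthread pin.cpp
#include <pthread.h>
#include <sched.h>
#include <cstdio>

bool pin_to_cpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set) == 0;
}

int main() {
    // Hypothetical index: on many 1+3+4 parts the prime core is the
    // highest-numbered CPU, but that is an assumption, not a rule.
    const int kPrimeCpu = 7;
    if (pin_to_cpu(kPrimeCpu))
        printf("single-threaded work now runs on CPU %d\n", kPrimeCpu);
    else
        printf("affinity request failed (fewer than %d CPUs?)\n", kPrimeCpu + 1);
}
```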
What were the 845 benchmarks run with? Because the Pixel 3/XL has its A75 core clocked at 2.5GHz, while the Galaxy S9's is clocked at 2.7GHz and the LG G7's A75 runs at the stock 2.8GHz. Not a huge difference admittedly, but it definitely skews results slightly if you're comparing the maximum clock for the 855 versus the lower clocked version of the 845.
Another year where we'll see Apple's SoCs slaughtering Android SoCs in raw performance. While I'm glad for the efficiency gains, that's far from enough if Qualcomm wants to remain competitive. Failing to meet the performance of a SoC that Apple released a year and a half ago, and sometimes even failing to meet A10 performance, just isn't enough for a 7nm SoC.
I'm guessing part of it is due to how much of a monopoly they are. When it comes to Android, Qualcomm dominates the market. A few like Samsung, Huawei etc. make their own SoCs, but that's about it. MediaTek is the only other one I can think of, but it's rare, at least in the US, to see them in anything but mid-to-low-range phones, and even then it's still usually Qualcomm you'll get in that market range.
From what I've read/seen, one of the things that makes Apple's SoCs so fast is the large amount of cache they use: the A12 has 8 MB of L3 cache, the 855 has 2 MB. Having so much cache is quite expensive and makes the die larger, which of course increases costs as well. Maybe Qualcomm just isn't getting much demand from OEMs to build such an expensive chip. If they made it and no OEM wanted to pay the price for it, they would be stuck holding the bag.
Apple doesn't have an integrated modem so it has been choosing to use that extra space for beefy, 7-wide, die space hungry CPUs and caches. Although they bench well and have world leading efficiency, those CPUs draw too much current for the small batteries that come with an iPhone and lead to the controversial and unannounced throttling they decided to enact through iOS updates to preserve battery longevity. An on die modem also saves consumers money in the end from lower packaging cost and simpler PCB layouts.
This SoC is VERY disappointing. Won't be upgrading this year to any phone that has this.
Hopefully Samsung's new SoC for 2019 is much, MUCH better than the utter joke they had in the Galaxy S9... That was a fucking disaster... I'm hoping it actually performs well in REAL WORLD cases this time, not just in mostly meaningless benchmarks, but I doubt it. Samsung reminds me of when desktop PC GPU makers would cheat in benchmarks by using driver hacks.
It looks like Qualcomm is having an Intel-like moment where performance stops having meaningful increases year-on-year. For me, an SD835 device is more than fast enough and an 855 only makes sense for higher-performing larger devices like Windows tablets.
Time to focus on the software then... my old SD650 device got a new lease on life with a clean build of Lineage on Android Pie. The same slowdown that hit the laptop/PC markets could cause a crash in the smartphone market as people keep their phones for 3-4 years instead of upgrading annually.
I doubt it. Apple seems to be the only one with a cracking chip design team. Huawei seems to be doing decent work with HiSilicon but ARM designs are still far behind.
Does it matter though? Apple's latest chips are so overpowered for their phones and tablets, the performance numbers are more for bragging rights than anything else. That could change once they start using A-series chips in their laptops.
Disclaimer: I have been using a Galaxy S8 (SD835) and I don't feel the need to upgrade anytime soon. However: a) Having more power is always nice, and there are times when the phone... stutters, especially when switching away from "heavy" games. b) The gap with the iPhones is embarrassing, and I don't see any technical justification. Yes, the iPhones are optimized all around (HW+SW) thanks to their closed ecosystem, but a nearly 2x gap on the GPU is a deliberate choice which should be, at the very least, justified.
We need competition in mobile CPUs/GPUs, or Qualcomm will quickly become the Intel of mobile, sitting on its a** until something better comes along.
Dunno why Andrei is so kind in his wording about the new Snapdragon. For a start, it's not that much more powerful than the Kirin 980 (while it was shaped up to literally obliterate it in both CPU and GPU, especially the GPU). Also, even the A11 makes a joke of it, and the A12's sustained GPU performance is higher than the SD855's peak... and the CPU goes to the A12, and even the A11, as well.
I would kinda understand the SD855 lagging behind the A12 by 15% in CPU/GPU, and would somewhat accept losing to the A12 by even more, but losing to the A11 is roflmao funny. How in the world can you call it a success when Qualcomm (the leader in that regard) can't catch up with Apple for years? Recently even the GPU started to lag behind BIG time; before, at least that was on par. Well.. cool.
Does this A76 CPU support the pointer authentication codes that Apple have had since iPhone X? It seems like this is a potentially very useful security feature that it would be nice to see on Android devices.
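For context, on AArch64 toolchains return-address signing with PAC is already exposed as a compiler flag; a hedged sketch (the flag is real in GCC/Clang, but whether a given Android device's silicon and kernel make use of it is another matter):

```cpp
// Sketch of what PAC looks like from the toolchain side. Build for an
// ARMv8.3+ target with return-address signing, e.g.:
//   clang++ --target=aarch64-linux-android -mbranch-protection=pac-ret pac.cpp
// On cores without PAC the inserted paciasp/autiasp instructions land in
// the NOP-compatible HINT space, so the binary still runs.
#include <cstdio>

// Any function with a stack frame gets its return address signed in the
// prologue and authenticated in the epilogue; a corrupted return address
// then faults instead of being followed.
void frame_with_saved_lr(const char* msg) {
    char buf[64];
    snprintf(buf, sizeof(buf), "%s", msg);
    printf("%s\n", buf);
}

int main() { frame_with_saved_lr("hello, pointer authentication"); }
```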
cknobman - Tuesday, January 15, 2019 - link
So better power consumption, but performance-wise it looks like a swing and a miss. Nothing too meaningful over the 845.
IGTrading - Tuesday, January 15, 2019 - link
To be honest, this is good enough for me and most of us. I'd be happy to see Qualcomm focusing more on server CPUs and computers/notebooks running Windows on ARM chips.
It's been something like 15 or even 20 years since coders/developers stopped worrying about optimizations and performance improvements; now they just rely on much-improved hardware being available year after year.
We were building optimized web pages 20 years ago that looked good and loaded in less than 10 seconds on a 5.6 KB/s connection.
Now idiots build sites whose home page weighs 300 MB and complain about mobile CPUs and mobile networks not being fast enough.
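A back-of-the-envelope sketch of that point; the page sizes and the LTE rate here are illustrative assumptions, not measurements:

```cpp
#include <cstdio>

// Naive transfer-time estimate: size / bandwidth, ignoring latency,
// TCP slow start, parallel connections and compression.
double seconds(double size_mb, double link_mbit_per_s) {
    return size_mb * 8.0 / link_mbit_per_s;
}

int main() {
    // A 56k modem is ~0.056 Mbit/s; the 50 Mbit/s LTE figure is an
    // illustrative assumption, as is the 0.05 MB page of 20 years ago.
    printf("0.05 MB page over 56k modem: %6.1f s\n", seconds(0.05, 0.056));
    printf("300  MB page over 50M LTE  : %6.1f s\n", seconds(300.0, 50.0));
}
```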
bji - Tuesday, January 15, 2019 - link
"t's been something like 15 or even 20 years since coders/developers stopped worrying about optimizations, performance improvements and now they only rely on the much improvement hardware being available year after year."Speaking as a software developer, I will say that your statement is bullshit. I have yet to work on any product where performance wasn't considered and efforts to improve efficiency and performance weren't made.
bji - Tuesday, January 15, 2019 - link
Also, everything your browser does now is 10,000 times more complicated than anything browsers did 20 years ago. All of the effort that has gone into developing these technologies didn't go to waste. You are just making false equivalencies.
And if a page took 5 seconds to load in 2019, let alone 10 seconds, you'd be screaming about how terrible the experience is.
name99 - Tuesday, January 15, 2019 - link
It's usually the case that people talking confidently about what computers were like 20 years ago (especially how they were faster than today...) are in the age range from not yet born to barely five years old at the relevant time.
Those of us who lived through those years (and better yet, who still own machines from those years) have a rather more realistic view.
rrinker - Wednesday, January 16, 2019 - link
Really? What's the 'realistic' view? For background, the first computer I had regular access to was a TRS-80 Model 1 when they first came out in 1977, so I've been doing this a LONG time. Software today is a bloated mess. It's not all the programmers' fault, though; there is this pressing need for more and more features in each new version - features that you're lucky if 1% of the users actually even utilize. Web pages now auto-start videos on load and also link a dozen ads from sites with questionable response times. That would have been unthinkable in the days of 56k and slower dialup, and it just wasn't done. I even optimized my BBS in college - on campus we had (for the time) blazing fast 19.2k connections between all dorm rooms and the computing center, at a time when most people were lucky to have a 1200 bps modem, and the really lucky ones had the new 2400s. So I set up my animated ANSI graphic signons in a way that on-campus users at 19.2k would get the full experience and off-campus users, connecting via the bank of 1200 baud modems we had, would get a simple plain-text login. In today's world, there is a much greater speed disparity in internet connections. I have no problem with pretty much any site - but I have over 250 Mbps download on my home connection. Go visit family across the state - the best they can get is a DSL connection that nets about 500k on a good day on a speed test - and so many sites fail to load, or only ever partially load. But there are plenty of sites that don't try to force graphics and videos down your throat that still work fine.
No, things weren't faster back in the day - but because the resources were more limited, both for apps running on the local computer in terms of RAM, storage, and video performance as well as external connectivity, programs had to be more efficient. Heck, the first computer I actually owned had a whole 256 bytes of RAM - to do anything I had to be VERY efficient.
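The BBS trick described above boils down to a one-line piece of content negotiation on line speed; a toy sketch:

```cpp
#include <cstdio>

enum class Login { FullAnsiAnimation, PlainText };

// The comment's BBS trick reduced to its core: pick the login screen
// from the measured (or negotiated) line speed. 19.2k on-campus users
// get the animated ANSI art; 1200 baud dial-up gets plain text.
Login pick_login(int baud) {
    return baud >= 19200 ? Login::FullAnsiAnimation : Login::PlainText;
}

int main() {
    for (int baud : {19200, 2400, 1200})
        printf("%5d baud -> %s\n", baud,
               pick_login(baud) == Login::FullAnsiAnimation ? "ANSI art"
                                                            : "plain text");
}
```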
Klinky1984 - Friday, January 18, 2019 - link
So pay-per-minute slow internet, the non-standards compliance of Netscape 2.0 and IE 3.0, and an internet without any video streaming: those were the "good ol' days"? Sorry, but I remember bloated pages that took a minute plus to download or never loaded. I remember waiting 3 minutes for one single high-res JPEG to download... They were not glory days. Can your 256-byte computer even handle Unicode? No way.
seamadan - Tuesday, January 22, 2019 - link
I bet your pages looked REALLY good. Like REALLY REALLY good. I'm in awe and I haven't even seen them.
Krysto - Tuesday, January 15, 2019 - link
That boat has sailed. They've already given all the server IP on a silver platter to their forced Chinese "partner".
That said, the Snapdragon 8cx for notebooks does look quite intriguing, mainly because of its 10MB shared cache.
porcupineLTD - Tuesday, January 15, 2019 - link
Do you have a link? All I remember is that they disbanded the custom server architecture team.
Krysto - Tuesday, January 15, 2019 - link
Look for The Register's "Qualcomm axes staff" post. It has other relevant links in it.
tuxRoller - Wednesday, January 16, 2019 - link
A 51% & 61% increase in performance over the previous generation isn't enough?NICOXIS - Wednesday, January 16, 2019 - link
Not if it doesn't translate into real-world performance.
iwod - Tuesday, January 15, 2019 - link
Purely in terms of CPU and GPU performance, I wouldn't be surprised if this year Apple positioned the A11 as entry-level, the A12 as mid-range and the A13 as top-end. It would still have won.
But damn, that X24 modem. I can only wish the 7660 from Intel were a much-improved version of the current 75xx.
Ironchef3500 - Tuesday, January 15, 2019 - link
Damn, are we ever going to catch Apple? Like ever...
Oyeve - Tuesday, January 15, 2019 - link
Why? These are benchmarks, not real world. Apple can have the fastest whatever, but at the end of the day it's still an iPhone.
melgross - Tuesday, January 15, 2019 - link
And that's a very good thing.
tipoo - Tuesday, January 15, 2019 - link
What seems like overkill at first also gets them 5 years of OS updates.
id4andrei - Tuesday, January 15, 2019 - link
And degraded batteries.
tipoo - Wednesday, January 16, 2019 - link
That would be physics.
id4andrei - Wednesday, January 16, 2019 - link
Indeed. When you pair a small battery with a powerful SoC, you get degraded batteries before the two-year warranty (Europe) is up. Problem is, Apple's own store diagnostics validated batteries from throttled iPhones.
melgross - Wednesday, January 16, 2019 - link
Better batteries than Android phones.
id4andrei - Wednesday, January 16, 2019 - link
Millions of iPhone users walked around, and some keep walking around, with throttled iPhones. Let that sink in.
Trackster11230 - Thursday, January 17, 2019 - link
Based on the context of this thread, having a throttled SoC that's still well ahead of the competition would still provide a great experience. You dug yourself into a hole with that comment.
Sailor23M - Thursday, January 24, 2019 - link
"The" iPhone :-)
Solandri - Friday, February 15, 2019 - link
What's interesting to me is that its performance per Watt seems to be pretty close to the A12. Which more or less confirms what I've suspected for a while - that a large part of the superior performance of the A12 is simply due to Apple getting priority at the newer 7nm fabs. While the A12 vs 845 comparison made chronological sense (in that both were available in phones at the same time), the A12 was 7nm while the 845 was 10nm.
jonrevis1985 - Sunday, February 17, 2019 - link
So true, Oyeve. You could put a turbocharged V8 in a Ford Pinto, but it will only ever be a Pinto.
goatfajitas - Tuesday, January 15, 2019 - link
Catch Apple? No. A lot of the performance is tight integration, not actual CPU speed. If Apple had used a Qualcomm Snapdragon and Android OEMs had the A12, Apple would still be faster at benchmarks. Take it all with a grain or 10 of salt.
Ironchef3500 - Tuesday, January 15, 2019 - link
Very true.
tipoo - Tuesday, January 15, 2019 - link
Untrue. Apple's cores are wider, deeper, more OoO than anything else in mobile, and use massive caches at that. You have it reversed: if Android could use the A12, it would post impressive benchmarks; it's the hardware design.
Low-level benchmarks are meant to remove the OS from the equation. Proof is in the pudding.
goatfajitas - Tuesday, January 15, 2019 - link
The A12 is a great CPU, but it's not magic. It's all ARM. The difference is in the implementation and the control that Apple has with integration. Whatever though, both ways have benefits and downsides. I am just saying that people who think it's all about this CPU that is somehow years ahead of everyone else are mistaken as to the reality of the situation. Suffice it to say, it's all fast.
axius81 - Tuesday, January 15, 2019 - link
This just doesn't make sense. "It's all ARM." Yeah, sure, and one company's implementation of that instruction set can absolutely be superior.
That's like saying "It's all x86 / x86-64" when we're comparing AMD and Intel. One can *absolutely* be faster than the other at implementing that instruction set - and in practice, is.
Apple makes amazing ARM chips, irrespective of iOS.
goatfajitas - Tuesday, January 15, 2019 - link
They are great chips; I am just saying they are not (hardware-wise) way beyond what the competition is doing. A lot of that performance is OS: tight integration with apps, drivers, APIs etc., as it's all controlled by one company. That isn't a bad thing; that is a good thing for Apple customers.
techconc - Tuesday, January 15, 2019 - link
Actually, Apple is significantly ahead of what the competition is doing with ARM-based chips. This can be objectively measured.
tipoo - Wednesday, January 16, 2019 - link
What do you call their massive cache and issue-width advantage if not being hardware-wise beyond the competition? It's not magic, but Apple is clearly spending more on die area than Qualcomm is.
bji - Tuesday, January 15, 2019 - link
Yeah, I don't think you know what you're talking about. I think you read somewhere that some of Apple's performance/stability superiority over Android comes from Apple controlling the whole stack, and you've generalized that into places where the statement just isn't true.
techconc - Tuesday, January 15, 2019 - link
You seem to conflate the ARM instruction set with the actual design of the chip. You then play off Apple's obvious advantages as some sort of magic... err.. "integration" as you call it. That's nonsense. You might be able to claim that for a specific application, but not for generic benchmarks.
tipoo - Wednesday, January 16, 2019 - link
I didn't say it was magic. I said it's not entirely down to some ambiguous "optimization" with the OS. The cores themselves are physically impressive regardless of OS. It's when people play it off as some pie-in-the-sky optimization advantage that they're claiming magic; you can't make a 3-wide Braswell core fly just with vertical integration.
"It's all ARM."
This shows me you may have missed a crucial step: Apple only licenses the ARM instruction set, but otherwise they design the whole very wide, deep, very OoO core themselves.
SirMaster - Sunday, January 20, 2019 - link
Very untrue. Are you saying that somehow the iOS vs. Android OS integration with the CPU has an effect on Geekbench benchmarks?
That makes no sense. Geekbench is a purely mathematical benchmark, and the OS should have no effect on its results other than how it power-manages the SoC during the benchmark.
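That is easy to illustrate: a minimal sketch of the kind of pure-ALU loop such benchmarks are built from, which makes no OS calls between its two timestamps:

```cpp
#include <chrono>
#include <cstdint>
#include <cstdio>

int main() {
    // Pure ALU work (xorshift64): no syscalls, allocation or I/O between
    // the two clock reads, so beyond scheduling and DVFS the OS has
    // little opportunity to influence the measured result.
    uint64_t x = 88172645463325252ULL;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < 100000000; ++i) {
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
    }
    auto t1 = std::chrono::steady_clock::now();
    std::chrono::duration<double> dt = t1 - t0;
    // Printing x keeps the loop from being optimized away.
    printf("100M rounds: %.3f s (x=%llu)\n", dt.count(), (unsigned long long)x);
}
```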
joms_us - Tuesday, January 15, 2019 - link
Take a look at the last 2-3 years of speed tests (real-world apps, productivity and games): the iPhone, with its fastest-on-imaginary-paper SoC, lags behind the competition.
29a - Tuesday, January 15, 2019 - link
I'd be bitter too if I had to use an Android phone.
cwolf78 - Tuesday, January 15, 2019 - link
Ditching my iPhone for a Pixel was a smart move. The OS is light-years ahead.
techconc - Tuesday, January 15, 2019 - link
Examples of your claim? Android devices can't even do something like Fortnite at 60fps, and you claim Android devices are faster? Pretty funny.
dudedud - Tuesday, January 15, 2019 - link
He meant those "speed tests" that consist of opening apps with your finger trying to match the touch... you know, measuring the storage and RAM and not so much the SoC's capabilities...
Also, iOS and Android diverge in animation speed, sometimes in assets, and most of the time in longer splash-screen logos (or nonexistent ones on Android).
A total waste of time.
joms_us - Wednesday, January 16, 2019 - link
As if you can use the SoC alone; you know computers don't run on a processor alone, so yeah, in actual usage the A12 inside the iPhone is just as fast as, if not slower than, the competition.
joms_us - Wednesday, January 16, 2019 - link
It's a game-related issue (I think there is an FPS limiter set to 30fps in the Android version of Fortnite), and most games are optimized for iOS. FWIW, a phone with a mediocre resolution will have faster FPS in games.
PeachNCream - Tuesday, January 15, 2019 - link
Ironchef, comments like that are just going to trigger a bunch of brand-loyalty arguments where people banter over insignificant points and ultimately change no one's opinion about which piece of consumer electronics gets them feeling warm and gooey inside.
techconc - Tuesday, January 15, 2019 - link
The short answer is no, it's unlikely anyone will catch Apple on ARM. ARM reference designs are several generations behind Apple, and when companies like Samsung attempt their own design, as they have with recent Exynos chips, it's clear that not only can they not match performance, but efficiency is horrible by comparison. Apple has a world-class team that is likely the best design team on the planet right now.
Midwayman - Wednesday, January 16, 2019 - link
People would have said that about AMD not long ago... Just saying.
29a - Wednesday, January 16, 2019 - link
They used the same engineer, Jim Keller. He works for Intel now.
Midwayman - Wednesday, January 16, 2019 - link
Eventually, sure. Apple will stall out on process-related stuff eventually and they'll have a chance to catch up. Unlikely until then, as they're still making big gains too and have a 2-3 year lead.
jjj - Tuesday, January 15, 2019 - link
We need a bit more on the GPU side in the next few years for foldables. Pixel counts will increase, SoC power needs to decrease (more power and mechanical volume go towards the display), and mobile gaming should gain in popularity with 2x+ larger displays.
levizx - Tuesday, January 15, 2019 - link
We can establish that single-core performance/power is good, but what about multi-core? Wouldn't the other 3 big cores be running at the highest voltage while potentially running at ~2GHz in real-world workloads?
Andrei Frumusanu - Tuesday, January 15, 2019 - link
Correct. We'll have to see how efficiency performs once we get commercial devices.
Chaser - Tuesday, January 15, 2019 - link
We read about all this when the 845 was about to launch a year ago. I didn't see some monumental improvement in responsiveness or efficiency despite all these whitepapers stating so. Unless you are some kind of smartphone gaming fanatic, real-world use differences between each year look great mostly on paper.
SquarePeg - Tuesday, January 15, 2019 - link
Performance has been good enough since 2013 with the release of the SD 800. Every year we get a performance bump that just gets offset by feature bloat that doesn't really improve performance outside of benchmarks. I can pull out my old LG G2 running an Android 4.4.2 custom ROM/kernel and that thing just flies compared to any phone from the past year.
A5 - Tuesday, January 15, 2019 - link
I promise you it won't. The SD 800 will feel terrible.
yeeeeman - Wednesday, January 16, 2019 - link
I have a Z3 Compact, which is an SD801. I recently bought a second-hand Galaxy S7 to replace the Z3. I can safely say that the Exynos 8890 is noticeably faster in opening apps, playing intensive games and generally in multitasking. The Z3 usually lags when the phone is started for the first time and many apps sync. The Galaxy S7 is buttery smooth. So yeah, I think we can feel the progress in performance of these chips, but maybe at a later point, when apps get to the limit of their computing power. Then you actually see that a newer chipset is noticeably faster.
But nevertheless, the Z3 Compact with the SD801 is still a great, fast phone. It runs a bit slower than the Exynos, as I said, but in general it is not slow at all on Android 6.0. So yeah, a chipset like it could easily be used today if you don't mind a bit of slowdown here and there.
Spunjji - Wednesday, January 16, 2019 - link
Sorry, but that's just not true. I have yet to use a phone that feels consistently faster than the OnePlus 6 I'm currently using as a daily driver, and I've done a whole bunch of messing with custom ROMs / kernels, starting back with Cyanogenmod 6 on a Dell Streak.
gijames1225 - Tuesday, January 15, 2019 - link
Sounds very positive given that phones already perform great at the flagship level. The single-core improvement is greatly welcomed given how much that matters for JavaScript.
fred666 - Tuesday, January 15, 2019 - link
I like their performance-over-time graph on page 1. It shows the 855 to be faster than the 845, which is faster than the 835, which is slower than the 820. What? Their performance dropped in that generation?
yeeeeman - Wednesday, January 16, 2019 - link
Yes. In floating point, the SD820, based on their own custom cores (an evolution of the Krait cores called Kryo), was much better than everything, including the next-gen SD835, which used IP from ARM, the Cortex-A72.
fred666 - Wednesday, January 16, 2019 - link
So it pretty much means their graph is worthless. Floating point should not be the primary indicator of performance; integer is much more relevant to the most popular use cases.
Spunjji - Wednesday, January 16, 2019 - link
He didn't say the graph shows FP performance; he just mentioned that the 820 was unusually strong in that area. My guess is it's a representation of overall performance based on some standard benchmark or other. That doesn't make it "worthless", because it's literally only there to show a rough comparison between historical chipsets.
cpkennit83 - Thursday, January 17, 2019 - link
Actually, it was the A73. The A72 is actually stronger in FP but slower in integer workloads.
stennan - Tuesday, January 15, 2019 - link
Please do a podcast soon. There has been so much going on with PC CPUs/GPUs, and now incoming mobile CPUs, that I miss having the AnandTech deep dive!
melgross - Tuesday, January 15, 2019 - link
Well, it's all very interesting, but still the elephant in the room is Apple's A series, no matter what. Take that out, and the 855 and 980 are excellent chips, but with it in, they are just mediocre.
cpkennit83 - Tuesday, January 15, 2019 - link
They are excellent chips no matter what. A12 big cores are twice as large as A76 cores, or more.
No Android OEM is willing to pay a big premium for their flagship SoCs, so the Qualcomms and Huaweis of the world don't pressure ARM to spend the big $$ needed to fund the development of truly wide cores. The only one who seems interested in going big is Samsung, but they can't get their act together.
Still, performance is more than adequate in the A76 flagship SoCs, and efficiency is slightly better than the A12's, so for me this generation is the best in the Android space since the SD800.
goatfajitas - Tuesday, January 15, 2019 - link
What makes the Ax series so fast is the tight OS integration. It's a good chip, but not years ahead hardware-wise. What makes the whole thing so fast is the OS and how it's implemented. Either way, good for Apple, but it's more SW than HW.
bji - Tuesday, January 15, 2019 - link
You tried to make this point before and failed. Give it up maybe?
goatfajitas - Tuesday, January 15, 2019 - link
You may have failed to grasp it, but that is on you.
Graag - Tuesday, January 15, 2019 - link
No, it's just blatantly wrong.
tuxRoller - Wednesday, January 16, 2019 - link
Proof?
sean8102 - Wednesday, January 16, 2019 - link
I don't buy that either. It's pretty well known Apple has some damn good chip designers in house. I'm no expert, but one of the biggest things that stands out to me when comparing Apple's designs is how much cache they use. The A12 has 128 KB instruction and 128 KB data L1 caches and 8 MB of L2 cache. It seems the 855 has basically ~2 MB of L2 cache (divided among the clusters) and 2 MB of L3 cache. I haven't seen an Android-available SoC that comes close to the amount of cache Apple puts on its SoCs, which from what I understand is quite expensive to do and results in a larger die size, but gives large performance benefits. Of course that's only one example of something they do differently; considering that with a 2 high-power plus 4 low-power core setup they are still so far ahead, they must be making significant changes compared to the reference design they get from ARM.
Their hardware team deserves serious credit for staying so far ahead for so long.
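To illustrate why cache capacity shows up directly in general-purpose code, here is a classic pointer-chase sketch (the working-set sizes are chosen arbitrarily): average time per hop jumps each time the working set spills out of a cache level.

```cpp
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

// Build a random single-cycle permutation (Sattolo's algorithm) so the
// chase visits every element before repeating.
std::vector<uint32_t> make_cycle(size_t n) {
    std::vector<uint32_t> p(n);
    std::iota(p.begin(), p.end(), 0u);
    std::mt19937 rng{42};
    for (size_t k = n - 1; k > 0; --k)
        std::swap(p[k], p[std::uniform_int_distribution<size_t>(0, k - 1)(rng)]);
    return p;
}

int main() {
    // Dependent loads: ns/hop rises once the working set spills out of a
    // cache level -- which is where 8 MB vs ~2 MB of last-level cache shows.
    for (size_t kb : {64, 512, 2048, 8192, 32768}) {
        auto next = make_cycle(kb * 1024 / sizeof(uint32_t));
        uint32_t i = 0;
        auto t0 = std::chrono::steady_clock::now();
        for (int h = 0; h < 10000000; ++h) i = next[i];
        auto t1 = std::chrono::steady_clock::now();
        double ns = std::chrono::duration<double, std::nano>(t1 - t0).count() / 1e7;
        printf("%6zu KB working set: %.2f ns/hop (end=%u)\n", kb, ns, i);
    }
}
```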
HStewart - Tuesday, January 15, 2019 - link
One big question I have always had with ARM-based devices is performance: how do they compare with the x86 platform, power aside? This can be difficult to truly represent, especially with design differences in OS and applications.
As for applications, a good example is running AutoCAD - can even the latest iPad Pro truly have the performance of, say, the latest quad- or six-core x86-based CPU and a high-end mobile GPU? I know Apple has an iPad Pro version of Photoshop, but this is based on Photoshop CS, and I personally like the earlier series - I own CS 5.0.
I think on ARM we are a long way from having full versions of AutoCAD, SolidWorks, LightWave 3D, 3ds Max and other high-end professional applications.
cpkennit83 - Tuesday, January 15, 2019 - link
A12/A12X devices compare very favorably with U-series Intel chips, and smack Y-series chips. The lack of software is not due to lack of power, but to perceived demand.
goatfajitas - Tuesday, January 15, 2019 - link
"A12/A12X devices compare very favorably with U series Intel chips" on selective tasks. It's a long way off from it in raw power.Wilco1 - Tuesday, January 15, 2019 - link
Benchmarks clearly show performance is about the same. In fact, it looks like the A12X is well ahead in terms of raw power, for example by 30% on compilation (the LLVM test).
Rudde - Friday, January 18, 2019 - link
Don't cherry-pick results.
Rudde - Friday, January 18, 2019 - link
What is raw performance? I could calculate some fused multiplies per second for you, but is that 'raw performance'?
HStewart - Tuesday, January 15, 2019 - link
I differ with a lot on this - I think A12/A12X and other ARM related device actually perceive faster because of marketing. Also with App architexture of the OS running on such devices hide the actual performance of chips - I think it specifically depends on what you using the device. For normal word processing, emails and internet - it can easy be shown that same as U series - and this depends on which model - likely dual core x86 and possibly even AMD notebooks - but not a 4+ ghz laptop like my Dell XPS 15 2in1. Keep in mind on a phone and even android tablet or iPad there is less screen to drive. I am talking about real professional software and not appsOne thing is interesting about 855 design is big core designed - running the primary core at higher speed then other 3 primary core - is smart - this means the primary thread is running at higher speed. I assume that smaller cores would be use threads for background tasks. Intel has a similar designed large single core and 3 minor atom based cores - I would think that device is closer on performance to A12 based devices not the U series.
My big question is that I think it's hard to actually compare performance between any x86-based and any ARM-based system. It depends on the design of the OS and the applications running on the device. I would be 100% sure that any software using AVX-512 would blow away any ARM-based application with similar capabilities. In fact, even against an AVX2-based machine, an AVX-512 application would show a big difference.
All I am saying is that performance depends on the application running, not just web browsing and other things.
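For what it's worth, here's a minimal sketch of the kind of AVX-512 code being described - a SAXPY loop that processes 16 floats per fused multiply-add (this assumes an AVX-512F-capable CPU and a compiler flag like gcc -mavx512f; by comparison, 128-bit NEON on ARM handles 4 floats per vector, which is the width gap being claimed):

```c
#include <stddef.h>
#include <stdio.h>
#include <immintrin.h>

/* y[i] = a*x[i] + y[i], 16 single-precision lanes per iteration. */
static void saxpy_avx512(float a, const float *x, float *y, size_t n) {
    __m512 va = _mm512_set1_ps(a);
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m512 vx = _mm512_loadu_ps(x + i);
        __m512 vy = _mm512_loadu_ps(y + i);
        vy = _mm512_fmadd_ps(va, vx, vy);   /* one FMA covers 16 floats */
        _mm512_storeu_ps(y + i, vy);
    }
    for (; i < n; i++)                       /* scalar tail */
        y[i] = a * x[i] + y[i];
}

int main(void) {
    float x[32], y[32];
    for (int i = 0; i < 32; i++) { x[i] = (float)i; y[i] = 1.0f; }
    saxpy_avx512(2.0f, x, y, 32);
    printf("y[31] = %.1f\n", y[31]);   /* 2*31 + 1 = 63.0 */
    return 0;
}
```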
HStewart - Tuesday, January 15, 2019 - link
One thing also - the speed of the CPU, the number of cores, or even the process node does not determine the performance of a device - it's how it is used, and the architecture inside, that makes the difference.
Wilco1 - Tuesday, January 15, 2019 - link
You're absolutely wrong, the A12X can keep up with your beloved laptop - this is the latest and fastest variant: http://browser.geekbench.com/v4/cpu/compare/109702...
The single-threaded integer score is within 2.5%. Mind you, that's a 10W SoC compared with a 65W CPU! I'm awaiting your list of excuses for how that is possible...
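Taking those quoted figures at face value - and they are rough, since TDP is a package rating rather than single-thread draw - the back-of-envelope efficiency gap works out to:

```latex
\frac{\text{perf}_{\text{A12X}}/10\,\text{W}}{\text{perf}_{\text{x86}}/65\,\text{W}}
\approx \frac{0.975/10}{1/65} \approx 6.3
```

That is, roughly 6x the performance per watt on this single comparison, under those (admittedly generous) assumptions.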
goatfajitas - Tuesday, January 15, 2019 - link
One test on one specific thing. Try hundreds.
Wilco1 - Tuesday, January 15, 2019 - link
No, it's not one test, nor one specific thing. Like most benchmarks there are many different tests and the average is reported.
goatfajitas - Tuesday, January 15, 2019 - link
My point is that in a world of benchmarks, you are looking at it very myopically. ARM isn't anywhere near as fast as x86 in raw power. Very good and super efficient at a lot of multimedia tasks, though.
Wilco1 - Tuesday, January 15, 2019 - link
What benchmarks??? Name another cross-platform benchmark which is NOT a useless browser test. Apart from SPEC, Geekbench is one of the very few benchmarks that allow reasonable cross-platform comparisons.
goatfajitas - Tuesday, January 15, 2019 - link
"cross platform" benchmarks are virtually useless. Your grasp of benchmarking in general needs work. It's not apples to apples.Wilco1 - Wednesday, January 16, 2019 - link
That's rubbish. Both Geekbench and SPEC are good cross-platform benchmarks as long as you use the same compiler and options.
TheinsanegamerN - Tuesday, January 22, 2019 - link
Which you, inevitably, do NOT do when comparing an ARM and an x86 platform.
SquarePeg - Tuesday, January 15, 2019 - link
Geekbench is rubbish. There's a reason why Apple blocks benchmarking apps with only a very few exceptions that show them in the best light. They go so far as to even block games that have benchmarking utilities built into them. Apple flat out goes out of its way to obscure the real-world performance of its chips. Until Apple stops acting borderline fraudulent about performance numbers, I am calling BS.
goatfajitas - Tuesday, January 15, 2019 - link
"Geekbench is rubbish. There's a reason why Apple blocks benchmarking apps with only a very few exceptions that show them in the best light."Exactly.
Wilco1 - Tuesday, January 15, 2019 - link
Name a better CPU benchmark then. Just one.
TheinsanegamerN - Tuesday, January 22, 2019 - link
If ARM is so amazingly efficient, why is there not a push to use it in laptops and desktops?
Could it be because, outside of specifically recompiled apps, ARM is still nowhere close to x86 in real-world performance? Perhaps once Apple finally makes an ARM Mac we can find out, but until then the ability of ARM devices to excel at Geekbench is worthless, as they are tied to devices with incapable OSes and cannot run production software.
darkich - Thursday, January 24, 2019 - link
You lack perception. ARM is a small design firm that sells SoC designs, and gets its revenue from the licensing fees of over 1 billion smartphones sold every year.
The revenue they would get from laptops this way is simply not worth the extra investment.
ARM naturally chooses to focus all its resources where the money and the competition really are.
As for the custom SoC vendors such as Samsung and Huawei, the story is similar.
Staying at the top of the smartphone game is an absolute priority for them.
The margins and stakes are far bigger than what they can hope to get from laptops.
Also, all the most advanced semiconductor lines are initially reserved for smartphone chips, with desktop and laptop businesses standing in the waiting line.
techconc - Tuesday, January 15, 2019 - link
Actual benchmarks, including larger ones like SPEC, clearly demonstrate that you are wrong. Marketing has nothing to do with such results.
End-User - Wednesday, January 16, 2019 - link
Not even the 9900K has AVX-512.
darkich - Wednesday, January 16, 2019 - link
Man, you are CLUELESS. In 4K rendering, the iPad Pro DESTROYS a Core i7 laptop!!
https://www.laptopmag.com/reviews/laptops/new-ipad...
It really is high time you backward-looking desktop ignorants woke up.
genekellyjr - Wednesday, January 16, 2019 - link
You might be clueless too! There weren't any "4K rendering" benchmarks in that link - but there were 4K encoding benchmarks.
As for the encoding performance you are apparently referencing, it is definitely using fixed-function encoders - it's not the CPU performance that Geekbench tests measure (and I want to stress that cross-platform Geekbench isn't 1:1 scoring - you'll never find AnandTech comparing different CPU architectures with Geekbench, as it even uses fixed-function resources like AES in its crypto tests). The speeds the laptops show definitely point to a CPU encoder being used. A fixed-function encoder will barely touch the CPU, while CPU encoding will max those cores at 100%. CPU encoding is higher quality at the cost of heat and speed.
Recently Adobe updated Premiere to support Intel's fixed-function encoder (called Quick Sync) - read here http://www.dvxuser.com/V6/showthread.php?362263-Ad... post #8. Rush may not have gotten that update yet, or the benchmark site referenced didn't update their program https://www.laptopmag.com/reviews/laptops/new-ipad... - but I managed to find a benchmark for Quick Sync in Premiere https://forums.creativecow.net/thread/335/101459 - and Intel's Quick Sync fixed-function hardware is all relatively the same AFAIK, so the desktop CPU has less of an impact. It gives a 1:20 min 4K -> 1080p conversion at 91 sec with the CPU and 45 sec with fixed function; scale that up to 12 min (x9) and we get 13:39 with the CPU (it's a nice CPU, an i7-8700K), while the fixed-function encoder gets 6:45. It'll probably scale pretty linearly. So it's 6:45 vs 7:47 with fixed-function encoding - which isn't comparing CPUs at all at this point, but rather their fixed-function encoders!
So the iPad has some nice hardware, sure, but it's not outperforming the brand-new Intel-based MacBook Pro 13" by leaps and bounds. They'll probably be about the same speed with fixed-function encoding, and the MB Pro 13" will win in non-encoder workloads thanks to its higher TDP.
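Spelling out the scaling arithmetic from the comment above (times come from the linked posts; the x9 factor and linear scaling are assumptions):

```c
#include <stdio.h>

/* Scale the measured 1:20 (80 s) 4K -> 1080p export times by x9
   to approximate a 12-minute clip, as in the comment above. */
int main(void) {
    const int scale = 9;              /* 12 min / 1:20 = 720 s / 80 s = 9 */
    int cpu = 91 * scale;             /* software (CPU) encode, in seconds */
    int qsv = 45 * scale;             /* Quick Sync fixed-function, in seconds */
    printf("CPU encode: %d:%02d\n", cpu / 60, cpu % 60);   /* 13:39 */
    printf("Quick Sync: %d:%02d\n", qsv / 60, qsv % 60);   /*  6:45 */
    return 0;
}
```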
darkich - Friday, January 18, 2019 - link
Okay... So in short, the A12X is "about the same" in CPU performance as Intel's actively cooled, CPU-specific, and twice-as-power-hungry chip, while also having a 1+ TFLOPS GPU, a 4G modem, and an advanced ISP on the same die.
Overall, if that is what you call "nice", then what is Intel's hardware?
Trash.
Rudde - Friday, January 18, 2019 - link
Let's compare the Intel i7-8500Y and the Apple A12X. The i7-8500Y is a dual-core, 5W, 14nm notebook/tablet processor. The A12X is an octa-core 7nm tablet processor with unknown power usage. The 8500Y uses the x86-64 instruction set, while the A12X uses ARMv8. They have very few benchmarks in common, which introduces a notable amount of uncertainty.
Let's start with Geekbench 4.1/4.2 single-threaded:
The 8500Y scored 4885 and the A12X 4993. The A12X leads by 2%, which is within the margin of error.
Same benchmark, but multithreaded:
The 8500Y scored 8225 and the A12X 17866. The A12X demolishes the dual-core with 117% higher performance. This is clearly because the A12X's four-core big cluster doubles the core count of the dual-core 8500Y.
Next up we have Mozilla Kraken 1.1 showing browser performance:
8500Y scores 1822ms and A12X 609ms. The A12X took 67% less time to complete the task, which amounts to a 199% increase in performance.
Octane V2 is another browser performance benchmark:
8500Y scores 24567 and A12X 45080. A12X bests the Intel cpu by 83%.
3D Mark has two versions of Ice Storm Physics and unfortunately our processors use different versions. They use the same resolution however.
The 8500Y scores 25064 in standard physics and the A12X 39393 in unlimited physics. That gives the A12X a 57% lead.
It's hard to establish system performance with such a limited amount of benchmarks. Geekbench and 3DMark are synthetics and the two others show browser performance.
The processors are equal in ST, but the A12X's higher core count allows it to double the 8500Y's MT score. The A12X outpaces the 8500Y in 3DMark. The A12X is clearly superior in browser performance. Apple's A12 drops closer to the Intel chip in synthetics, but performs similarly to its larger sibling in web benchmarks.
Winner: A12X
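The percentage conventions used above, made explicit: for higher-is-better scores the gain is new/old - 1, and for lower-is-better times it's old/new - 1 (which is how "67% less time" becomes a "199% performance increase"). A quick sketch reproducing the numbers:

```c
#include <stdio.h>

/* Percent advantage from a higher-is-better score pair (a over b). */
static double from_scores(double a, double b) { return (a / b - 1.0) * 100.0; }

/* Percent advantage from a lower-is-better time pair (a over b). */
static double from_times(double a, double b) { return (b / a - 1.0) * 100.0; }

int main(void) {
    printf("Geekbench ST:   %+.1f%%\n", from_scores(4993, 4885));   /*   +2.2 */
    printf("Geekbench MT:   %+.1f%%\n", from_scores(17866, 8225));  /* +117.2 */
    printf("Kraken 1.1:     %+.1f%%\n", from_times(609, 1822));     /* +199.2 */
    printf("Octane V2:      %+.1f%%\n", from_scores(45080, 24567)); /*  +83.5 */
    printf("3DMark physics: %+.1f%%\n", from_scores(39393, 25064)); /*  +57.2 */
    return 0;
}
```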
Nemaca - Tuesday, January 15, 2019 - link
Overall, the 855 was thought to be head and shoulders above the Kirin, but it seems it will be on the same level at best.
I'm typing this on my already heavily used Mate 20 Pro, so if the US weren't nuking Huawei worldwide right now, the Kirin would certainly push ahead, which I hope it will do, since it seems more competitive price-wise.
Huawei bypassed the power issue with larger batteries, but to be honest, the Kirin doesn't seem to be that hungry anyway.
For me, the 855 is a letdown. I was hoping for more, but it seems my Mate 20 Pro will be relevant for longer than I thought, so not too bad news, I guess.
Thank you, Andrei, for the in-depth review!
Achtung_BG - Wednesday, January 16, 2019 - link
Snapdragon 855... https://youtu.be/mqFLXayD6e8
darkich - Tuesday, January 15, 2019 - link
This here proves once and for all that your system performance benchmarks are just bogus and irrelevant.
Are we seriously supposed to believe that Snapdragon actually made a lower performing chipset than their previous one?
BS
darkich - Tuesday, January 15, 2019 - link
*Qualcomm, not Snapdragon
Icehawk - Tuesday, January 15, 2019 - link
It's happened before in the chase for efficiency.
npp - Tuesday, January 15, 2019 - link
I really doubt we’ll see battery life improve much with this generation. Hint - 5G. Maybe that’s why the 855 focuses on overall efficiency, and the GPU gains are modest. Let’s hope I’m wrong.
Impulses - Tuesday, January 15, 2019 - link
Yeah, that's the big wrench in the works... Hopefully there are at least *some* flagships without 5G! Though I doubt I'll be looking for an upgrade from my Pixel 3 this year or next.
austonia - Tuesday, January 15, 2019 - link
Wow that was really impressive, said no one. Apple is like 2 to 3 years ahead and QC isn't gaining any ground. 5G is a big nothingburger. If my Note4 ever burns out I'mma get a fruit-phone next time. Unrivaled performance, better security, LTS for the OS, etc. Rather have an equivalent Pixel but Google isn't even trying from the look of it.
eastcoast_pete - Tuesday, January 15, 2019 - link
@Andrei: Thanks, always appreciate the level of detail in your reviews. Question: QC's concept of 1x bigFast + 3x bigNotsofast + 4 little cores might be especially suited to execute, for example, JavaScript during web browsing, as JS is single-threaded. Did you observe that to be the case?
eastcoast_pete - Tuesday, January 15, 2019 - link
To be more specific: I am also a bit underwhelmed by the 855's performance in the web-browsing benchmarks. How much weight does JavaScript performance have in those benchmarks? If JS performance is a significant component of the web benchmarks, the single-fast-big-core layout would indeed be a bit of a dud.
0iron - Tuesday, January 15, 2019 - link
Trivia: How many "here"s in this article, especially at the beginning of sentences? :)
Hung - Wednesday, January 16, 2019 - link
What were the 845 benchmarks run with? Because the Pixel 3/XL has its A75 core clocked at 2.5GHz, while the Galaxy S9's is clocked at 2.7GHz and the LG G7's A75 runs at the stock 2.8GHz. Not a huge difference admittedly, but it definitely skews results slightly if you're comparing the maximum clock for the 855 versus the lower clocked version of the 845.
Andrei Frumusanu - Wednesday, January 16, 2019 - link
Wrong, I don't know where you got the idea they're clocked differently. They're all 2.8.
coit - Wednesday, January 16, 2019 - link
To me the trouble is being stuck with Google.
Jredubr - Wednesday, January 16, 2019 - link
Another year that we'll see Apple's SoCs slaughtering Android SoCs in raw performance.
While I'm glad for the efficiency gains, that's far from enough if Qualcomm wants to remain competitive.
Failing to meet the performance of an SoC that Apple released a year and a half ago, and sometimes even failing to meet A10 performance, just isn't enough for a 7nm SoC.
sean8102 - Wednesday, January 16, 2019 - link
I'm guessing part of it is due to how much of a monopoly they are. When it comes to Android they dominate the market. A few, like Samsung and Huawei, make their own SoCs, but that's about it. MediaTek is the only other one I can think of, but it's rare, at least in the US, to see them in anything but mid-to-low-range phones, and even then it's still usually Qualcomm you'll get in that market range.
From what I've read/seen, one of the things that makes Apple's SoCs so fast is the large amount of cache they use: the A12 has 8 MB of L3 cache, while the 855 has 2 MB. Having so much cache is quite expensive and makes the die larger, which of course increases costs as well. Maybe Qualcomm just isn't getting much demand from OEMs for such an expensive chip. If they made it and no OEM wanted to pay the price for it, they would be stuck holding the bag.
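To put rough numbers on "expensive": a first-order sketch of the transistor budget, assuming a classic 6-transistor SRAM cell and ignoring tags, ECC, and redundancy (so these are floor estimates, not die-accurate figures):

```c
#include <stdio.h>

/* ~6 transistors per bit for a classic 6T SRAM cell; real caches
   spend extra on tag arrays, ECC, and redundancy. */
int main(void) {
    const long long BITS_PER_MB = 1048576LL * 8;    /* treating MB as MiB */
    const long long T_PER_BIT = 6;
    long long a12   = 8 * BITS_PER_MB * T_PER_BIT;  /* A12's 8 MB cache */
    long long sd855 = 2 * BITS_PER_MB * T_PER_BIT;  /* 855's 2 MB cache */
    printf("8 MB cache: ~%lld M transistors\n", a12 / 1000000);    /* ~402M */
    printf("2 MB cache: ~%lld M transistors\n", sd855 / 1000000);  /* ~100M */
    return 0;
}
```

Hundreds of millions of transistors spent purely on SRAM is a real die-area (and therefore cost) commitment, which fits the "expensive to do" point above.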
Raqia - Wednesday, January 16, 2019 - link
Apple doesn't have an integrated modem, so it has chosen to use that extra space for beefy, 7-wide, die-space-hungry CPUs and caches. Although they bench well and have world-leading efficiency, those CPUs draw too much current for the small batteries that come with an iPhone, leading to the controversial and unannounced throttling Apple decided to enact through iOS updates to preserve battery longevity. An on-die modem also saves consumers money in the end, via lower packaging costs and simpler PCB layouts.
WildBikerBill - Wednesday, January 16, 2019 - link
What I want to know is... now that we know the high end, what do the affordable mid-range products become?
B3an - Wednesday, January 16, 2019 - link
This SoC is VERY disappointing. Won't be upgrading this year to any phone that has this.
Hopefully Samsung's new SoC for 2019 is much, MUCH better than the utter joke they had in the Galaxy S9... That was a fucking disaster... I'm hoping it actually performs well in REAL WORLD cases this time, not just mostly meaningless benchmarks, but I doubt it. Samsung reminds me of when desktop PC GPU makers would cheat in benchmarks with driver hacks.
serendip - Wednesday, January 16, 2019 - link
It looks like Qualcomm is having an Intel-like moment where performance stops seeing meaningful increases year-on-year. For me, an SD835 device is more than fast enough, and an 855 only makes sense for higher-performing larger devices like Windows tablets.
Time to focus on the software then... my old SD650 device got a new lease on life with a clean build of Lineage on Android Pie. The same slowdown that hit the laptop/PC markets could cause a crash in the smartphone market as people keep their phones for 3-4 years instead of upgrading annually.
serendip - Wednesday, January 16, 2019 - link
I doubt it. Apple seems to be the only one with a cracking chip design team. Huawei seems to be doing decent work with HiSilicon, but ARM's own designs are still far behind.
Does it matter though? Apple's latest chips are so overpowered for their phones and tablets that the performance numbers are more for bragging rights than anything else. That could change once they start using A-series chips in their laptops.
yankeeDDL - Sunday, January 20, 2019 - link
Disclaimer: I have been using a Galaxy S8 (SD835) and I don't feel the need to upgrade anytime soon. However:
a) Having more power is always nice, and there are times when the phone... stutters, especially when switching away from "heavy" games.
b) The gap with the iPhones is embarrassing, and I don't see any technical justification. Yes, the iPhones are optimized all around (HW+SW) thanks to their closed ecosystem, but a nearly 2x gap on the GPU is a deliberate choice which should be, at the very least, justified.
We need competition in mobile CPUs/GPUs, or Qualcomm will quickly become the Intel of mobile, sitting on its a** until something better comes along.
cha0z_ - Monday, January 21, 2019 - link
Dunno why Andrei is so kind in his wording about the new Snapdragon. For a start, it's not that much more powerful than the Kirin 980 (while it was supposed to literally obliterate it in both CPU and GPU, especially the GPU). Also, even the A11 makes a joke of it, and the A12's sustained GPU performance is higher than the SD855's peak... and the CPU also goes to the A12, and even the A11.
I would kinda understand the SD855 trailing the A12 by 15% in CPU/GPU, and would even accept somewhat more, but losing to the A11 is roflmao funny. How in the world can you call it a success when Qualcomm (as the leader in that regard) can't catch up with Apple for years? Recently even the GPU started to lag behind BIG time; before, at least that was on par. Well... cool.
mfaisalkemal - Sunday, January 20, 2019 - link
Hi Andrei, what do you think about the 3 benchmarks on https://www.asteroidsbenchmarks.com ? All of them include Metal and Vulkan API versions, so we can compare iOS (Metal) and Android (Vulkan) devices.
The benchmarks include off-screen tests.
DontTreadOnMe - Thursday, January 24, 2019 - link
Does this A76 CPU support the pointer authentication codes that Apple has had since the iPhone X? It seems like a potentially very useful security feature that would be nice to see on Android devices.