SK hynix is one of the major DRAM and NAND flash memory manufacturers, but for years they have had little to no presence in the retail consumer SSD market, focusing primarily on OEM sales. That started to change last year with the launch of their Gold S31 SATA SSDs, but that wasn't really enough to establish SK hynix as an important brand for consumer SSDs. At CES in January, they previewed the NVMe models that would follow: the Gold P31 and Platinum P31.

When SK hynix PR reached out earlier this month to offer a review sample of the Gold P31 at short notice in advance of last week's launch, I was expecting a fairly straightforward review. Rushed review embargo periods are often a sign that the vendor doesn't want you to spend too much time digging into the details of the product, so I didn't expect the Gold P31 to stand out in any way from the crowd of other PCIe Gen3 NVMe SSDs already on the market. I was expecting that the most interesting facet of the Gold P31 would be the fact that SK hynix actually managed to be first to market with 128L NAND, after years of trying to run ahead of the competition in terms of 3D NAND layer count but with little success in following through in a timely fashion.

The 128L 3D NAND is an important accomplishment, but it turns out to have big consequences for more than just cost. The Gold P31 turned in the most surprising set of benchmark results I've seen, with power efficiency scores that seemed almost too good to be true (more on this later). SK hynix won't share quite as much technical information about these drives as we'd like to have, but they've shared enough to confirm that the Gold P31 does not fit the mold for a typical high-end consumer NVMe SSD.

The Gold P31 offers performance that is competitive with most other high-end consumer SSDs, but it achieves that performance with a very different strategy. For years, the standard formula for this product segment has been TLC NAND with SLC caching, a controller with four lanes of PCIe 3.0 (with the segment now starting to move toward PCIe 4.0), DRAM in a ratio of 1 GB per 1 TB of flash, and an eight-channel interface between the SSD controller and the NAND flash itself. It's that last piece that the Gold P31 changes, by using just four channels.

NAND Interface Speeds Matter Again

The SK hynix Gold P31 hits high-end performance targets with half the channels by running those channels at a much higher IO speed than we're used to. NAND interface speed hasn't been an important factor in recent years because an 8-channel SSD can saturate a PCIe 3.0 x4 host connection with fairly pedestrian interface speeds like 533 MT/s. So even though NAND manufacturers and SSD controllers have been moving toward higher interface speeds, it has hardly mattered, and has been far less interesting to talk about than improvements in latency, power, and density.

For the 64-layer 3D NAND generation, NAND interface speeds were typically in the 533-667 MT/s range. The extremely popular Phison E12 controller supports up to 667 MT/s, but SSDs using Toshiba (now Kioxia) 64L BiCS3 TLC at 533 MT/s had no trouble offering good performance—usually only marginally slower than drives based on Silicon Motion's SM2262(EN), which supports up to 800 MT/s. Now that the NAND market has moved to 96 layers and beyond, most NAND manufacturers can support speeds of 1.2 GT/s (1200 MT/s) or higher—but those speeds aren't a given. Just like DRAM, NAND chips come in different speed grades, and not all SSD controllers can handle the fastest ones yet. SK hynix designs their own SSD controllers, and they updated their controller to match the capabilities of their 128L NAND, running the interface at 1.2 GT/s. That provides plenty of headroom for a 4-channel SSD to saturate a PCIe 3.0 x4 link.
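
To put rough numbers on that (a quick back-of-the-envelope sketch, not vendor figures): NAND channels are 8 bits wide, so a channel's raw bandwidth in MB/s is roughly equal to its transfer rate in MT/s, and the aggregate across channels only needs to clear the roughly 3.5 GB/s that a PCIe 3.0 x4 NVMe link delivers in practice after protocol overhead. Real drives won't reach 100% of the raw interface rate, so treat these as ceilings:

```python
# Back-of-the-envelope NAND interface bandwidth, to show why 4 channels at
# 1200 MT/s is enough. Each NAND channel is 8 bits wide, so raw channel
# bandwidth in MB/s is numerically equal to its transfer rate in MT/s.
# These are interface ceilings; real drives lose some of it to ECC and
# command/protocol overhead.
PCIE3_X4_PRACTICAL_GBPS = 3.5  # usable NVMe throughput on PCIe 3.0 x4

configs = [
    ("8 ch @ 533 MT/s (typical 64L-era high-end)", 8, 533),
    ("4 ch @ 533 MT/s (typical entry-level)",      4, 533),
    ("4 ch @ 1200 MT/s (Gold P31)",                4, 1200),
]

for name, channels, mt_per_s in configs:
    raw_gbps = channels * mt_per_s / 1000
    verdict = ("can saturate" if raw_gbps >= PCIE3_X4_PRACTICAL_GBPS
               else "falls short of")
    print(f"{name}: {raw_gbps:.1f} GB/s raw -> {verdict} PCIe 3.0 x4")
```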

(Side note: The Sony PS5 will use a 12-channel controller, which means it should be able to hit its performance targets with a 533 MT/s NAND interface speed. The Xbox Series X likely uses a 4-channel controller, so it will probably need a faster NAND interface speed despite the SSD as a whole offering less than half the throughput.)

We're used to seeing 4-channel NVMe SSDs as entry-level products. Those have mostly used slower NAND interface speeds and offered peak throughput in the ballpark of just 2-2.5 GB/s, with the lower channel count serving primarily as a cost-cutting measure (often paired with the elimination of the DRAM buffer to cut costs further). The SK hynix controller used in the Gold P31 may be cheaper and smaller than the typical 8-channel PCIe Gen3 NVMe controller, but it's designed to compete directly against them. SK hynix won't say whether the controller is built on the 28nm process that's used for most PCIe Gen3 NVMe controllers or whether they've moved to a smaller FinFET-based node as we're seeing for almost all PCIe Gen4 controller designs. Either way, their controller starts off with a significant power advantage over larger 8-channel controllers.

SK hynix Gold P31 SSD Specifications
Capacity                          500 GB                   1 TB
Form Factor                       M.2 2280 single-sided
Interface                         PCIe 3.0 x4 NVMe
Controller                        SK hynix in-house
DRAM                              SK hynix LPDDR4-4266
NAND Flash                        SK hynix 128L 3D TLC
Sequential Read (128kB)           3500 MB/s
Sequential Write (128kB)    SLC   3100 MB/s                3200 MB/s
                            TLC   950 MB/s                 1700 MB/s
Random Read (4kB)           SLC   570k IOPS
                            TLC   500k IOPS
Random Write (4kB)          SLC   600k IOPS
                            TLC   220k IOPS                370k IOPS
Power                       Active     6.3 W
                            Idle       < 50 mW
                            L1.2 Idle  < 5 mW
Warranty                          5 years
Write Endurance                   500 TB (0.5 DWPD)        750 TB (0.4 DWPD)
MSRP                              $74.99 (15¢/GB)          $134.99 (13¢/GB)

The SK hynix Gold P31 product line consists of just two capacities: 500 GB and 1 TB, with only the latter sampled for this review. The Platinum P31 that was announced at CES will be a 2 TB model, and we expect it to be otherwise identical to the Gold P31, but no further information about the Platinum P31 or its release date is available at this time. SK hynix provides fairly detailed performance specs for the Gold P31: we don't get the low queue depth ratings that matter most, but the high queue depth specs are broken down to show both SLC cache and post-cache TLC performance. The 500 GB model is rated for much lower TLC write speeds than the 1 TB model, but otherwise their performance ratings are similar and typical for a high-end PCIe Gen3 drive.

The Gold P31 comes with better than average write endurance ratings: 500 TBW for the 500 GB model and 750 TBW for the 1TB model, putting them both above the standard 0.3 DWPD. Introductory pricing is fairly competitive at $75 and $135.
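
As a sanity check on those ratings, here is a quick sketch of the arithmetic behind the DWPD (drive writes per day over the five-year warranty) and price-per-gigabyte figures in the table above:

```python
# A quick sanity check (just arithmetic, not vendor math) on the endurance and
# pricing figures quoted in the spec table above.
WARRANTY_YEARS = 5

drives = {
    #                 capacity (TB), rated TBW, MSRP (USD)
    "Gold P31 500GB": (0.5,          500,       74.99),
    "Gold P31 1TB":   (1.0,          750,       134.99),
}

for name, (cap_tb, tbw, msrp) in drives.items():
    # DWPD = rated total writes / (one full-capacity write per day, every day
    # of the warranty period)
    dwpd = tbw / (cap_tb * 365 * WARRANTY_YEARS)
    cents_per_gb = msrp / (cap_tb * 1000) * 100
    print(f"{name}: {dwpd:.2f} DWPD, {cents_per_gb:.1f} cents/GB")

# Prints roughly 0.55 DWPD / 15.0 cents/GB for the 500GB model and
# 0.41 DWPD / 13.5 cents/GB for the 1TB model, matching the (rounded)
# figures in the spec table.
```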

SK hynix won't confirm details about the NAND other than the layer count and the 1.2 GT/s IO speed, but based on their previous announcements about their 128L 3D NAND we believe these are 1Tbit dies, each divided into four planes so they can offer parallelism comparable to 512Gbit dies that are only divided into two planes. (UPDATE: TechInsights is analyzing this flash, and it appears to use 512Gbit dies.) With their short-lived 96L generation, SK hynix adopted what they call PuC: periphery under cell. This is basically the same idea as the Intel/Micron "CMOS under the Array" 3D NAND design, which moves most of the peripheral logic circuitry under the memory cell array rather than alongside it. This design has allowed Intel/Micron 3D NAND to achieve the smallest die sizes for a given layer count and die capacity. Now that SK hynix has adopted it and delivered a leading layer count, they should have one of the most cost-effective TLC parts on the market (provided they're getting good yields). The rest of the major NAND flash manufacturers have a similar technique on their roadmaps.

The Competition

For this review, we are primarily comparing the SK hynix Gold P31 against other high-end TLC-based NVMe drives. Some of these drives use 64L NAND rather than 96L NAND, either because we have been unable to get samples of the newer models or because we haven't gotten around to testing them. The high-end TLC drives included in this review are:

  • Samsung 970 EVO Plus: 92L TLC, same controller as original 970s
  • Kingston KC2000: SM2262EN + Toshiba/Kioxia 96L TLC, now replaced by the KC2500 with higher clock speeds
  • ADATA XPG SX8200 Pro: SM2262EN + Micron 64L TLC
  • Seagate FireCuda 510: Phison E12 + 64L TLC (newer versions in the supply chain now use 96L), and FireCuda 520: Phison E16 + 96L TLC, but restricted to PCIe Gen3 on this testbed
  • WD Black SN750: High-end NVMe drive with 64L TLC. Until now, our benchmark for the most efficient high-end NVMe drive
  • Toshiba (now Kioxia) XG6: The first 96L consumer SSD (OEM only)

We have also included results from a few entry-level NVMe drives:

  • Toshiba (now Kioxia) BG4: Low-power DRAMless NVMe, 96L TLC
  • Intel 660p - 4-channel SM2263 controller with DRAM, 64L QLC (now replaced by 665p with 96L QLC)

The SK hynix Gold S31 and Samsung 860 EVO SATA drives are also included for context.

AnandTech 2018 Consumer SSD Testbed
CPU            Intel Xeon E3 1240 v5
Motherboard    ASRock Fatal1ty E3V5 Performance Gaming/OC
Chipset        Intel C232
Memory         4x 8GB G.SKILL Ripjaws DDR4-2400 CL15
Graphics       AMD Radeon HD 5450, 1920x1200@60Hz
Software       Windows 10 x64, version 1709
               Linux kernel version 4.14, fio version 3.6
               Spectre/Meltdown microcode and OS patches current as of May 2018

Comments

  • vladx - Thursday, August 27, 2020

    I have a SX8200 Pro on my laptop, do I need to enable the laptop Power Management state or is it detected automatically by the firmware?
  • Billy Tallis - Thursday, August 27, 2020

    That really depends on what combination of firmware and driver bugs the laptop vendor gave you. But in theory, if the machine originally came with a M.2 NVMe drive, it should have been configured for proper power management and should continue to work well with an aftermarket SSD that doesn't bring any new power management bugs. I think the SX8200 Pro is okay on that score; the slow wake-up times shouldn't prevent the system from trying to use the deep idle states because the drive still promises the OS that it will have reasonable wake-up times.
  • vladx - Thursday, August 27, 2020

    My laptop is a MSI Creator 17 that came with a Samsung PM981 drive. Could HWinfo offer any help in identifying the active power states?
  • Billy Tallis - Thursday, August 27, 2020

    I'm not sure. I think you can figure out what PCIe power management settings are being used by digging through the PCI configuration space, but I'm not sure how easy it is to get that info while running Windows. As for the NVMe power management settings, my understanding is that it's impossible or very nearly impossible to access that information under Windows, at least with the usual NVMe drivers. The only reliable way I know of to confirm that everything is working correctly to get your SSD idling below 10mW is to have expensive power measurement equipment.
  • vladx - Thursday, August 27, 2020

    Ok thanks, Billy. I was going to install Fedora anyways as secondary OS so I guess I'll try the Linux route then.
  • MrCommunistGen - Thursday, August 27, 2020

    vladx, I'm really interested in how you go about trying to tease the NVMe power management info out of the drive. I did some internet searches a while back and didn't find anything definitive that I was able to follow and get results from. I've only ever used Debian-based distros, but if you're able to figure it out in Fedora then at least I'll know it is possible.
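
For readers wondering the same thing, here is a minimal sketch (not from the article or the commenters) of how a drive's declared power states and APST configuration can be read out on Linux with nvme-cli. It assumes nvme-cli is installed, the script runs as root, and /dev/nvme0 is the drive of interest. As discussed above, this only shows what the drive and driver have configured; confirming that the drive actually reaches its deepest idle states still requires power measurement hardware.

```python
#!/usr/bin/env python3
# Hypothetical sketch: dump an NVMe drive's declared power states and APST
# configuration on Linux using nvme-cli via subprocess.
import subprocess

DEV = "/dev/nvme0"  # example device node; adjust for your system

def run(cmd):
    # Run a command and return its stdout, or an error note if it fails.
    try:
        return subprocess.run(cmd, capture_output=True, text=True,
                              check=True).stdout
    except (OSError, subprocess.CalledProcessError) as err:
        return f"[command failed: {err}]"

# 1. The power state table (max power, entry/exit latency for ps0..psN) is
#    part of the Identify Controller data; print just those summary lines.
id_ctrl = run(["nvme", "id-ctrl", DEV])
print("\n".join(line for line in id_ctrl.splitlines()
                if line.startswith("ps ")))

# 2. Feature 0x0c is Autonomous Power State Transition: shows whether APST is
#    enabled and which idle timeouts map to which non-operational states.
print(run(["nvme", "get-feature", DEV, "-f", "0x0c", "-H"]))

# 3. The Linux driver won't autonomously use states whose total transition
#    latency exceeds this limit.
try:
    with open("/sys/module/nvme_core/parameters/default_ps_max_latency_us") as f:
        print("default_ps_max_latency_us:", f.read().strip())
except FileNotFoundError:
    print("nvme_core module parameter not available")
```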
  • Foeketijn - Thursday, August 27, 2020

    Did it happen? Did Samsung finally get an actual competitor? It doesn't really beat the 970 evo that much, so the 970 pro would still be better, but not at this price point, and definitely not with this power usage.
    Last time intel did that, Samsung suddenly woke up and beat them down again to a place where they stayed since.
    Interesting to see what the new evo and pro line will bring.
    Not high margin prices this time around I guess.
  • LarsBolender - Thursday, August 27, 2020

    This has to be one of the most positive AnandTech articles I have read in years. Good job SK Hynix!
  • Luminar - Thursday, August 27, 2020

    No recommendation sticker, though.
  • Zan Lynx - Thursday, August 27, 2020

    It would be handy if you could add a power loss consistency test. I have a Dell with an older hynix NVMe and one time the battery ran down in the bag, and on reboot its btrfs was corrupt.

    Imagine these are sequence numbers in metadata blocks.
    Correct: 10 12 22 30
    Actual: 10 12 11 30

    The hynix had committed writes for SOME of the blocks but a few in the middle of the update chain were old versions of the data. According to btrfs flush rules that is un-possible. Which means that the drive reported a successful write for 22 and for 30 but after powerloss recovery it lost that write for 22 and reverted to an older block.

    I mean, that's better than some of the older flash drives that would trash the entire FTL and lose all the data. But it is not exactly GOOD.

    I'm pretty sure Samsung consumer drives will also lose the data but at least they will revert all of the writes following the lost data, so in my example it would revert write 30 also. That would at least leave things in a logically consistent state.
