22 Comments
jjj - Wednesday, June 28, 2017 - link
The first is 256Gb, they state that. The Toshiba press release provides some additional insight.
"This 96-layer BiCS FLASH™ will be manufactured at Yokkaichi Operations in Fab 5, the new Fab 2, and Fab 6, which will open in summer 2018. "
So they won't ship much in 2018 if the fabs open in the summer but they might ship a little if nothing goes wrong.
"a capacity increase of approximately 40% per unit chip size over the 64-layer stacking process"
50% more layers but only a 40% increase in capacity; it remains to be seen why, and whether it's about enabling QLC or 96 layers without string stacking.
You do need to slow down the string stacking rhetoric until you have at least some kind of hint that Tosh/WD are using it. The other week you made the claim that their 64L is using string stacking, and there is zero support for that claim. Today you assume that this product that is 1.5 years away uses it, but that's also baseless. They might use it, but we have no reason to assume one way or the other.
DanNeely - Wednesday, June 28, 2017 - link
I'm not reading that the way you are. I think the "will open in summer 2018" qualifier is only applied to Fab 6. Fabs 5 and 2's timelines are independent of that Fab 6 project.
jjj - Wednesday, June 28, 2017 - link
Conversion disrupts output; they are better off ramping the new capacity first and converting the other two locations once they get to yield.
DanNeely - Friday, June 30, 2017 - link
Worn out hardware and older processes disrupt the ability to make money with an old factory. Upgrading them periodically is a mandatory cost of business. And at the level of a large factory, the hardware has to be ordered years in advance. You can't decide to delay it for six months or a year if you misjudged the boom/bust cycle unless you've got a factory-sized warehouse to stash a factory's worth of new gear, or are willing to piss off your suppliers. The latter tends to have bad consequences the next time you go shopping to build or upgrade a factory.
Kristian Vättö - Wednesday, June 28, 2017 - link
"50% more layers, 40% increase in capacity, remains to be seen why"Only the memory array is increasing in density. Peripheral circuitry, IO logic, row decoders etc eat up 20-30% of the die area.
jjj - Wednesday, June 28, 2017 - link
Think about that again. You have an X mm2 die with Y array efficiency; you add 50% more layers, and if array efficiency remains flat, you gain 50% density. If array efficiency decreases a bit, it's still far from justifying the gap here.
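As a rough sanity check of the arithmetic in this exchange (only the 50% layer increase and the quoted ~40% capacity gain come from the press release; everything else is illustrative):

```python
# Back-of-the-envelope check: how much array efficiency would have to drop
# to turn a 50% layer increase into only a ~40% density gain.
layers_old, layers_new = 64, 96
quoted_gain = 1.40                     # "approximately 40%" from the press release

ideal_gain = layers_new / layers_old   # 1.5x if array efficiency stayed flat
implied_efficiency_ratio = quoted_gain / ideal_gain

print(f"Ideal gain: {ideal_gain:.2f}x, quoted gain: {quoted_gain:.2f}x")
print(f"Implied array-efficiency ratio (96L vs 64L): {implied_efficiency_ratio:.2f}")
# -> roughly 0.93, i.e. array efficiency would need to fall ~7% to explain the gap
```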
DanNeely - Wednesday, June 28, 2017 - link
AFAIK little or none of the control circuitry around the flash is stacked; as you add more flash cells to the stacks, you need more of all the per-cell parts, which eats into die area.
Kristian Vättö - Wednesday, June 28, 2017 - link
You got me there. This is what happens when posting right after work... Anyway, the real reason likely lies in the wordline connectors due to the staircase design. As more layers are added, the die area consumed by the connectors increases.
jjj - Thursday, June 29, 2017 - link
That's a fair point, but I'm not sure it's that meaningful.
The big questions here are QLC and 96 layers. 96 could be string stacked; I don't think it is, as going from 64L to 2x48L can be difficult from a cost perspective. 96L in a single stack is doable but going to be tricky, and they might need thinner films, which is not without issues. Micron will likely stay with string stacking; it makes sense for them after 2x32L.
Then there is endurance for QLC: the market for WORM is limited and costs are still not great, so they need to get QLC to good enough for consumer.
So I don't quite expect straight-up scaling, just layers added but all else roughly the same.
Anyway, it seems that they would get to over 7Gb/mm2 with a 1Tb QLC die - their 64L 512Gb TLC is at about 3.88Gb/mm2 - and costs below 5 cents per GB, hopefully well below.
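A hedged sketch of where that >7Gb/mm2 figure could come from, using the 3.88Gb/mm2 baseline quoted above and simple scaling assumptions (layer count and bits per cell); real array-efficiency losses would pull the number down somewhat:

```python
# Illustrative scaling from the 64L 512Gb TLC die to a hypothetical 96L QLC die.
baseline_density = 3.88          # Gb/mm^2, 64L 512Gb TLC (figure quoted above)
layer_scaling = 96 / 64          # 1.5x more layers
bit_scaling = 4 / 3              # QLC (4 bits/cell) vs TLC (3 bits/cell)

ideal_density = baseline_density * layer_scaling * bit_scaling
print(f"Ideal 96L QLC density: {ideal_density:.2f} Gb/mm^2")   # ~7.76

# Even with some array-efficiency loss, >7 Gb/mm^2 looks plausible;
# a 1Tb (1024Gb) die at 7 Gb/mm^2 would be roughly:
print(f"Approx. 1Tb die size: {1024 / 7.0:.0f} mm^2")          # ~146
```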
jjj - Thursday, June 29, 2017 - link
An image for Samsung's 32L that gives us an idea about the scale of the staircase: https://www.3dincites.com/wp-content/uploads/AndyF... Seems they only need about 20um.
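To put that 20um in perspective, a rough extrapolation (all numbers here are assumptions for illustration; real dies have multiple staircase regions, so the true overhead is higher):

```python
# Rough extrapolation of staircase width from the ~20 um quoted for a 32L part.
staircase_32l_um = 20.0
per_layer_um = staircase_32l_um / 32      # ~0.6 um of staircase per layer
staircase_96l_um = per_layer_um * 96      # ~60 um for a 96L stack

# Compared against a hypothetical ~10 mm array edge, one staircase strip is small:
array_edge_mm = 10.0
fraction = (staircase_96l_um / 1000) / array_edge_mm
print(f"Estimated 96L staircase width: {staircase_96l_um:.0f} um "
      f"({fraction:.1%} of a {array_edge_mm:.0f} mm array edge)")
```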
theeldest - Wednesday, June 28, 2017 - link
The stacks aren't skyscrapers, they're more like pyramids (the bottoms of pyramids, at least). So the top layers aren't as large as the bottom layers.
Kristian Vättö - Thursday, June 29, 2017 - link
The actual memory strings are like skyscrapers. You are right that the overall shape is a pyramid, but the memory array is a rectangle. There are no memory cells in the wordline connectors that make the pyramid shape. You can see this in the SEM photos by TechInsights, for example.
MrSpadge - Wednesday, June 28, 2017 - link
The image just above your post has a bar going through the middle of the BiCS4 and BiCS5 arrays.
limitedaccess - Wednesday, June 28, 2017 - link
Is buying things like USB drives and SD cards going to be a nightmare after QLC hits the market?
Looks like it. Unless you know exactly how you're going to use that flash and the risks associated with it, I definitely recommend staying the heck away....
cfenton - Wednesday, June 28, 2017 - link
The cheap stuff will probably all move to QLC, but I'm sure they'll keep making good stuff for prosumer and professional use.
MrCommunistGen - Wednesday, June 28, 2017 - link
I know that it is inevitable... but *ugh* QLC.
bill.rookard - Wednesday, June 28, 2017 - link
Agreed. And double ugh. TLC from a performance POV is only starting to get decent, and 1000 PE cycles is enough for good capacity. 150 write cycles? HEEEELLLLL no.
Santoval - Wednesday, June 28, 2017 - link
"100 – 150 P/E cycles"?!? Oh dear, this QLC will really be the ultimate gutter of NAND flash..shabby - Wednesday, June 28, 2017 - link
QLC is clearly for storage and that's it, gimme a 100tb ssd with 1 p/e cycle and I'll be happy. Just need to fill it up once and that's it.
MrSpadge - Wednesday, June 28, 2017 - link
Data retention time is usually 1 year for consumer flash. I recently had a USB stick in my car at somewhat elevated temperatures. The music was heavily corrupted after ~14 months. You'll definitely want the controller to read and refresh the data before you lose those few electrons comprising each bit, i.e. you'll need more write endurance than that. There are definitely applications where 100 - 150 cycles are perfectly fine, though. Personally I have never worn out any USB stick or flash card.
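A toy illustration of that refresh point, in response to the 1 P/E cycle idea above (all numbers are hypothetical):

```python
# Even if data is written once and only read, periodic refresh to preserve
# retention consumes P/E cycles over the drive's lifetime.
retention_years = 1.0        # typical consumer retention spec
drive_life_years = 5.0       # assumed service life
refresh_margin = 2.0         # refresh twice as often as the spec, for safety

refresh_cycles = drive_life_years / retention_years * refresh_margin
print(f"P/E cycles consumed by refresh alone over {drive_life_years:.0f} years: "
      f"{refresh_cycles:.0f}")
# -> ~10 cycles, so a 1 P/E-cycle drive wouldn't survive, but 100-150 leaves headroom
```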
Alexvrb - Wednesday, June 28, 2017 - link
They would not be good for a primary/sole drive, but I see them as potential replacements for mechanical mass storage drives. Something like NVMe for primary, QLC for storage. I would still like to see those figures come up before I use one. But even with low P/E ratings, as secondary storage they would still very likely be more reliable than typical consumer-grade mechanical drives. They also won't make any noise during use. This is all down the road anyway. We'll see.