Friday, May 15, 2020

5 Takeaways from HDD and SSD Market Reports




We were reviewing some reports on the SSD and HDD markets in Q1 2020 and their impact on the memory and storage businesses. Mark Webb, MKW Ventures Consulting, rev 0







Note: in my terminology, "enterprise" includes the datacenter, cloud, hyperscale, and classic enterprise definitions. It does not include PC or CE.

Takeaways:

SSD growth in enterprise: Enterprise SSD capacity shipped grew ~125% YoY in Q1. While NAND prices stabilized in late 2019 and then increased in Q1, enterprise SSD revenue was only now approaching mid-2018 numbers. Capacity growth was accelerated by the earlier price crash, and the end result is revenue back to 2018 levels. Since NAND prices increased recently, we would expect growth to be affected.

SSD growth in client: Client SSD capacity shipped increased 75% YoY in Q1 2020. Client SSD revenue is at an all-time high as bits have nearly tripled since the NAND price peak in mid-2018. This market is still the most competitive, with very low margins, but it also drives adoption of the newest NAND technologies with more layers and QLC. Client SSDs passed client HDDs in units sold back in Q1 2019, and the gap continues to grow as client SSDs now outsell HDDs 2:1 in units. Because client SSDs are smaller and HDDs are larger, HDD capacity shipped outsells SSD capacity 2:1, which implies the average client HDD ships roughly four times the capacity of the average client SSD. Interesting symmetry!

Overall bit growth: This places NAND bit growth in SSDs just short of 2x per year (a rough blend of the two growth rates is sketched below). SSDs are where most NAND bits are shipped, and they are by far the fastest-growing segment. Mobile is struggling in 2020, so NAND will require continued SSD growth to absorb capacity and prevent inventory builds.
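As a sanity check, here is a minimal sketch of how the blended growth rate lands just short of 2x. The enterprise/client bit-share split is an illustrative assumption, not a number from the reports.

```python
# Blend the two YoY SSD bit-growth rates reported above (illustrative only).
enterprise_growth = 2.25   # +125% YoY enterprise SSD capacity
client_growth = 1.75       # +75% YoY client SSD capacity
enterprise_share = 0.40    # hypothetical share of SSD bits shipped
client_share = 0.60        # hypothetical share of SSD bits shipped

blended = enterprise_share * enterprise_growth + client_share * client_growth
print(f"Blended SSD bit growth: ~{blended:.2f}x YoY")  # ~1.95x, just short of 2x
```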

Market share leaders: Samsung leads in both enterprise and client in capacity shipped, and although competitors are working hard to gain share, especially in enterprise, Samsung holds on. Intel is second in enterprise; WD is second in client. This just in: Samsung is kinda dominant in memories.

HDD vs SSD numbers: SSDs are growing at a faster percentage rate than HDDs, and people expect SSDs to someday replace HDDs. That may eventually happen, but we are still a long way from it. More bits are added to HDDs than to SSDs every year by a wide margin: 83% of all bits shipped are HDD, and in enterprise, 90% of bits shipped are HDD. Reasons:
a.      SSDs are faster, but HDDs are still much cheaper per bit in both client and enterprise.
b.      Client and nearline workloads are not as performance obsessed.
c.      HDD costs are dropping faster than we predicted, and innovation continues.
d.      SSDs dominate the performance applications in the datacenter, but those applications are not the dominant consumers of bits.
e.      There are many more reasons and detailed data behind this.

Bonus: Mission-critical 10K and 15K RPM HDDs, a market I said should disappear back in 2015 since there is no logical reason for them, still sell with only minor capacity erosion over time. Reason: once something is established, enterprise is very slow to move away from it. Remember that.


Mark Webb
www.mkwventures.com


Tuesday, October 29, 2019

A Quick Summary of the Micron X100 3D XPoint SSD


A quick summary of the Micron X100 3D XPoint SSD: THE fastest SSD in the world!












Reminder: Micron will use the term 3D XPoint. Intel, which owns the trademark on the term, will not use it; Intel uses the term Optane Memory Media.

Micron announced the world's fastest SSD, the X100: 2.5M IOPS, 9 GB/s, and 8 µs latency. It is based on second-generation 3D XPoint. We showed the cost and density for this chip early in the year and again at FMS 2019, and yes, we were correct in our predictions for what the chip would look like.

I bought two of them on Amazon last night… JUST KIDDING! The product is not available to anyone yet and is supposed to sample to some select customers by the end of the year. It is apparently becoming more common to announce products that have not sampled with anyone yet. Standard qualification processes and history would indicate that the product is a year from volume sales (more on this later).

FACTS
  • The product is a full-height card.
  • The product is PCIe Gen3 x16. With x16 you get some serious bandwidth (see the quick math after this list).
  • If you build an x16 SSD with 3D XPoint and optimize it for speed, it will be the fastest SSD.
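Two quick back-of-the-envelope checks on the headline specs, assuming only the announced 2.5M IOPS, 9 GB/s, and 8 µs figures; the per-lane PCIe rate is the standard Gen3 number, not anything Micron disclosed.

```python
# Little's law: outstanding I/Os needed to sustain the headline IOPS at 8 us latency.
iops = 2.5e6
latency_s = 8e-6
outstanding_ios = iops * latency_s
print(f"Queue depth needed: ~{outstanding_ios:.0f}")  # ~20 outstanding I/Os

# PCIe Gen3 x16 ceiling: 16 lanes at ~0.985 GB/s each after 128b/130b encoding.
pcie3_x16_gbs = 16 * 0.985
print(f"PCIe Gen3 x16 ceiling: ~{pcie3_x16_gbs:.1f} GB/s vs the 9 GB/s spec")  # ~15.8 GB/s
```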

Some speculation from MKW Ventures (I will confirm when I get data):
  • Density wasn't announced, but someone mentioned 750 GB at the meeting.
  • The product is probably heavily overprovisioned and costs thousands of dollars per SSD. It is aimed only at the fastest market. If it were available, pricing would probably be in the $10/GB region.
  • The product is probably FPGA-based. The x16 interface, the FPGA, and the overall form factor indicate that this is a test vehicle and will not ramp in volume (i.e., hundreds of units sold total). Micron did this before with its super-fast SLC NAND SSD with an IDT controller (the P320h); it was very, very fast and sold in very small numbers. This is not a bad thing: it allows Micron to learn, customers to try it out, and Micron to develop a product that will ramp. Plus, you can do some fun virtual memory work with low-latency SSDs like this.
  • This product and its follow-on products will compete in the low-latency SSD market with Intel, Samsung, and Toshiba. It is a real market; the volumes are just very low at this time. They will follow the same ramp that NVMe SSDs have had over the last six years.
  • We don't expect to see 3D XPoint DIMMs from Micron anytime soon. These require a custom memory controller or an NVDIMM-P, CXL, or Gen-Z style bus, and none of those are ready yet.

Summary: 
  • Micron has a 3D XPoint SSD that it will sample this year. It is based on second-generation, four-layer 3D XPoint.
  • It is arguably the fastest SSD available (when it ships).
  • It is confirmation that Micron plans to play in the 3D XPoint market.
Mark Webb
www.mkwventures.com




Wednesday, March 27, 2019

Memory and Storage Speeds on Optane DIMMS



At Flash Memory Summit, I showed why 3D XPoint was advertised as 1000x faster but ended up only about 7x faster in practice. It comes down to the fact that device-level IOPS, latency, or sequential read speed may or may not be the limiting factor, so the actual application-level impact is always less than the headline number. The best example is the HDD-to-SSD comparison from the many benchmarking sites: SSDs are technically 500x faster than HDDs, but in most applications your apps run less than 2x faster.
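Here is a minimal sketch of that reasoning using a simple Amdahl's-law model; the storage-time fractions below are illustrative assumptions, not measured values.

```python
def app_speedup(storage_fraction, device_speedup):
    """Application speedup when only the storage portion of runtime gets faster (Amdahl's law)."""
    return 1.0 / ((1.0 - storage_fraction) + storage_fraction / device_speedup)

# HDD -> SSD: the device is ~500x faster, but if only ~50% of runtime is storage-bound...
print(f"{app_speedup(0.5, 500):.2f}x")   # ~2.0x, matching the '<2x faster' observation

# A ~1000x media-level claim shrinks to single digits even for a heavily storage-bound app.
print(f"{app_speedup(0.9, 1000):.1f}x")  # ~9.9x
```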




Next week, Intel will publicly update us with info on Cascade Lake and Optane DIMMs. The challenge mentioned above will be a bit of a double-edged sword for Optane DIMMs.

  1. Optane DIMMs used as persistent memory will be 50 times faster than NVMe SSDs based on latency and bus speeds. This sounds great, but we will find that most applications do not see that level of benefit. The UCSD team did a great study on this. Whether it is worth the price and the time to optimize is up to you and your applications.
  2. On the other hand, Optane DIMMs used as main memory with a DRAM cache ("memory mode") will provide tons of main memory that is theoretically 7-10x slower than DRAM RDIMMs based on latency. The actual speed difference will be less than that, and if the data is cache friendly, maybe 90% of applications will see no measurable difference in actual use (a rough model is sketched after this list). Lots of memory, lower cost than DRAM, and the same measurable performance in most cases. Whether it is worth the price and the time to optimize is up to you and your applications.
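A minimal sketch of the cache-hit arithmetic behind point 2; the DRAM and Optane latencies and the hit rates below are illustrative assumptions, not Intel specifications.

```python
def effective_latency_ns(hit_rate, dram_ns=80.0, optane_ns=350.0):
    """Average load latency with a DRAM cache in front of Optane (illustrative numbers)."""
    return hit_rate * dram_ns + (1.0 - hit_rate) * optane_ns

for hit_rate in (0.99, 0.95, 0.80):
    slowdown = effective_latency_ns(hit_rate) / 80.0
    print(f"hit rate {hit_rate:.0%}: ~{slowdown:.2f}x DRAM latency")
# Cache-friendly data (99% hits) lands within a few percent of DRAM latency;
# even 80% hits is ~1.7x, far from the 7-10x raw-media gap.
```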


So we will find that in actual use, Optane DIMMs are not 50x faster than SSDs, but they are not 10x slower than DRAM either. We will see what performance actual customers get in the months to come.

Wednesday, March 20, 2019

Cost and Performance of Optane DIMM Options in Use Today




Now that we have information on the performance of Optane DIMMs and data on recommended configurations from Intel, we are updating a blog from last year on cost and performance.











The reference configuration is 192 GB of DRAM with a 1 TB NVMe SSD.

With Optane, Intel is proposing 128 GB of DRAM with 1.5 TB of Optane operating in memory mode (volatile, not persistent). The purpose is to add lots of cheaper, slower main memory; persistence does not matter in this mode.

Cost is about the same as today's configuration. What can we expect from this configuration? In theory…
·         Read latency is 5-7x higher for Optane, but bus speed (MT/s) is only about 3x lower. This is the same latency-vs-speed effect we saw in the DDR2/3/4 transitions.
·         Data sets larger than 192 GB would run much faster with Optane DIMMs, since no swapping to the SSD is required.
·         Data sets between 128 and 192 GB would run slower, because some of the DRAM has been replaced with Optane; those workloads now run with part of memory at roughly 1/3 the speed.
·         The trade-off is simple, since the cost is the same: do you have large data sets taking up more than 192 GB of memory, and are you OK with roughly 1/3 the performance when accessing Optane vs. DRAM? (A toy model of this decision follows the list.)
·         Reminder: in memory mode, the Optane is configured by the controller as volatile memory.
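Here is a toy model of that trade-off, assuming memory-bound work, uniform access across the data set, a 3x slowdown for Optane-resident data, and a large, purely illustrative penalty for SSD swapping in the reference configuration.

```python
def relative_runtime(dataset_gb, dram_gb, optane_gb=0.0, optane_slowdown=3.0, swap_penalty=100.0):
    """Toy estimate of runtime relative to all-in-DRAM, assuming uniform access over the data set."""
    in_dram = min(dataset_gb, dram_gb)
    in_optane = min(max(dataset_gb - dram_gb, 0.0), optane_gb)
    swapped = max(dataset_gb - dram_gb - optane_gb, 0.0)
    return (in_dram + in_optane * optane_slowdown + swapped * swap_penalty) / dataset_gb

for size_gb in (100, 160, 400):
    ref = relative_runtime(size_gb, dram_gb=192)                  # 192 GB DRAM + NVMe swap
    opt = relative_runtime(size_gb, dram_gb=128, optane_gb=1536)  # 128 GB DRAM + 1.5 TB Optane
    print(f"{size_gb} GB data set: reference {ref:.1f}x, Optane config {opt:.1f}x")
# 160 GB: the Optane configuration is modestly slower; 400 GB: the reference
# configuration has to swap to the SSD and loses badly.
```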

A second configuration is App Direct mode. In this mode we have true persistent memory: you can use it with load/store instructions like memory, or as block-level storage like an SSD. An example is replacing the NVMe SSD with persistent memory; in a simplistic sense, it is like an SSD on the DRAM bus. Example:

  • Replace a 512 GB NVMe accelerator SSD with 512 GB of Optane persistent memory in App Direct mode.
  • Cost goes from ~$400 for the NVMe SSD to ~$1800 for the Optane DIMMs.
  • When accessing the Optane DIMMs, speed is 10-50x higher depending on your metric and application.
  • It is persistent, so data is not lost on power cycles.
  • Much more expensive, but as an accelerator, much faster.
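The dollars-per-GB arithmetic behind this example, using the round numbers quoted above:

```python
# Cost-per-GB comparison for the 512 GB accelerator example above.
nvme_cost, optane_cost, capacity_gb = 400, 1800, 512
print(f"NVMe accelerator:  ${nvme_cost / capacity_gb:.2f}/GB")    # ~$0.78/GB
print(f"Optane App Direct: ${optane_cost / capacity_gb:.2f}/GB")  # ~$3.52/GB
print(f"Price premium: ~{optane_cost / nvme_cost:.1f}x for 10-50x lower latency")
```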

If you are an IT expert, a great summary of the performance improvements seen from Optane DIMMs is provided by the UCSD NVM team: https://arxiv.org/abs/1903.05714

We have a matrix of cost/performance options, including NVDIMMs available from other suppliers (Samsung, Netlist, Micron, Viking, Smart Modular, etc.). Call for more info.



Mark Webb



Monday, January 28, 2019

Five Thoughts from 2019 SNIA Persistent Memory Summit


Last week I attended the Persistent Memory Summit in Santa Clara. This is a great one-day conference held each year, bringing together experts on persistent memory devices, system support, and applications. The presentations are posted, and there is video as well (thank you, SNIA!).








5 thoughts:
  1. Now that persistent memory has moved from a "wouldn't it be great if we had this?" concept to a "we have some options, now what?" debate, we need to define "persistent memory" based on the new reality. Rob Peglar and Stephen Bates reminded us that using the term SCM is not politically correct and can only be used in a safe space miles away from a SNIA conference (Starbucks Milpitas worked for me). This is good, since the term was way too vague and theoretical. Andy Rudoff offered a simple definition: it needs to be addressed with loads and stores like memory (not blocks and pages), and it needs to be persistent. Speed is in the eye of the beholder, but a year ago there was a definition of <2 µs latency in applications, which I liked. The NVDIMM-N and NVDIMM-P definitions would indicate that it does not need to be one type of memory but can be a DIMM or a system. These simple definitions would seem to eliminate some products that are often referred to as "persistent memory" (a side discussion).
  2. The most common persistent memory today is arguably NVDIMM-N, which provides up to 32 GB DIMMs that can be written like DRAM but never lose data. The challenge is that using DRAM for the entire capacity, plus NAND, plus energy support, leads to a high cost: 3x or more per bit compared to DRAM. As a result, a small number of systems (typically SANs) use them today. Multiple providers were at the conference, and you can buy this persistent memory whenever you wish.
  3. Frank Hady presented Intel Optane Persistent Memory and its applications. There are two modes: one is persistent memory (App Direct) and one is Memory Mode (which loses data on power cycle). Memory Mode is great for adding tons of memory that is somewhat slower and cheaper, but it is not persistent per Intel documentation. This is poised to grow rapidly with Intel's backing, but it is off to a slow start. From talking to customers, most say they still can't get Optane PM to build their own systems, and the availability today is limited to running apps on cloud systems. I have details on modes and projected revenue in other publications.
  4. NVDIMM-P is proposed as an open-standard approach similar to Optane PM, where the architecture supports some DRAM plus NAND or another memory type to optimize for cost. This will allow DIMMs that are LESS expensive than DRAM, higher density, and more non-proprietary options. We need this ASAP! When can I get one???
  5. From the conference, it feels like infrastructure support and application drivers are ahead of the actual hardware. This is probably not totally true, but there is a drive from Intel and SNIA to get all the support in place, so the OS supports it and applications exist. Once Intel ships significant volume and competitors start shipping their versions of PM, we can test out all the applications.

See more info on our blogs or website. Thanks to Chris Mellor of The Register fame for republishing some of my FMS work on persistent memory and Optane with all the gory details and numbers.

Mark Webb
www.mkwventures.com


Tuesday, January 22, 2019

Relative Cost and Price for Optane and other Memory

Jan 2019 UPDATED: What are the relative costs for the memory types in new SSDs and DIMMs? We have the estimates below!

This was shown about a year ago and is still a good representation of what is going on. All of the costs are lower now (we publish a monthly report with details), but the summary is still true.


  • 3D XPoint is lower cost than DRAM today and is selling for half the price of DRAM or less in DIMMs and SSDs (volumes are low, and yes, Intel loses money on this).
  • 3D XPoint will decrease in cost with ramp and maturity. It is not fully ramped yet because the demand is not there yet. Second-generation 3D XPoint will also be about 30% cheaper.
  • Optane DIMMs are now known to have a controller and DRAM on each DIMM, so the DIMM has some additional cost on top of the memory itself. They are also overprovisioned more than DRAM DIMMs.
  • Fast NAND (low-latency NAND) is still much cheaper and is useful in fast SSDs and NVDIMM-P applications. We expect low-latency NAND from Samsung, Toshiba, WDC, and Hynix in 2019 (YMTC someday, and yes, we have data on when someday is).






                                                                                               
Actual values, assumptions, prices, and how these will change over the next two years are available as well.

We will be at the Persistent Memory Summit if you want to discuss details further. Text us to chat.


#flashmem
#optane


Mark Webb
www.mkwventures.com

Wednesday, January 16, 2019

Intel Optane H10 Vs Samsung 970EVO

Intel announced an Optane H10 Hybrid SSD at CES last week. How might it compare to Samsung 970EVO on cost and performance?










Intel announced an Optane H10 M.2 SSD, which will be in laptops sometime in Q2 or Q3. It combines Intel Optane Memory (the 16 GB or 32 GB cache previously sold to accelerate HDDs) and an Intel QLC SSD on one module; Optane/3D XPoint acts as a cache for the QLC SSD. By merging them, Intel sells both NAND and 3D XPoint and saves the PC manufacturer an M.2 slot. It is effectively two SSDs on one board, managed by the Intel RST driver and custom firmware.

Performance: we can expect the Optane portion to perform as it does in other Optane Memory tests: lower latency on anything cached, and a slowdown to SSD (QLC in this case) performance on large sequential transfers. This alone will make it a high-performance SSD. It will most likely be faster than the 970EVO on QD1 latency and speed, and slightly slower than the 970 on large file transfers or un-cached accesses, since QLC is slower than TLC in most applications. Power is unknown, but historically Optane is a power-hungry technology. A rough cache-hit model is sketched below.
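A minimal sketch of the caching argument, with illustrative QD1 read latencies (Optane cache ~10 µs, QLC ~90 µs, TLC ~70 µs); these are assumptions for illustration, not measured H10 or 970EVO numbers.

```python
def hybrid_latency_us(hit_rate, cache_us=10.0, backing_us=90.0):
    """Average QD1 read latency for an Optane-cached QLC SSD (illustrative numbers)."""
    return hit_rate * cache_us + (1.0 - hit_rate) * backing_us

tlc_us = 70.0  # stand-in for a TLC drive such as the 970EVO
for hit_rate in (0.9, 0.5, 0.1):
    h10 = hybrid_latency_us(hit_rate)
    print(f"hit rate {hit_rate:.0%}: H10 ~{h10:.0f} us vs TLC ~{tlc_us:.0f} us")
# Well-cached workloads beat the TLC drive easily; mostly un-cached workloads
# fall back toward QLC performance and lose.
```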

Cost is the key, as always: our data indicates H10 cost is slightly higher than the 970EVO at 500 GB and slightly lower at 1 TB (see details below). Since Intel is highly motivated to sell this product to move its built-up NAND and 3D XPoint inventory, it can price it aggressively, even at cost. This should make it a competitive product for PC OEMs: not HDD cheap, and not as cheap as Intel's pure QLC NVMe SSD, but a less expensive SSD with high NVMe performance.

It should be a solid competitor to the 970EVO for the highest-performance notebook storage.

H10 Review
https://www.tomshardware.com/news/intel-optane-h10-qlc-ssd,38387.html

Optane Memory Review
https://www.tomshardware.com/reviews/intel-optane-3d-xpoint-memory,5032.html

970EVO Review
https://www.tomshardware.com/reviews/samsung-970-evo-ssd-review,5573.html






We have costs for all SSDs, NAND, and new NVM technologies, along with performance numbers for Optane/3D XPoint.

Mark Webb
www.mkwventures.com