Who needed that SSD/RAM?!?
Chris Katko

This might be right up yer alley:

https://arstechnica.com/information-technology/2017/03/intels-first-optane-ssd-375gb-that-you-can-also-use-as-ram/

A new type of (still-undisclosed) non-volatile RAM from Intel, using "charge resistance" or something similarly magically named.

- Faster, denser, and cheaper than flash SSDs. NO WEAR LIMIT. It can literally be rewritten end-to-end multiple times a day and never "runs out" of useful sectors like a flash SSD does.

- Denser and cheaper than DRAM. (Slower, but still faster than Flash.)

- Non-volatile, retains state forever.

- Byte addressable (*)

- (Looks like it generates very little heat. Not mentioned in the article, but I'm judging from the heatsink in the screenshot.)

- Working models are already ready to ship, and 2x, 4x, 8x, and 16x sizes are coming, roughly every 6 months.

- Planning 1.5 TB models in a year or two.

- Uses PCIe. (DIMM models are planned. What... why???)

(*) I have no idea why it's byte addressable. I was just mentioning to some friends that it's likely a side-effect of the technology and not a design decision. Not even RAM is truly byte addressable; it's accessed in chunks of ~64/128 bytes (IIRC). My guess is that, with this technology, the read-then-write needed to blast away a whole page/sector costs more than simply re-writing a single byte, and that at this process node the extra millions of transistors needed to select at the byte level are a mere drop in the ocean. That is, they traded "more transistors" (since we have more than we need!) for a "dumber", simpler addressing scheme (byte level), rather than a "smarter" (= slower) but more compact scheme using page-level access, which requires a read of the page and then a re-write of the whole page for every write. That read-modify-write pattern is common to SSDs, physical HDDs, and even DRAM.
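
To illustrate the trade-off I mean, here's a tiny C sketch using a simulated, made-up device (obviously not how the real hardware or its controller works): with page-level addressing, flipping one byte forces a read of the whole page, a modify, and a write-back, while byte-level addressing just touches the one cell.

/* A minimal sketch, assuming a simulated device backed by an in-memory
 * array. The point: with page-level addressing, flipping one byte costs a
 * full read-modify-write of the page; byte-level addressing skips that. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE  4096          /* made-up page size, for illustration only */
#define DEV_PAGES  16
#define DEV_BYTES  (PAGE_SIZE * DEV_PAGES)

static uint8_t device[DEV_BYTES];     /* stand-in for the actual media     */
static long bytes_moved;              /* how much data the "media" touched */

/* Page-addressed device: one byte changed == whole page read and rewritten. */
static void write_byte_paged(uint64_t addr, uint8_t value)
{
    uint8_t page[PAGE_SIZE];
    uint64_t base = (addr / PAGE_SIZE) * PAGE_SIZE;

    memcpy(page, &device[base], PAGE_SIZE);   /* read the page   */
    page[addr % PAGE_SIZE] = value;           /* modify one byte */
    memcpy(&device[base], page, PAGE_SIZE);   /* write it back   */
    bytes_moved += 2 * PAGE_SIZE;
}

/* Byte-addressed device: the controller just pokes the one cell. */
static void write_byte_direct(uint64_t addr, uint8_t value)
{
    device[addr] = value;
    bytes_moved += 1;
}

int main(void)
{
    bytes_moved = 0;
    for (int i = 0; i < 1000; i++)
        write_byte_paged((uint64_t)(i * 37) % DEV_BYTES, 0xAB);
    printf("page-addressed: %ld bytes moved for 1000 one-byte writes\n", bytes_moved);

    bytes_moved = 0;
    for (int i = 0; i < 1000; i++)
        write_byte_direct((uint64_t)(i * 37) % DEV_BYTES, 0xAB);
    printf("byte-addressed: %ld bytes moved for 1000 one-byte writes\n", bytes_moved);
    return 0;
}

For 1000 one-byte writes, the page-addressed version shuffles about 8 MB of data around while the byte-addressed one moves 1 KB, which is exactly the kind of wear and bandwidth you'd want to avoid if the media lets you.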

Obviously, it's just a guess, but these guys are literal geniuses, so they wouldn't come up with "byte addressable" without a reason, especially when they're targeting a market that already uses page-level addressing. Why would they change from "the norm"?

And while I know I'm just guessing, I'm excited to find out the "why?". Little things like that offer insight into the underlying technology.

Eric Johnson

Planning 1.5 TB models in a year or two.

Holy shit! :o

type568

I'm wondering what applications it's actually good for, though.
The area of application isn't that wide. The transfer rates aren't great at all for the price, so it's all about latency. Where do you need a random access pattern so wide that the whole dataset has to sit in something with this kind of latency?

Bob Keane

Will it work in your new rig?

Thomas Fjellstrom

The throughput will probably improve a lot. But you get near-RAM-speed non-volatile storage at super low latencies (compared to even NVMe flash), so you could just run things directly from this new stuff.

I'm very interested in it from a disk cache perspective. I recently bought an el-cheapo Samsung 960 EVO 256GB NVMe M.2 stick (and, ironically, a PCIe riser card for it) to slap in my big dual-socket Opteron box as a VM storage cache. It has greatly improved random access performance to the 4x2TB RAID10 (4TB usable) VM storage pool.

Sadly the EVO (especially the 256GB one) is a cheap stick with poor queued performance and even worse sequential write performance once its SLC acceleration buffer runs out. But sequential performance isn't super important on a shared storage pool.
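
(If anyone wants to sanity-check that kind of random access difference themselves, here's a minimal C sketch of a random 4K read latency test. The device path is just an example, it needs root for a raw device, and real tools like fio do this far better.)

/* Minimal random 4K read latency sketch. Build: cc -O2 randread.c
 * Usage: ./a.out /dev/nvme0n1   (device path is just an example)     */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define BLOCK   4096
#define READS   1000
#define SPAN    (1ULL << 30)     /* read within the first 1 GiB */

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s <device-or-file>\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY | O_DIRECT);   /* keep the page cache out of it */
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&buf, BLOCK, BLOCK) != 0) { fprintf(stderr, "alloc failed\n"); return 1; }

    srand(12345);
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    for (int i = 0; i < READS; i++) {
        off_t off = ((off_t)rand() % (SPAN / BLOCK)) * BLOCK;  /* aligned random offset */
        if (pread(fd, buf, BLOCK, off) != BLOCK) { perror("pread"); return 1; }
    }

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("avg random 4K read latency: %.1f us\n", us / READS);

    free(buf);
    close(fd);
    return 0;
}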

type568

The throughput will probably improve a lot. But you get near-RAM-speed non-volatile storage at super low latencies (compared to even NVMe flash), so you could just run things directly from this new stuff.

I guess we may see low-end computers operating without RAM within our lifetimes:
very fast permanent storage, plus bigger CPU caches. And well, since RAM is still much faster, high-end computers will keep using it..

However, there's another "but": operating without RAM could change programming SO much..
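
Just to sketch what that could look like in practice: if the storage is byte-addressable and exposed as something you can memory-map (Linux can already do this for persistent memory), a program could keep its live data structures directly in it instead of serializing them to disk. A toy C example, where an ordinary mmap()ed file stands in for the persistent region (the path and struct are made up):

/* Toy sketch: keep a live data structure directly in "persistent" storage
 * by mmap()ing a file. With byte-addressable NVM exposed to the program,
 * the same idea works without a block device or page cache in the middle.
 * The path and struct layout here are made up for illustration.          */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

struct app_state {
    unsigned magic;        /* marks an initialized region */
    unsigned long runs;    /* survives program restarts   */
};

#define STATE_MAGIC 0xCAFE1234u

int main(void)
{
    const char *path = "/tmp/pmem_counter";   /* stand-in for a pmem-backed file */

    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, sizeof(struct app_state)) < 0) { perror("ftruncate"); return 1; }

    struct app_state *st = mmap(NULL, sizeof *st, PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, 0);
    if (st == MAP_FAILED) { perror("mmap"); return 1; }

    if (st->magic != STATE_MAGIC) {           /* first run: initialize in place */
        st->magic = STATE_MAGIC;
        st->runs  = 0;
    }
    st->runs++;                               /* ordinary store, no write() call */

    /* msync() flushes the mapping; on real NVM a cache-line flush would do. */
    msync(st, sizeof *st, MS_SYNC);
    printf("this program has run %lu time(s)\n", st->runs);

    munmap(st, sizeof *st);
    close(fd);
    return 0;
}

Run it twice and the counter keeps counting: no file format, no load/save step. That's roughly the "changes programming" part.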
