1 | 2023 SNIA. All Rights Reserved.
Virtual Conference, September 28-29, 2021
The Case for NFS-eSSDs
David Flynn, Hammerspace Founder & CEO

2 | Why Parallel NFS is Relevant Now More Than Ever
The current reality:
- Data orchestration is an absolute requirement across silos, sites, and clouds.
- High-performance requirements have gone mainstream.
- The world is moving to software-defined on commodity infrastructure.
- Linux is ubiquitous, which enables a sophisticated, standards-based, open-source client to come built in (not third-party).
Therefore, NFS 4.2 solves these problems:
- File access that bridges storage silos, sites, and clouds.
- A parallel file system with no need to install third-party client and management tools.
- Avoids the need to rewrite applications to use object storage.

3 | Unstructured Data Orchestration System in Action
[Diagram only.]

4 | Direct Attached Storage
[Diagram: SATA SSD (CTRL) -> RAID controller -> PCIe -> CPU and DRAM -> GPU, with three numbered data retransmissions.]
The RAID controller is the bottleneck and adds an additional serial data retransmission.

5 | Direct Attached Storage - NVMe
[Diagram: NVMe SSD (CTRL) -> PCIe -> CPU and DRAM -> GPU, with two numbered data retransmissions.]
NVMe eliminates the RAID controller.

6 | Direct Attached Storage - NVMe and GPU Direct
[Diagram: NVMe SSD -> PCIe -> GPU in a single retransmission; the host CPU and DRAM sit off the data path. Three mapping layers are marked: block to flash address mapping (in the SSD controller), DENTRY to INODE mapping, and file offset to block mapping (in the host).]
GPU Direct eliminates the host CPU and memory.
But what about with shared storage?
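The three mapping layers named above can be sketched as a minimal Python model. This is purely illustrative: the paths, inode numbers, and table contents are hypothetical, not from the deck.

```python
# Toy model of the three lookup layers a file read traverses:
# DENTRY -> inode, file offset -> logical block, logical block -> flash address.
# All names and table contents here are hypothetical illustrations.

BLOCK_SIZE = 4096
DENTRIES = {"/data/model.bin": 42}        # directory entry -> inode number
INODES = {42: {0: 1000, 1: 1001}}         # inode: file block index -> logical block (LBA)
FTL = {1000: 0xA000, 1001: 0xA100}        # flash translation layer: LBA -> flash address

def resolve(path: str, offset: int) -> int:
    """Walk all three mappings to find the flash address backing a byte offset."""
    inode = DENTRIES[path]                       # layer 1: DENTRY -> INODE
    lba = INODES[inode][offset // BLOCK_SIZE]    # layer 2: file offset -> block
    return FTL[lba] + offset % BLOCK_SIZE        # layer 3: block -> flash address

print(hex(resolve("/data/model.bin", 4100)))     # three table lookups per access
```

Each layer is a separate table maintained by a different component (the filesystem for the first two, the SSD controller for the third), which is why the later slides treat them as distinct work to be eliminated.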
7 | Network Attached Storage (e.g. NetApp, Isilon, Pure, Qumulo, Ceph)
[Diagram: nine numbered data retransmissions from the NVMe SSD (CTRL) through the storage system (DRAM, CPU, NIC), across the network via NFS3, and through the client (NIC, DRAM, CPU, PCIe) to the GPU. Mapping layers: block to flash address mapping in the SSD controller; DENTRY to INODE mapping and file offset to block mapping in the hosts.]
The storage back-end host (CPU and memory) is the first bottleneck.

8 | Network Attached Storage (using NVMEoF, e.g. VAST, Weka)
[Diagram: eight numbered data retransmissions; data crosses both a fabric (NVMEoF) between the storage system and the file server front end, and a network (NFS3) to the client.]
The file server front end is an even bigger bottleneck.

9 | Network Attached Storage (using NFS4.2, e.g. Hammerspace)
[Diagram: four numbered data retransmissions; the client reads data directly from the storage system over NFS4.2, while a separate metadata server handles the DENTRY to INODE mapping; the NFS3 path through a file server front end is crossed out.]
NFS4.2 has no bottlenecks, eliminates 4 of 9 data retransmissions, and doesn't need NVMEoF or even an internal network!

10 | Hammerspace Architecture
[Architecture diagram only.]

11 | Network Attached Storage (using NFS4.2 and NFS-eSSD)
[Diagram: three numbered data retransmissions; an NFS-eSSD with an on-board SOC serves data directly to the client over NFS4.2, with a metadata server handling the DENTRY to INODE mapping.]
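The NFS4.2 split shown in the preceding slides, with a metadata server on the control path and data read directly from the device, can be sketched as a toy Python model. The server names, file handles, and the single-call LAYOUTGET simplification are hypothetical, not the actual protocol messages.

```python
# Toy model of the pNFS flexible-files split: the client asks a metadata
# server where a file's data lives, then reads directly from the data
# server, with no file-server front end in the data path.
# Server names, handles, and contents below are hypothetical.

METADATA_SERVER = {"/proj/input.dat": {"data_server": "ds1", "handle": "fh-77"}}
DATA_SERVERS = {"ds1": {"fh-77": b"hello pnfs"}}

def layoutget(path: str) -> dict:
    """Control path (NFS4.2): ask the metadata server for the file's layout."""
    return METADATA_SERVER[path]

def read(path: str, offset: int, length: int) -> bytes:
    layout = layoutget(path)                   # one metadata round trip
    ds = DATA_SERVERS[layout["data_server"]]   # data path: straight to the device
    return ds[layout["handle"]][offset:offset + length]

print(read("/proj/input.dat", 0, 5))
```

The point of the split is that the metadata server is consulted once per layout, not once per byte, so the data path scales with the number of devices and network ports rather than with front-end servers.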
12 | Network Attached Storage (using NFS4.2 and NFS-eSSD)
[Diagram: the SOC in the NFS-eSSD performs a single file offset to flash address mapping, replacing the separate file offset to block and block to flash address mappings; the NFS3 path is crossed out.]
NFS4.2 with the proposed NFS-eSSDs eliminates 6 of 9 data retransmissions, eliminates the double mapping layers, and scales 1:1 with network ports!

13 | Benefits
- Lower latency
- Lower power consumption
- Lower operational (and capital) costs
- Lower write amplification
- Higher density without sacrifice of potential performance
- Higher access density
- Better inherent reliability, availability, and serviceability
- Much wider dynamic range of scale: scale up (hyperscale) and scale down (SOHO, maybe on USB-C)
- Enables computational storage: compression, deduplication, encryption, erasure/error coding, copy/clone, filter, search, join, map-reduce, etc. can be offloaded to the SSD now that it understands file layout

14 | Why Now
- AI/ML workloads demanding efficient performance
- Data governance / cloud computing needs orchestration
- Flash performance can easily saturate PCIe/Ethernet
- E1.S and other form factors (density and power)
- 64-bit processor IP availability
- Processor performance density
- IPv6, RoCE
- Embedded Linux with high-performance, lightweight filesystems (XFS)
- High-performance, lightweight NFS server (kNFSd)
- Standardized Parallel NFS 4.2 Flexible Files

15 | Questions?

16 | Please take a moment to rate this session. Your feedback is important to us.
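The "double mapping" that the proposed NFS-eSSD collapses can be illustrated with a small Python sketch. The addresses and table contents are hypothetical; the point is that one merged table replaces two chained lookups.

```python
# Conventional stack: two tables, two lookups per access
# (file offset -> logical block in the host, logical block -> flash in the SSD).
# NFS-eSSD proposal: the SSD's SOC understands file layout, so a single
# file offset -> flash address table suffices. All values are hypothetical.

BLOCK_SIZE = 4096

FILE_TO_LBA = {0: 500, 1: 501}              # host filesystem extent map
LBA_TO_FLASH = {500: 0x9000, 501: 0x9100}   # SSD flash translation layer

def conventional(offset: int) -> int:
    lba = FILE_TO_LBA[offset // BLOCK_SIZE]           # lookup 1 (host)
    return LBA_TO_FLASH[lba] + offset % BLOCK_SIZE    # lookup 2 (SSD)

FILE_TO_FLASH = {0: 0x9000, 1: 0x9100}      # single merged map in the SOC

def essd(offset: int) -> int:
    return FILE_TO_FLASH[offset // BLOCK_SIZE] + offset % BLOCK_SIZE  # one lookup

# Same physical address either way; one mapping layer instead of two.
assert conventional(8191) == essd(8191)
```

Merging the maps is also what enables the computational-storage offloads listed under Benefits: once the device knows which flash pages belong to which file, operations like search or erasure coding can run in the SSD itself.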