2023 SNIA. All Rights Reserved. Virtual Conference, September 28-29, 2021

Standards-Based Parallel Global File System
Automated Data Orchestration with NFS
David Flynn, Hammerspace Founder & CEO

Overview
1. Why Parallel NFS now
2. How NFSv4.2 makes Parallel NFS enterprise-NAS capable
3. Building a standards-based parallel global file system on NFSv4.2
4. Performance
5. Customers and use cases
6. Q&A

Why Parallel NFS Is Relevant Now More Than Ever
The current reality:
- Data orchestration is an absolute requirement across silos, sites, and clouds.
- High-performance requirements have gone mainstream.
- The world is moving to software-defined storage on commodity infrastructure.
- Linux is ubiquitous, which enables a sophisticated, standards-based, open-source client to come built in (not third-party).
Therefore, NFSv4.2 solves these problems:
- File access that bridges storage silos, sites, and clouds.
- A parallel file system with no third-party client or management tools to install.
- No need to rewrite applications to use object storage.
NFSv4.2 Enhancements and Fixes
- Elimination of excess protocol chatter using compound operations (versus serialized operations).
- Caching and delegations (including client-side timestamp generation, eliminating the need to go to the server).
  - This eliminates 80% of NFSv3's GETATTR traffic.
- File open/create is a single round trip to the metadata service (versus three serial round trips for NFSv3).
- A subsequent open and read of a file just written is zero round trips (versus two serial round trips on NFSv3).
- Multiple parallel network connections between client and server, plus optional RDMA.
  - Avoids TCP stack performance limitations.
- Ability to write to multiple storage nodes synchronously (striping, mirroring):
  - To build highly reliable, highly available systems from unreliable storage nodes.
  - To distribute even a single file's access across multiple back-end NFSv3 storage nodes.
- Ability to move data while it is live and being accessed, without interruption.
- File-granular access/performance telemetry gathering and reporting.
- Ability to serve SMB over NFS:
  - Mapping of Active Directory principals and ACLs over the NFS protocol.
  - SMB extended attributes carried over the NFS protocol (future).
  - Converged file range locking (future).
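The round-trip claims above can be made concrete with a little arithmetic. A minimal sketch, assuming a hypothetical 0.5 ms network round-trip time (the RTT is an assumption for illustration, not a number from the deck):

```python
# Illustrative round-trip arithmetic for the open/create claims above.
# The 0.5 ms RTT is a hypothetical network latency, not from the slides.

RTT_MS = 0.5  # assumed network round-trip time, in milliseconds

# Serial round trips to open/create a file, then re-open and read it:
NFSV3 = {"open_create": 3, "reopen_read": 2}
NFSV42 = {"open_create": 1, "reopen_read": 0}  # compounds + delegations

def latency_ms(trips: dict) -> float:
    """Total protocol latency when every round trip is serialized."""
    return sum(trips.values()) * RTT_MS

v3 = latency_ms(NFSV3)    # 5 round trips -> 2.5 ms
v42 = latency_ms(NFSV42)  # 1 round trip  -> 0.5 ms
print(f"NFSv3: {v3} ms, NFSv4.2: {v42} ms, saved: {v3 - v42} ms")
```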
Hammerspace Architecture Overview
[Diagram: Linux clients reach the metadata service over NFSv4.2 with FlexFiles layouts, and reach data over NFSv3 to file/NAS storage.]
Metadata: Hammerspace "Anvil"
- Bare-metal, virtual, or container deployment.
- Synchronously replicated cluster for HA.
- Billions of inodes, with millions actively open.
- Full enterprise NAS data services.
- Instant data-in-place assimilation.
Client
- NFSv4.2 in-box from RHEL 7.6 onward.
Data
- Any NFSv3 NAS.
- Leverages NetApp and Isilon file-clone APIs.
- Linearly scalable data-path performance.
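The split metadata/data paths in this architecture can be sketched as follows: the client asks the metadata service for a layout, then performs I/O directly against the NFSv3 data servers named in it. The structures, server names, and paths here are illustrative stand-ins, not actual pNFS protocol messages:

```python
# Conceptual sketch of the pNFS FlexFiles flow: metadata and data take
# separate paths. All names and fields below are hypothetical.

from dataclasses import dataclass

@dataclass
class Layout:
    """Which NFSv3 data servers hold a file's data (simplified)."""
    path: str
    data_servers: list  # e.g. ["dsx-1", "dsx-2"]

METADATA_SERVICE = {  # stand-in for the Anvil metadata cluster
    "/proj/frame-0001.exr": Layout("/proj/frame-0001.exr", ["dsx-1", "dsx-2"]),
}

def open_file(path: str) -> Layout:
    # One NFSv4.2 round trip: open plus layout grant in a single compound.
    return METADATA_SERVICE[path]

def read_file(layout: Layout) -> str:
    # The data path bypasses the metadata service entirely.
    return f"reading {layout.path} directly from {', '.join(layout.data_servers)}"

print(read_file(open_file("/proj/frame-0001.exr")))
```

The design point this illustrates: once the client holds a layout, the metadata service is out of the data path, which is what allows the data side to scale linearly.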
DSX Store Function
- Bare-metal, virtual, or container deployment.
- Parallel, linearly scalable performance.
- Sources any block storage:
  - Direct-attached SSD, NVMe, HDD.
  - Optional local striping and mirroring.
  - Network-attached SAN, iSCSI, EBS.
- Supports share snapshots and file clones.
- Clients can mirror writes to multiple DSX nodes, or use erasure-coded groups of DSX nodes.
[Diagram: Linux clients; metadata over NFSv4.2 FlexFiles; data over NFSv3 to file/NAS; NVMe/SCSI/ATA block storage as DAS or SAN.]
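The striping-plus-mirroring idea above can be sketched as a client-side mapping from a file offset to the set of storage nodes that must receive the write. The stripe size, replica count, and node names are assumptions for illustration:

```python
# Minimal sketch of striping + mirroring across DSX nodes: each stripe of
# a file is written to MIRRORS nodes, chosen round-robin by stripe index.
# Stripe size and node names are hypothetical.

STRIPE_SIZE = 1 << 20                      # assume 1 MiB stripes
NODES = ["dsx-1", "dsx-2", "dsx-3", "dsx-4"]
MIRRORS = 2                                # each stripe lands on 2 nodes

def nodes_for_offset(offset: int) -> list:
    """Return the DSX nodes a write at `offset` must go to."""
    stripe = offset // STRIPE_SIZE
    return [NODES[(stripe + m) % len(NODES)] for m in range(MIRRORS)]

print(nodes_for_offset(0))            # ['dsx-1', 'dsx-2']
print(nodes_for_offset(STRIPE_SIZE))  # ['dsx-2', 'dsx-3']
```

Because every stripe exists on more than one node, the loss of a single node leaves every byte still readable, which is how reliable systems are assembled from unreliable storage nodes.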
DSX Mover / Cloud Mover Function
- Bare-metal, virtual, or container deployment.
- Parallel, linearly scalable performance.
- Stateless, scale-out.
- Fully automatic scheduling.
- File-to-file mobility (NFSv3), with no interruption to ongoing access.
- File-to-object mobility (S3, Azure Blob, etc., over HTTPS).
- Global dedupe, compression, encryption.
- Transfer- and egress-optimized.
[Diagram: Linux clients; metadata over NFSv4.2 FlexFiles; data paths to NFSv3 file/NAS, to S3/Azure Blob object storage in the cloud, and to NVMe/SCSI/ATA block storage as DAS or SAN.]
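The "no interruption to ongoing access" property rests on ordering: copy the data to the new target first, then swap the file's layout in metadata, so clients only ever resolve to a complete copy. A toy sketch of that ordering, with all names and structures hypothetical:

```python
# Toy sketch of live data mobility: copy first, then atomically repoint
# the metadata, then reclaim the old copy. Paths/URIs are illustrative.

layouts = {"/proj/a.dat": "nfs://old-nas/a.dat"}  # live metadata table
storage = {"nfs://old-nas/a.dat": b"payload"}

def move_file(path: str, target: str) -> None:
    src = layouts[path]
    storage[target] = storage[src]  # 1. copy while the old location stays live
    layouts[path] = target          # 2. atomic metadata swap (layout recall)
    del storage[src]                # 3. old copy reclaimed only afterwards

move_file("/proj/a.dat", "s3://bucket/a.dat")
print(layouts["/proj/a.dat"])  # s3://bucket/a.dat
```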
DSX Portal Function: Legacy Client Support
- Bare-metal, virtual, or container deployment.
- Parallel, linearly scalable performance.
- Stateless, scale-out.
- Virtual IPs with failover.
- NFSv3, SMB 2.x/3, and S3.
- Global file locking.
- Extensive caching: metadata, read data, and write-back or write-through caching as appropriate.
[Diagram: legacy Windows, Mac, and ESX clients reach the portal for data plus metadata over NFSv3, SMB, and S3; behind it, the portal uses NFSv4.2 FlexFiles metadata and NFSv3 file, object, and block data paths.]
DSX Containerized Microservices
- Deployment flexibility:
  - Co-resident on client nodes (hyper-converged).
  - Dedicated storage-only nodes.
- Eliminates networking hops: port, cost, and latency reduction.
- Bypasses serialization over NFS: I/O short-circuits in the kernel.
- Achieves full NVMe performance: tens of gigabytes per second, millions of IOPS, microsecond latency.
[Diagram: same architecture as above, with Windows, Mac, and ESX clients via the portal, Linux clients via NFSv4.2 FlexFiles, and NFSv3 file, object, and block back ends.]

Unstructured Data Orchestration System in Action
Example: Linear Scalability, Saturating the Infrastructure
Performance testing showed linear scaling from small to large:
- Saturating the network for throughput-dependent workloads.
- Saturating the back-end storage for IOPS-dependent workloads.
- 16 DSX nodes hit 1.17 Tbit/s with 32 KB file sizes, with low CPU overhead.
- Testing raw IOPS with this configuration, the same test using small 4 KB files achieved 6.17 million IOPS.
- Adding more nodes showed linear scalability to the limits of the network and storage.
Test suite:
- 192 clients, 16 DSX nodes (scalable to 500 DSX nodes per cluster x 16 clusters; DSX nodes can be mixed instance types).
- I/O pattern: randomized 90/10 read/write mix.
- NFS exports were mounted with NFS 4.2.
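A quick back-of-envelope pass over these figures helps put them in per-node terms. The per-node and bandwidth numbers below are derived from the slide's aggregates, not quoted from it:

```python
# Derived per-node figures from the published aggregates:
# 16 DSX nodes, 1.17 Tbit/s at 32 KB files, 6.17M IOPS at 4 KB files.

NODES = 16
THROUGHPUT_TBITS = 1.17
IOPS = 6.17e6
IO_SIZE = 4 * 1024  # 4 KiB small-file test

per_node_gbits = THROUGHPUT_TBITS * 1000 / NODES  # Gbit/s per node
per_node_iops = IOPS / NODES
iops_bandwidth_gbs = IOPS * IO_SIZE / 1e9         # GB/s moved by the IOPS test

print(f"{per_node_gbits:.1f} Gbit/s per node")    # ~73.1
print(f"{per_node_iops:,.0f} IOPS per node")      # ~385,625
print(f"{iops_bandwidth_gbs:.1f} GB/s at 4 KiB")  # ~25.3
```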
Data Orchestration Powering Space Exploration

Summary
- NFSv4.2 solves global high-performance file access.
- FlexFiles layouts provide the flexibility to bridge block, file, and object at scale, globally.
- Enables transparent live data mobility.
- Supports the software-defined commodity model.
- Leverages the existing, ubiquitous NFS client; no third-party client required.
- Supports extreme scale-out, high-performance file workflows across silos, sites, and clouds.

Questions?