Amazon Elastic File System (Amazon EFS) allows EC2 instances, AWS Lambda functions, and containers to share access to a fully-managed file system. First introduced in 2015 and generally available in 2016, Amazon EFS delivers low-latency performance for a wide variety of workloads and can scale to thousands of concurrent clients or connections. Since the 2016 launch we have continued to listen and to innovate, and have added many new features and capabilities in response to your feedback. These include on-premises access via Direct Connect (2016), encryption of data at rest (2017), provisioned throughput and encryption of data in transit (2018), an infrequent access storage class (2019), IAM authorization & access points (2020), lower-cost one zone storage classes (2021), and more.
Today I am happy to announce that you can now use replication to automatically maintain copies of your EFS file systems for business continuity or to help you meet compliance requirements as part of your disaster recovery strategy. You can set this up in minutes for new or existing EFS file systems, with replication either within a single AWS region or between two AWS regions in the same AWS partition.
Once configured, replication begins immediately. All replication traffic stays on the AWS global backbone, and most changes are replicated within a minute, with an overall Recovery Point Objective (RPO) of 15 minutes for most file systems. Replication does not consume any burst credits, and it does not count against the provisioned throughput of the file system.
To configure replication, I open the Amazon EFS Console, view the file system that I want to replicate, and select the Replication tab:
I click Create replication, choose the desired destination region, and select the desired storage (Regional or One Zone). I can use the default KMS key for encryption or I can choose another one. I review my settings and click Create replication to proceed:
Replication begins right away and I can see the new, read-only file system immediately:
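The same configuration can also be scripted. Here's a minimal sketch using the AWS CLI; the source file system ID is a hypothetical placeholder:

```shell
# Create a replica of an existing file system in another region.
# fs-0123456789abcdef0 is a placeholder for your source file system ID.
# Omitting KmsKeyId uses the default KMS key; add AvailabilityZoneName
# to the destination shorthand for One Zone storage.
aws efs create-replication-configuration \
    --source-file-system-id fs-0123456789abcdef0 \
    --destinations Region=us-west-2
```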
A new CloudWatch metric, TimeSinceLastSync, is published when the initial replication is complete, and periodically after that:
The replica is created in the selected region. I create any necessary mount targets and mount the replica on an EC2 instance:
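One way to pull this metric from the command line is a sketch like the following (the destination file system ID is a hypothetical placeholder, and the `date -d` syntax assumes GNU date):

```shell
# Fetch the maximum TimeSinceLastSync value (in seconds) over the past hour.
# fs-0fedcba9876543210 is a placeholder for the destination file system ID.
aws cloudwatch get-metric-statistics \
    --namespace AWS/EFS \
    --metric-name TimeSinceLastSync \
    --dimensions Name=FileSystemId,Value=fs-0fedcba9876543210 \
    --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --period 300 \
    --statistics Maximum
```

A value comfortably under 900 seconds indicates replication is keeping within the 15-minute RPO mentioned above.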
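These steps can also be done from the CLI; the file system, subnet, and security group IDs below are hypothetical placeholders, and the mount command assumes the amazon-efs-utils package is installed on the instance:

```shell
# Create a mount target for the replica in one of my subnets.
# All IDs are placeholders.
aws efs create-mount-target \
    --file-system-id fs-0fedcba9876543210 \
    --subnet-id subnet-0123456789abcdef0 \
    --security-groups sg-0123456789abcdef0

# Then, on the EC2 instance, mount the (read-only) replica:
sudo mkdir -p /mnt/replica
sudo mount -t efs fs-0fedcba9876543210:/ /mnt/replica
```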
EFS tracks modifications to the blocks (currently 4 MB) that are used to store files and metadata, and replicates the changes at a rate of up to 300 MB per second. Because replication is block-based, it is not crash-consistent; if you need crash-consistency you may want to take a look at AWS Backup.
After I have set up replication, I can change the lifecycle management, intelligent tiering, throughput mode, and automatic backup setting for the destination file system. The performance mode is chosen when the file system is created, and cannot be changed.
Initiating a Fail-Over
If I need to fail over to the replica, I simply delete the replication. I can do this from either side (source or destination) by clicking Delete and confirming my intent:
I type delete to confirm, and click Delete replication to proceed:
The former read-only replica is now a writable file system that I can use as part of my recovery process. To fail back, I create a replica in the original location, wait for replication to finish, and then delete the replication.
I can also use the command line and the EFS APIs to manage replication. For example:
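The fail-over step can likewise be scripted; a sketch, with a placeholder source file system ID:

```shell
# Deleting the replication configuration stops replication and
# makes the former replica writable.
# fs-0123456789abcdef0 is a placeholder for the source file system ID.
aws efs delete-replication-configuration \
    --source-file-system-id fs-0123456789abcdef0
```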
CreateReplicationConfiguration – Establish replication for an existing file system.
DescribeReplicationConfigurations – See the replication configuration for a source or destination file system, or all of the replication configurations in an AWS account. The data returned for a destination file system also includes LastReplicatedTimestamp, the time of the last successful sync.
DeleteReplicationConfiguration – End replication for a file system.
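These API names map directly onto AWS CLI subcommands. For example, a sketch of checking the last successful sync time for a destination file system (the ID is a hypothetical placeholder):

```shell
# Query the replication configuration for the time of the last
# successful sync. The file system ID is a placeholder.
aws efs describe-replication-configurations \
    --file-system-id fs-0fedcba9876543210 \
    --query 'Replications[0].Destinations[0].LastReplicatedTimestamp'
```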
Available Now
This new feature is available now and you can start using it today in the AWS US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), South America (São Paulo), and GovCloud Regions.
You pay the usual storage charges for the original and replica file systems, plus any applicable cross-region or intra-region data transfer charges.