Amazon S3 provides highly available object storage with excellent durability. Amazon S3 can be used for a wide range of use cases including storing backup data and archival content. In addition, it is possible to host a static website directly from an S3 bucket, adding to the versatility of the platform. In this 4th and final part of the series, we discuss additional features of the S3 service. This 4-part series will help you in your revision of the S3 platform and prepare you for both the AWS Certified Solutions Architect – Associate exam and the AWS Certified Developer – Associate exam.
Object URL
Each object in a bucket has a unique URL and can be accessed over the Internet. The URL of every object incorporates the bucket name, which must be unique across the entire AWS platform. S3 is flat object storage. This means that you cannot use it as a file system or install operating systems or applications on it. However, in order to provide a hierarchical feel, buckets can contain folders and objects can then be stored within those folders. Objects are then referenced using a key string which incorporates delimiters and prefixes (more below) to help define the unique object URL.
For example:
- https://exambucket.s3.amazonaws.com/examtips.doc
- https://exambucket.s3.amazonaws.com/solutionsarchitect/2017/examtips.doc
Prefixes and Delimiters
Following on from the above, S3 supports prefixes and delimiters when listing key names. This feature enables you to organise your objects in a hierarchy. Using prefixes and delimiters you can organise, browse and access objects in a bucket in a hierarchical fashion. You can use a slash (/) as a delimiter and use key names to emulate a file and folder structure.
So you can group objects to appear in a hierarchical format by placing them in ‘folders’, which essentially groups them together. The folder becomes a prefix with a delimiter of a slash (/). This enables better administration but does not give you the features associated with a file system; for example, there are no NTFS-style permissions on folders. A minimal listing sketch follows.
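As an illustration, here is a minimal sketch of listing the ‘folders’ under a prefix using the AWS SDK for Python (boto3). The bucket name exambucket and the prefix solutionsarchitect/ are assumptions for this example.

import boto3

s3 = boto3.client("s3")

# List objects grouped by the slash (/) delimiter under an assumed prefix.
response = s3.list_objects_v2(
    Bucket="exambucket",               # assumed bucket name
    Prefix="solutionsarchitect/",      # assumed 'folder' prefix
    Delimiter="/",
)

# CommonPrefixes behaves like a list of sub-folders under the prefix.
for folder in response.get("CommonPrefixes", []):
    print("Folder:", folder["Prefix"])

# Contents lists the objects stored directly under the prefix.
for obj in response.get("Contents", []):
    print("Object:", obj["Key"])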
Multipart Upload
With Amazon S3, you can upload files in a single ‘put’ operation up to a maximum of 5GB. Files larger than 5GB must be uploaded using the Multipart Upload API. The maximum object size, even when using multipart upload, is still 5TB. The Multipart Upload API improves the upload experience for larger objects. Object parts can be uploaded independently, in any order, and in parallel. Parts can be retransmitted if required, and when all the parts are uploaded, S3 assembles the parts to create the object. A minimal upload sketch follows the list of advantages below.
- Amazon recommends that any file greater than 100MB should be uploaded using multipart upload
- You can configure an object lifecycle policy so that incomplete uploads are aborted after a specified time
Using multipart upload provides the following advantages:
- Improved throughput—You can upload parts in parallel to improve throughput.
- Quick recovery from any network issues—Smaller part size minimises the impact of restarting a failed upload due to a network error.
- Pause and resume object uploads—You can upload object parts over time. Once you initiate a multipart upload there is no expiry; you must explicitly complete or abort the multipart upload.
- Begin an upload before you know the final object size—You can upload an object as you are creating it.
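As an illustration, here is a minimal sketch of a multipart upload using the high-level transfer manager in the AWS SDK for Python (boto3), which splits large files into parts and uploads them in parallel automatically. The bucket name, file name and part size shown are assumptions for this example.

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Use multipart upload for files over 100MB, uploading parts in parallel.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,   # switch to multipart above 100MB
    multipart_chunksize=64 * 1024 * 1024,    # assumed 64MB part size
    max_concurrency=8,                       # upload up to 8 parts in parallel
)

# Assumed local file and target bucket/key.
s3.upload_file(
    Filename="backup.tar",
    Bucket="exambucket",
    Key="backups/backup.tar",
    Config=config,
)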
Billing
Once you initiate a multipart upload, Amazon S3 will store all parts until you either complete or abort the upload. You will be billed for all storage, bandwidth, and requests for this multipart upload and its associated parts. If you abort the multipart upload, Amazon S3 deletes the upload and any parts that you have uploaded, and billing stops.
Amazon S3 only completes the object after all parts have been successfully uploaded and a successful request is sent to complete the multipart upload. Until then, Amazon S3 will not assemble the parts. You can configure a lifecycle rule using the AbortIncompleteMultipartUpload action to clean up incomplete multipart uploads and help manage your costs.
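As an illustration, here is a minimal sketch of such a lifecycle rule applied with boto3; the bucket name and the seven-day window are assumptions for this example.

import boto3

s3 = boto3.client("s3")

# Abort incomplete multipart uploads after an assumed 7 days to stop storage charges.
s3.put_bucket_lifecycle_configuration(
    Bucket="exambucket",    # assumed bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "abort-incomplete-multipart-uploads",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},   # apply to the whole bucket
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)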
Logging
Amazon S3 access logs are turned off by default and can be enabled per bucket. When enabled, you need to define a target bucket to store the logs. Logging gives you access to information such as the following (a minimal sketch of enabling logging follows the list):
- Requestor account and IP Address
- Bucket Name
- Request Time
- Action (Get, Put, List, etc.)
- Response status and error codes
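Here is a minimal sketch of enabling server access logging with boto3. The source bucket, target bucket and log prefix are assumptions, and the target bucket must already allow the S3 log delivery service to write to it.

import boto3

s3 = boto3.client("s3")

# Deliver access logs for the source bucket into an assumed logging bucket.
s3.put_bucket_logging(
    Bucket="exambucket",                       # assumed source bucket
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "exambucket-logs", # assumed log destination bucket
            "TargetPrefix": "access-logs/",    # assumed prefix for log objects
        }
    },
)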
Notifications
You can send out Event Notifications for actions that are performed on your bucket, such as uploading a new object or deleting an object. You can make this very granular so that, for example, you are notified each time someone uploads an object with a specific key name prefix or suffix (such as a particular file extension).
In addition, you can use Event Notifications to set up triggers that perform specific actions on your objects; for example, transcoding media files when they are uploaded or performing other object manipulation. Notifications can be sent through the following services (a configuration sketch follows the list):
- Simple Notification Service (Amazon SNS)
- Simple Queue Service (Amazon SQS)
- AWS Lambda to invoke a Lambda function
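As an illustration, here is a minimal sketch of configuring an event notification with boto3 that invokes a Lambda function when new .mp4 objects are uploaded. The bucket name and the Lambda function ARN are assumptions, and S3 must already have permission to invoke the function.

import boto3

s3 = boto3.client("s3")

# Invoke an assumed Lambda function whenever a new .mp4 object is created.
s3.put_bucket_notification_configuration(
    Bucket="exambucket",   # assumed bucket name
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                # Assumed function ARN for a media-transcoding Lambda
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:111122223333:function:transcode-media",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "suffix", "Value": ".mp4"},
                        ]
                    }
                },
            }
        ]
    },
)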
Cross-Origin Resource Sharing (CORS)
When you want a web application served from one domain to access assets stored in an S3 bucket in a different domain, you need to set up Cross-origin resource sharing (CORS). For example, you can configure a website hosted in one bucket to load content from another bucket in a different domain. This enables you to build rich client-side web applications.
Example
You configure an S3 bucket named website to host static web content at https://website.s3-website-us-east-1.amazonaws.com. You then want to use JavaScript on the web pages stored in that bucket to make authenticated GET and PUT requests against the same bucket using its REST API endpoint, website.s3.amazonaws.com. Under normal circumstances, the browser would block these requests. Using CORS, you can configure your bucket to explicitly enable cross-origin requests from the website endpoint, website.s3-website-us-east-1.amazonaws.com.
To create a CORS configuration, you need to create an XML document with rules that identify the origins that you will allow to access your bucket, the operations (HTTP methods) you will support for each origin, and other operation-specific information. Note that you can add up to 100 rules to the configuration.
Example CORS Rule 1
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://www.example1.com</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Example CORS Rule 2
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://www.example.com</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <ExposeHeader>x-amz-server-side-encryption</ExposeHeader>
  </CORSRule>
</CORSConfiguration>
AllowedOrigin – Here, you need to specify the origins that you want to allow cross-domain requests from, for example, https://www.example.com. You can also use the * wildcard character, such as https://*.example.com. Furthermore, you can specify * as the origin to enable all origins to send cross-origin requests.
AllowedHeader – Here, you need to specify which headers are allowed in a preflight request through the Access-Control-Request-Headers header. Each header name in the Access-Control-Request-Headers header must match a corresponding entry in the rule before a response will be sent.
AllowedMethod – In the CORS configuration, you can specify the following values for the AllowedMethod element.
- GET
- PUT
- POST
- DELETE
- HEAD
ExposeHeader – Here, you identify a header in the response that you want customers to be able to access from their applications (for example, from a JavaScript XMLHttpRequest object).
MaxAgeSeconds – Here, you need to specify the time in seconds that your browser can cache the response for a preflight request as identified by the resource, the HTTP method, and the origin.
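As an illustration, here is a minimal sketch of applying a configuration equivalent to Example CORS Rule 2 with boto3; the bucket name is an assumption for this example.

import boto3

s3 = boto3.client("s3")

# Apply a CORS configuration equivalent to Example CORS Rule 2 above.
s3.put_bucket_cors(
    Bucket="exambucket",   # assumed bucket name
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["https://www.example.com"],
                "AllowedMethods": ["PUT", "POST"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
                "ExposeHeaders": ["x-amz-server-side-encryption"],
            }
        ]
    },
)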
Additional Exam Tips:
- Amazon Simple Storage Service – S3 Exam Tips Part 1
- Amazon Simple Storage Service – S3 Exam Tips Part 2
- Amazon Simple Storage Service – S3 Exam Tips Part 3
180 Practice Exam Questions – Get Prepared for your Exam Day!
Our Exam Simulator with 180 practice exam questions comes with comprehensive explanations that will help you prepare for one of the most sought-after IT Certifications of the year. Register Today and start preparing for your AWS Certified Solutions Architect – Associate Exam.