Tag: aws

  • Copy Files to S3 Using AWS CLI Tools

    Introduction to the AWS CLI

    There are three ways to upload and download data to Amazon Web Services: the command line interface (CLI), an AWS SDK, or the S3 REST API. In this article, we will explore the command line interface and the most common commands for managing an S3 bucket.

    The maximum size of a file that you can upload using the Amazon S3 console is 160 GB, and the maximum size of a single object is 5 TB. You cannot use the s3api commands for uploads larger than 5 GB. The command line tools can achieve upload speeds greater than 7 MB/s, and you can go even faster by turning on Transfer Acceleration, but that is not recommended because it incurs an additional cost.

    Common switches

    • --dryrun = show which files would be transferred, without actually running the command.
    • --summarize = include a total at the bottom of the output.
    • --human-readable = show file sizes in KB, MB, or GB instead of bytes.
    • --output text = format the output as plain text on separate lines.
    • --content-type text/plain = tell AWS that the uploaded data is plain text (not video or another type).
    • --recursive = perform the command on all files under the specified directory or prefix.
    • --exclude = leave out certain files.
    • --include = include certain files.
    • --delete = remove destination files that no longer exist in the source (used with sync).
    • --metadata = upload custom metadata, such as the true MD5 hash of the file.

    List contents of a bucket
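
    A sketch of what this might look like, assuming a placeholder bucket name of my-bucket:

      aws s3 ls s3://my-bucket --recursive --summarize --human-readable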

    Copy a single file

    If the file is large, the cp command will automatically handle a multipart upload. If the full destination path does not exist, it will be created automatically in the S3 bucket.
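
    A sketch of a single-file copy, assuming a local file named backup.tar and a placeholder bucket name of my-bucket:

      aws s3 cp backup.tar s3://my-bucket/backups/backup.tar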

    Copy multiple files from a local directory

    There are two commands that can be used to copy multiple files: use sync, or use cp with the --recursive switch.
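
    A sketch using sync, assuming a local directory /data/reports and the same placeholder bucket:

      aws s3 sync /data/reports s3://my-bucket/reports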

    OR
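
    The equivalent sketch using cp with the recursive switch:

      aws s3 cp /data/reports s3://my-bucket/reports --recursive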

    Copy only files with .sum extension
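
    A sketch assuming the same placeholder paths; the filters are applied in order, so everything is excluded first and then the .sum files are added back:

      aws s3 cp /data/reports s3://my-bucket/reports --recursive --exclude "*" --include "*.sum"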

    Copy a directory and exclude two files
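
    A sketch with two hypothetical file names, temp.log and notes.txt, that should not be uploaded:

      aws s3 cp /data/reports s3://my-bucket/reports --recursive --exclude "temp.log" --exclude "notes.txt"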

  • Backup Files to S3 using Bash

    Description

    A bash script will be used to copy a file from a Linux server to an S3 bucket. Next, it will run a checksum on the result to verify the upload. Finally, it will output the local file size, the local ETag, the AWS file size, and the AWS ETag value for easy comparison. This should give the end user enough confidence that the uploaded file has maintained its integrity.

    The script assumes you have an AWS account with login credentials, and that the AWS CLI tools are installed with their configuration and credentials saved to /home/user/.aws/config and /home/user/.aws/credentials. These two files are needed to successfully authenticate to the S3 bucket.

    Amazon Web Services S3 Bucket

    S3 is a flat file system; there are no real folders or directories. The “full” name of a file includes all of its subdirectories as well, i.e. “/file1/file2/file3.txt” is the file name, not “file3.txt”. AWS will show the subdirectories as folders in the console for easier human navigation.

    Begin

    Start the script by declaring that it will run under bash, and add any notes to the header.
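
    A minimal sketch of the header; the script name and description are placeholders:

      #!/bin/bash
      #
      # backup_to_s3.sh - copy a backup file to an S3 bucket and verify its ETag.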

    Send any log output to a custom log file, and add code to exit the script if any command in a pipeline fails.
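
    A sketch of those two steps, assuming a hypothetical log file at /var/log/s3_backup.log:

      # Exit on errors, unset variables, and a failure anywhere in a pipeline.
      set -euo pipefail

      # Send stdout and stderr to a custom log file as well as the console.
      exec > >(tee -a /var/log/s3_backup.log) 2>&1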

    Get the number of processing units available and store it in a variable.
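
    One way to do this, using nproc and a placeholder variable name:

      # Number of processing units available on this server.
      cpu_count=$(nproc)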

    Define the remaining local variables.
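
    A sketch of what these might look like; the file path and variable names are placeholders:

      # Local file to be backed up.
      backup_file="/backups/backup.tar"
      file_name=$(basename "${backup_file}")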

    Define the AWS variables.
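
    A sketch with placeholder values for the bucket, destination prefix, and CLI profile:

      # S3 destination details.
      s3_bucket="my-bucket"
      s3_prefix="backups"
      aws_profile="default"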

    When a file is uploaded to AWS, S3 calculates what is called an ETag value. This is a checksum of the uploaded file. To verify file integrity, we will compare the ETag calculated by AWS against the ETag calculated for the local file.

    The ETag will match a true MD5 hash value when the file is uploaded as a single part, which is only possible for files < 5 GB. If the file is > 5 GB, the aws cp command will automatically break the file into 8 MB chunks and upload several chunks in parallel until the upload is complete. An MD5 is calculated for each uploaded chunk, and the resulting ETag is an MD5 of the concatenated chunk checksums, followed by a dash and the part count, rather than a true MD5 hash of the complete file.

    In order to compare the ETags and verify they match, we must calculate the local file's ETag value and then compare it to the value calculated by AWS. The script contains two methods to calculate the ETag value; review them and decide which one you need. In my case, I always know the file I upload will be > 5 GB.

    To calculate the local file's ETag value for files < 5 GB, a plain MD5 hash is all that is needed.
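
    A sketch, assuming the local file path is stored in the backup_file variable defined earlier:

      # For a single-part upload, the ETag is simply the MD5 of the file.
      local_etag=$(md5sum "${backup_file}" | awk '{print $1}')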

    For files > 5 GB, we can use the code from https://gist.github.com/rajivnarayan/1a8e5f2b6783701e0b3717dbcfd324ba.
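
    A minimal sketch of the general approach (MD5 each part, then MD5 the concatenated binary digests and append the part count), assuming an 8 MB part size; it is not the gist's exact code:

      # Compute a multipart-style ETag locally.
      part_size=$((8 * 1024 * 1024))                      # 8 MB, the CLI default
      file_size=$(stat -c%s "${backup_file}")
      parts=$(( (file_size + part_size - 1) / part_size ))

      # MD5 each part, concatenate the binary digests, then MD5 the result.
      local_etag=$(
        for ((i = 0; i < parts; i++)); do
          dd if="${backup_file}" bs="${part_size}" skip="${i}" count=1 2>/dev/null \
            | openssl dgst -md5 -binary
        done | openssl dgst -md5 -hex | awk '{print $NF}'
      )
      local_etag="${local_etag}-${parts}"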

    Next, we will copy the file to the S3 bucket using the cp command. We will use the CLI copy command rather than the s3api command, as the API cannot handle files larger than 5 GB. Copy the content to S3 and tell AWS that the data is just a plain text file.
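
    A sketch of the copy, using the placeholder variables defined above:

      # Upload the file and label the content as plain text.
      aws s3 cp "${backup_file}" "s3://${s3_bucket}/${s3_prefix}/${file_name}" \
        --content-type text/plain --profile "${aws_profile}"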

    Get the ETag value that AWS calculated during the upload.
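
    One way to read it back with s3api head-object (a sketch; tr strips the quotes that surround the returned ETag):

      # Ask S3 for the ETag of the uploaded object.
      aws_etag=$(aws s3api head-object \
        --bucket "${s3_bucket}" --key "${s3_prefix}/${file_name}" \
        --query ETag --output text --profile "${aws_profile}" | tr -d '"')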

    Next, we will get both the local file size and the uploaded file size.
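
    A sketch using stat for the local file and head-object for the uploaded object:

      # Local size in bytes.
      local_size=$(stat -c%s "${backup_file}")

      # Uploaded size in bytes, as reported by S3.
      aws_size=$(aws s3api head-object \
        --bucket "${s3_bucket}" --key "${s3_prefix}/${file_name}" \
        --query ContentLength --output text --profile "${aws_profile}")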

    Finally, display the file sizes and the ETag values of both the uploaded file and the local file side by side for comparison.
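
    A simple side-by-side printout using the placeholder variables above:

      printf 'Local size : %s bytes\nAWS size   : %s bytes\n' "${local_size}" "${aws_size}"
      printf 'Local ETag : %s\nAWS ETag   : %s\n' "${local_etag}" "${aws_etag}"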