Deploying to AWS S3 from Travis CI

These steps were used to set up the Door43 Jekyll codebase to build the live.door43.org site and deploy it to an AWS S3 bucket.

Configure .travis.yml File

Here is a copy of the .travis.yml file from the door43.org repo:

language: ruby
rvm:
- 2.1
script: ./cibuild.sh
env:
  global:
  - NOKOGIRI_USE_SYSTEM_LIBRARIES=true
sudo: false
git:
  depth: 10
before_deploy: pip install --user s3cmd
deploy:
- provider: script
  skip_cleanup: true
  script: ./s3_dev_push.sh
  on:
    branch: develop
- provider: script
  skip_cleanup: true
  script: ./s3_prod_push.sh
  on:
    branch: master

There are a few important parts to this:

  • script: ./cibuild.sh - This is what builds the Jekyll site. See the file here. Note that this runs a lint check on the Markdown files and also an HTML check on the site after it is built. This is the build + test suite for the site.
  • before_deploy: pip install --user s3cmd - I’m using s3cmd to deploy to S3, so I need it installed in the Travis build environment (this only happens before deployment, so if the build/test fails, it is never installed). The reason for using s3cmd is that I need the deployment to delete files as well as update and add new ones. The built-in Travis deploy to S3 will add and update files, but it will not delete them.
  • The deploy section has 2 very similar script entries. The main difference here is that our master branch is set to deploy to our production environment and the develop branch is set to deploy to our development environment.
    • skip_cleanup: true - This tells Travis to not delete all the build files before deployment. Since Jekyll is a static site generator, the HTML files that are built are exactly what we need to be deployed.
    • The s3_dev_push.sh and s3_prod_push.sh files are almost identical. The important differences are 1. the bucket name, and 2. the config file loaded. The config files are what contain the AWS credentials; encrypting them is explained in the next section.
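As a rough sketch, a script in the style of s3_dev_push.sh might look like the following. The bucket name, site directory, and config filename here are placeholders, not the actual Door43 values:

```shell
#!/usr/bin/env bash
# Sketch of an s3_dev_push.sh-style deploy script; all names are placeholders.
set -e

# Decrypt the credentials bundle (the exact openssl command is printed by
# `travis encrypt-file`; see the next section) and unpack the config files.
tar xvf secrets.tar

# `pip install --user s3cmd` puts the binary under ~/.local/bin.
# --delete-removed is the key option: it deletes bucket objects whose source
# files no longer exist, which the built-in Travis S3 deploy will not do.
~/.local/bin/s3cmd sync --config=s3cfg-dev --delete-removed \
    _site/ s3://dev-bucket-placeholder/
```

`_site/` is Jekyll's default output directory; the prod script would differ only in the config file and bucket name.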

Encrypting the AWS Credentials

I opted to follow the Travis CI Encrypting Multiple Files document to encrypt the AWS credentials for deployment. Here is what I did:

  1. Open .gitignore and add the following entries:

    s3cfg-dev
    s3cfg-prod
    secrets.tar
    
  2. Create the s3cfg-dev and s3cfg-prod configuration files in the top level of the repo. These are just standard configuration files for the s3cmd utility. The only important bits in them are the access key and the secret access key. You may want to test these before continuing to make sure the credentials are correct.

  3. Now put those files in a combined secrets.tar file, like so: tar cvf secrets.tar s3cfg-dev s3cfg-prod.

  4. Encrypt the tar file with travis encrypt-file secrets.tar. This command will output a decryption command that you need in your deployment script. (You’ll need the travis CLI client at this point.)

  5. Add the decryption command that you got from the previous step to your s3_dev_push.sh and s3_prod_push.sh deployment files.

  6. You can add the encrypted file to your git tree now, git add secrets.tar.enc.

  7. Now you can grab some popcorn, push up to GitHub, and watch the magic happen in the Travis CI log! (Or, if it fails, bang your head against the keyboard and then ask for help.)
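Steps 2–4 can be rehearsed locally before touching real credentials. Here is a sketch with dummy values (every credential below is a placeholder; the encrypt step is shown only in comments because it needs the travis CLI and a real repo):

```shell
#!/usr/bin/env bash
# Walk-through of bundling the s3cmd config files (steps 2-3) with dummy data.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Minimal s3cmd config; only the credential settings matter here.
cat > s3cfg-dev <<'EOF'
[default]
access_key = DEV_ACCESS_KEY_PLACEHOLDER
secret_key = DEV_SECRET_KEY_PLACEHOLDER
EOF
sed 's/DEV_/PROD_/' s3cfg-dev > s3cfg-prod

# Step 3: bundle both configs into one tar file.
tar cvf secrets.tar s3cfg-dev s3cfg-prod

# Step 4 (not run here) would be: travis encrypt-file secrets.tar
# The decryption command it prints has the general shape (XXXX is a
# repo-specific hex id):
#   openssl aes-256-cbc -K $encrypted_XXXX_key -iv $encrypted_XXXX_iv \
#     -in secrets.tar.enc -out secrets.tar -d

tar tf secrets.tar
```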

Deploying to AWS Lambda from Travis CI using Apex

Contributed by Phil

  1. Add a requirements.txt file to the directory containing the function, listing the Python requirements that must be installed alongside the function.

  2. Add these to .apexignore:

    *.dist-info/
    .gitignore
    *.pyc
    requirements.txt
    
  3. Create .gitignore in the same directory as .apexignore. Add the directories created for the dependencies in requirements.txt and the following:

    *.dist-info/
    *.pyc
    
  4. Create script travis-install-apex.sh and make it executable. See the example here. This will download the latest Apex executable and install it in the parent directory of the repository on Travis CI.

  5. Create script install-requirements.sh and make it executable. See the example here. Travis CI will use this to install the Python requirements from the requirements.txt file you created in step 1. This removes the need to check the dependency files into Git.

  6. Create script deploy.sh and make it executable. See the example here. After all the tests have passed, this script will check if the function is ready to be deployed to AWS Lambda. If it is ready, it will deploy the function using Apex.

  7. Add "./travis-install-apex.sh" and the install for awscli to the before_install: section of .travis.yml. The quotation marks are required.

    before_install:
      - pip install awscli
      - "./travis-install-apex.sh"
    
  8. Add "./install-requirements.sh" to the install: section of .travis.yml. The quotation marks are required.

    install:
      - "./install-requirements.sh"
      - pip install coveralls
    
  9. Add "./deploy.sh" to the after_success: section of .travis.yml. The quotation marks are required.

    after_success:
      - coveralls
      - "./deploy.sh"
    
  10. Add the following global environment variable to .travis.yml. This will prevent Python from creating *.pyc files:

    env:
      global:
        - PYTHONDONTWRITEBYTECODE=true
    
  11. Add AWS_REGION, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY to the Environment Variables section of the repository settings on travis-ci.org.
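The effect of the variable from step 10 is easy to verify locally; any non-empty value disables *.pyc creation, and Python exposes the setting as sys.dont_write_bytecode:

```shell
# Quick local check of PYTHONDONTWRITEBYTECODE (step 10).
PYTHONDONTWRITEBYTECODE=true python3 -c \
    'import sys; print(sys.dont_write_bytecode)'
# prints: True
```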

Notes:

  • I tried using the Travis CLI to encrypt the AWS security values and include them in the .travis.yml, but I couldn’t get it to work. There were no error messages, but the environment variables were not created either.
  • It looks like local scripts must always be enclosed in quotation marks, even though the Travis CI documentation does not show this.
  • There is a deploy feature that can be included in the .travis.yml file but it won’t work for this type of deployment. After the tests have succeeded and before the deploy script is run, Travis CI resets the git repository and deletes all files that are not in the repository, including the function dependencies that must be uploaded with the function.
  • Response from Rich: I was able to use Environment Variables to set different AWS variables for both development and master. The “deploy:” section also works for me, as it packages up all the Python requirements, so I’m not sure why it failed for you.
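For reference, a deploy.sh guard along the lines described in step 6 might look like this. This is a sketch, not the actual script: the master-only branch check and the Apex location are assumptions based on steps 4 and 6, while TRAVIS_BRANCH and TRAVIS_PULL_REQUEST are standard Travis CI environment variables:

```shell
#!/usr/bin/env bash
# Sketch of a deploy.sh run from after_success: deploy with Apex only on
# direct pushes to master, after the test suite has passed.
set -e

if [ "$TRAVIS_PULL_REQUEST" != "false" ]; then
    echo "Skipping deploy: pull request build."
    exit 0
fi

if [ "$TRAVIS_BRANCH" != "master" ]; then
    echo "Skipping deploy: branch is $TRAVIS_BRANCH, not master."
    exit 0
fi

# Apex was installed into the parent directory by travis-install-apex.sh.
../apex deploy
```

Because this runs in after_success rather than deploy:, the dependency files installed into the function directory are still present when Apex packages the function.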