JFrog Artifactory on AWS - Migration Best Practices

March 04, 2020

DevOps practices have changed the development landscape drastically over the past several years. For example, in 2014, a survey by New Relic found that only 4% of organizations released code to production multiple times a day, while 72% said they released code less frequently: monthly, quarterly, or, for some, not at all. Fast forward to today, and it’s common to see organizations moving code from dev to production hourly. This change in pace has several implications, and it raises a key question: what is the best way to move the software repository to AWS, Azure, Google Cloud, or another public cloud?

What is a Software Repository?

In software development, an artifact is a binary created by building the source code. By extension, a software repository is where that compiled code, or binary, is stored. At a base level, a software repo supports development in a few important ways:

  • Acts as a shared library where projects are easily accessible by others,
  • Helps maintain version control, and 
  • Allows developers to rebuild from a tested artifact, reducing points of failure (see the sketch after this list)
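
To make that last point concrete, here is a minimal sketch of pulling a pinned, already-tested binary from a repository over HTTP rather than rebuilding it from source. The server URL, repository name, artifact path, and credentials are all hypothetical placeholders.

```python
import requests

# Hypothetical values for illustration; substitute your own server, repo, and artifact.
ARTIFACTORY_URL = "https://artifactory.example.com/artifactory"
REPO = "libs-release-local"
ARTIFACT_PATH = "com/example/app/1.4.2/app-1.4.2.jar"  # a pinned, already-tested version

# Download the exact binary that was tested, rather than rebuilding from source.
resp = requests.get(
    f"{ARTIFACTORY_URL}/{REPO}/{ARTIFACT_PATH}",
    auth=("username", "api-key"),  # replace with real credentials
    timeout=60,
)
resp.raise_for_status()

with open("app-1.4.2.jar", "wb") as f:
    f.write(resp.content)
```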

While there are many different solutions for artifact storage, one of the most popular among enterprises is JFrog Artifactory. It is so ubiquitous, in fact, that it has become the ‘Kleenex’ of software repositories.

What Role Does the Software Repo Play in DevOps?

As development has evolved to embrace DevOps automation, it has become increasingly imperative for the software repo to integrate with automation tools and processes. Most notably, the software repo must integrate with CI/CD tools and pipelines, so that clean, compiled code can be pushed to it by a CI server.
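
For illustration, here is a minimal sketch of the publish step a CI server might run, using Artifactory’s HTTP deploy API (a simple PUT, with a checksum header so the server can verify the upload). The endpoint, repository, paths, and credentials below are hypothetical.

```python
import hashlib
import requests

ARTIFACTORY_URL = "https://artifactory.example.com/artifactory"  # hypothetical endpoint
REPO = "libs-release-local"
LOCAL_FILE = "build/libs/app-1.4.3.jar"
TARGET_PATH = "com/example/app/1.4.3/app-1.4.3.jar"

# Compute a SHA-1 checksum so Artifactory can verify the upload.
with open(LOCAL_FILE, "rb") as f:
    data = f.read()
sha1 = hashlib.sha1(data).hexdigest()

# Deploy the binary with a simple HTTP PUT (Artifactory's deploy API).
resp = requests.put(
    f"{ARTIFACTORY_URL}/{REPO}/{TARGET_PATH}",
    data=data,
    headers={"X-Checksum-Sha1": sha1},
    auth=("ci-user", "api-key"),  # in a real pipeline, inject these as secrets
    timeout=300,
)
resp.raise_for_status()
print("Deployed:", resp.json().get("downloadUri"))
```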

Software Repositories on AWS

While JFrog Artifactory is available both on-premises and in the cloud as a SaaS offering, we are finding increased demand for cloud-based software repository setups. Why? As the pace of code flow increases, so does the demand on development’s on-prem systems.

While a cloud-based software repo offers high availability and continuous access to artifacts, an on-premises setup can be much less reliable. (We’ve all been subject to a down network.) And the more often you need to call downstream for an artifact, the more often you roll the dice. Also, many organizations these days have their entire IT footprint on a public cloud like AWS, so it is very convenient for them to set up all of their development tools, including software repositories, code repositories, CI/CD servers, issue trackers, and so on, within the same cloud setup.

Moving the software repository to the cloud removes these on-premises reliability challenges from the process. This, in turn, increases automation while making the transfer of binaries more stable and reliable. It also offers greater system security and the ability to better manage a wide variety of infrastructure configurations.

In our experience, customers want to move their artifact storage solution to the cloud for several reasons:

  • To gain ‘unlimited’ storage capacity. Configuring a data backend on Google Cloud Storage or Amazon S3 provides virtually infinite capacity on a highly reliable storage backend, all with very little up-front cost. (A sizing sketch follows this list.)
  • To gain enhanced scalability for the artifact storage server itself. Depending on licensing terms, one can easily architect a solution for high availability and autoscaling, so that teams needn’t overprovision resources yet can still meet demand when it arises.
  • When most other infrastructure, like CI/CD servers, is already in the cloud, it is convenient to keep the artifact store on the same network, especially if artifact storage doubles as a container registry.
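
Before any migration, it helps to know how much data you are actually moving. As mentioned in the first bullet above, the sketch below queries Artifactory’s storage summary REST endpoint. Exact response field names can vary between Artifactory versions, so treat the keys used here as assumptions to verify against your release.

```python
import requests

ARTIFACTORY_URL = "https://artifactory.example.com/artifactory"  # hypothetical

# Storage summary API: useful for sizing the migration target up front.
# (Field names below are assumptions; verify them against your Artifactory version.)
resp = requests.get(
    f"{ARTIFACTORY_URL}/api/storageinfo",
    auth=("admin-user", "api-key"),
    timeout=60,
)
resp.raise_for_status()
info = resp.json()

print("Total binaries size:", info["binariesSummary"]["binariesSize"])
for repo in info["repositoriesSummaryList"]:
    print(f'{repo["repoKey"]}: {repo["usedSpace"]} ({repo["filesCount"]} files)')
```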

Manufacturing JFrog Artifactory Migration

We recently migrated Artifactory for a large manufacturer. In this instance, the repository was moved to a new deployment pipeline and the data (all 10+TB of it) was migrated to a new set of instances. Since the migration involved downtime, it took place over a weekend. Using the Artifactory backup tool, we took backups of the company’s repositories. 
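
For reference, a full export like this can be triggered through Artifactory’s system export REST endpoint. The sketch below shows the general shape of such a call; the export path and option values are hypothetical and should be checked against the documentation for your Artifactory version.

```python
import requests

ARTIFACTORY_URL = "https://artifactory.example.com/artifactory"  # hypothetical

# Trigger a full system export to a path on the Artifactory server itself.
# Option values are illustrative; consult your version's Export System API docs.
payload = {
    "exportPath": "/backup/artifactory-export",  # must be writable by Artifactory
    "includeMetadata": True,
    "createArchive": False,   # archiving 10+ TB would add significant time
    "excludeContent": False,
}
resp = requests.post(
    f"{ARTIFACTORY_URL}/api/export/system",
    json=payload,
    auth=("admin-user", "api-key"),
    timeout=600,
)
resp.raise_for_status()
```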

Two important lessons learned from this exercise were:

  1. When conducting backups, we highly recommend taking them one repository at a time: back up one repo, verify it, and then move on to the next (see the sketch after this list).
  2. When Artifactory is used as a Docker registry, Docker images can quickly grow to consume a lot of disk space. To address this challenge, we recommend that users clean up unused images, take snapshots/backups on a per-team basis, and then restore them on the new setup.
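
Here is a minimal sketch of that repo-by-repo approach, using Artifactory’s file list API to enumerate a repository’s contents and then downloading each file before moving on to the next repo. The server URL, credentials, repository keys, and backup path are hypothetical placeholders.

```python
import os
import requests

ARTIFACTORY_URL = "https://artifactory.example.com/artifactory"  # hypothetical
AUTH = ("admin-user", "api-key")

def backup_repo(repo_key: str, dest_root: str) -> None:
    """Download every file in one repository before moving to the next."""
    # File list API: deep listing of all files (not folders) in the repo.
    listing = requests.get(
        f"{ARTIFACTORY_URL}/api/storage/{repo_key}?list&deep=1&listFolders=0",
        auth=AUTH, timeout=120,
    )
    listing.raise_for_status()
    for item in listing.json()["files"]:
        rel_path = item["uri"].lstrip("/")
        local_path = os.path.join(dest_root, repo_key, rel_path)
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        # Stream each file to disk to avoid holding large binaries in memory.
        with requests.get(f"{ARTIFACTORY_URL}/{repo_key}/{rel_path}",
                          auth=AUTH, stream=True, timeout=600) as r:
            r.raise_for_status()
            with open(local_path, "wb") as f:
                for chunk in r.iter_content(chunk_size=1 << 20):
                    f.write(chunk)

# Back up one repo at a time, in sequence, as recommended above.
for repo in ["libs-release-local", "libs-snapshot-local", "docker-local"]:  # example keys
    backup_repo(repo, "/backup/repos")
```

For the Docker cleanup in lesson 2, candidate images can be identified with an AQL query on download statistics (for example, items not downloaded within a chosen retention window) before taking the per-team snapshots.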

Financial Services JFrog Artifactory Migration

All of this leads us to a common challenge faced by many organizations moving their software repository deployment from on-prem to the cloud: what is the best-practice way to migrate the solution? This very question was recently asked of us by a national bank that sought to bolster its DevOps efforts with a SaaS software repository. Looking to move its build artifacts from one SaaS vendor to another, the bank’s team reached out to our cloud consulting team for help migrating its artifacts from its older JFrog Artifactory onto a new SaaS JFrog Artifactory endpoint in AWS. Once the migration was complete, it would decommission the old repository endpoint.

The bank had 34 artifacts and about 24 build pipelines on an old Artifactory server, all of which needed to be moved to a new Artifactory server hosted on AWS. Additionally, the bank had about 50 container images stored in Amazon ECR that it wanted to migrate to the new Artifactory endpoint as well. Since everything was moving to a new endpoint, we made sure that:

  1. All upstream build pipelines were updated to store new artifacts on the new Artifactory endpoint.
  2. All downstream build pipelines were updated to consume/download build artifacts from the new Artifactory endpoint. In this process, we had to set up proper access controls on the new Artifactory endpoint and handle credential management within all the build pipelines.
  3. Since the container registry was also migrated to a new endpoint, the build pipelines that create and push container images were updated to use the new endpoint.
  4. Finally, container image consumers (e.g., services running on Kubernetes) were updated to pull container images from the new registry, so that new services could be instantiated on Kubernetes using image artifacts from JFrog Artifactory (see the image-migration sketch after this list).
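
The ECR-to-Artifactory image move itself can be scripted with standard Docker CLI commands: pull each image from the old registry, retag it for the new one, and push. The registry hostnames and image names below are hypothetical, and the sketch assumes `docker login` has already been run against both registries.

```python
import subprocess

# Hypothetical registry endpoints and images, for illustration only.
ECR = "123456789012.dkr.ecr.us-east-1.amazonaws.com"
ARTIFACTORY_DOCKER = "mycompany.jfrog.io/docker-local"  # new Artifactory Docker registry

IMAGES = ["payments-api:1.7.0", "web-frontend:2.3.1"]  # example image:tag pairs

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Assumes `docker login` has already been run against both registries.
for image in IMAGES:
    src = f"{ECR}/{image}"
    dst = f"{ARTIFACTORY_DOCKER}/{image}"
    run(["docker", "pull", src])      # pull from the old registry (ECR)
    run(["docker", "tag", src, dst])  # retag for the new Artifactory endpoint
    run(["docker", "push", dst])      # push to the new registry
```

On the Kubernetes side, the corresponding change is updating each workload’s image reference (and its image pull secret) to point at the new registry.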

As development releases to production ever faster, it is important to have a software repository that can keep pace while improving the reliability of the release process. We recommend that DevOps organizations move their Artifactory deployments to a cloud-based offering, where they can be tightly integrated with pipelines and other DevOps automation.

Need help with your Artifactory migration approach? Reach out to our DevOps consulting team today.


Gaurav Rastogi

Gaurav Rastogi is a Solutions Architect at NTT DATA. With more than twenty years of industry experience and AWS knowledge, he helps enterprises map their requirements to available technologies, secure architecture patterns, and workflows that address the complex integration of multiple services and third-party technologies on public cloud platforms. Rastogi holds multiple AWS certifications.
