February 2019

dbSeer’s customer, Telarix, wanted to increase productivity and quality by moving to an environment that is accessible to project owners and can easily be scaled up or down as needs change. dbSeer migrated Telarix’s SQL Servers and .NET applications to the AWS cloud on Amazon EC2 for Windows Server, and successfully completed all planned SQL Server migrations to AWS in two months. Learn more about the process by downloading the White Paper!


As part of their digital transformation efforts, companies are moving their infrastructure and applications to the cloud at a faster rate than ever.

There are many approaches to migrating to the cloud – each with its own benefits and drawbacks. It’s important to be knowledgeable about each option in order to make the best decision possible for your organization.

The three primary choices are Rehost, Replatform, and Refactor, which we will walk through below.

Rehost

Rehosting, often called lift and shift, is the simplest of the migration options: applications are moved from on-premises to the cloud without any code modification. This is considered a beginner’s approach to migration. Its benefits include speed, minimal resource requirements, and little application disruption, and it is cheaper than maintaining an on-premises environment. Because the migration is so simple, however, companies don’t typically benefit from cloud-native features like elasticity, which the other migration approaches can provide.

Overall, if a company is looking for a quick and easy migration that doesn’t disrupt the existing application workflow, rehosting is the best choice. It is a fast solution for organizations that need to reduce their on-premises physical infrastructure costs as soon as possible. Thankfully, companies can always re-architect and optimize their applications once they are in the cloud.

 

Replatform

Replatforming involves moving a company’s assets to the cloud with a little up-versioning: a portion of the application is changed or optimized before the move. Even a small amount of cloud optimization (without changing the core application structure) can lead to significant benefits. This approach takes advantage of containers and VMs, changing application code only where needed to use base platform services.

Replatforming provides a suitable middle ground between rehosting and refactoring. It allows companies to take advantage of cloud functionality and cost optimization without committing the resources required for refactoring. It also lets developers keep using the tools they are accustomed to, including their development frameworks and legacy programming languages. The trade-off is that this approach is slower than rehosting and doesn’t provide as many benefits as refactoring.

Organizations should choose this approach if they want to leverage more cloud benefits and minor changes won’t alter how their applications function. It is also a good fit when complex on-premises infrastructure is limiting scalability and performance: slight modifications that let the application harness these capabilities in the cloud can be very worthwhile.

 

Refactor

The most complex option is refactoring, which involves re-architecting and recoding some portion of an existing application. Unlike replatforming, this option makes major changes to the application configuration and the application code in order to take full advantage of cloud-native frameworks and functionality. Because of this, refactoring typically offers the lowest monthly cloud costs; customers who refactor maximize operational cost efficiency in the cloud. The trade-off is that this approach is also very time-consuming and resource-intensive.

Companies should choose to refactor when there is a strong business need for features and performance that are only available in the cloud, such as scalability and elasticity. Refactoring puts a company in the best position to boost agility and improve business continuity.

 

No single migration approach is the best option for every case. Rather, companies should weigh their short- and long-term business goals and choose what is right for their current situation. If you need help deciding, contact us to discuss options – we’re always happy to talk!

 


If you’re here, you’re probably experiencing a common issue: trying to access a certain port on an EC2 instance located in a private subnet of a Virtual Private Cloud (VPC). A couple of months ago, we got a call from one of our customers who was experiencing this same issue. They wanted to open up the API servers in their VPC to one of their customers, but they didn’t know how. In particular, they were looking for a solution that wouldn’t compromise the security of their environment. We realized this issue is not unique to our customer, so we thought a blog post explaining how we solved it would be helpful!

To provide some context: once an API server sits in a private subnet of your VPC, it is closed to the outside world. No one can access or reach that server because of the firewall around it. There are a few ways around this, including Virtual Private Network (VPN) connections to your VPC, which let you open up private access. Unfortunately, this is not a viable solution if you need to open up your API server to the world, which was the case with our customer. The goal was to provide direct access from the internet, outside the VPC, for any user without a VPN connection.

To solve this issue for our customer, one of the architecture changes we recommended was adding an internet-facing AWS TCP Network Load Balancer in the public subnets of the VPC. In addition to this load balancer, we also needed to create an instance-based target group.

Keep reading to learn how you can do this – we even included an architecture diagram to make things easier! Please note that our example includes fake IP addresses.

Problem: Accessing an API endpoint on an EC2 instance in a private subnet from the internet.

Suggested AWS Architecture Diagram:

[Diagram: internet-facing TCP Network Load Balancer in the public subnets routing to API servers in the private subnets across two Availability Zones, each with its own NAT gateway]

Features of our diagram:

  • Multi-AZ: we used a private and a public subnet in the same VPC in each of two Availability Zones.
  • Multiple EC2 instances (API servers): we deployed an API server in the private subnet of each Availability Zone.
  • Multiple NAT gateways: the NAT gateways allow the EC2 instances in the private subnets to connect out to the internet. We deployed one NAT gateway in the public subnet of each Availability Zone to achieve high availability.
  • TCP load balancer health checks: the TCP load balancer routes user requests only to healthy API servers. If one AZ goes down, the other AZ can still handle user requests.

Although we did not make this change, you can also implement a multi-Region architecture to handle a Region failure scenario and achieve even higher availability.

VPC Configurations:

Subnet           | AZ         | CIDR          | IGW Route Out | NAT GW Route Out
public-subnet-a  | us-east-1a | 172.16.0.0/24 | Yes           | No
public-subnet-b  | us-east-1b | 172.16.3.0/24 | Yes           | No
private-subnet-a | us-east-1a | 172.16.1.0/24 | No            | Yes
private-subnet-b | us-east-1b | 172.16.2.0/24 | No            | Yes
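
For readers who script their infrastructure, here is a minimal boto3 sketch of creating these four subnets. The VPC ID is a placeholder, and the route tables that send public-subnet traffic to the internet gateway and private-subnet traffic to the NAT gateways would still need to be created and associated separately.

```python
# Sketch only: create the four subnets from the table above with boto3.
# Assumes existing AWS credentials and an existing VPC; the VPC ID below
# is a placeholder, and the Name tags mirror the table for readability.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

VPC_ID = "vpc-0123456789abcdef0"  # placeholder: replace with your VPC ID

subnets = [
    ("public-subnet-a",  "us-east-1a", "172.16.0.0/24"),
    ("public-subnet-b",  "us-east-1b", "172.16.3.0/24"),
    ("private-subnet-a", "us-east-1a", "172.16.1.0/24"),
    ("private-subnet-b", "us-east-1b", "172.16.2.0/24"),
]

subnet_ids = {}
for name, az, cidr in subnets:
    resp = ec2.create_subnet(VpcId=VPC_ID, AvailabilityZone=az, CidrBlock=cidr)
    subnet_id = resp["Subnet"]["SubnetId"]
    ec2.create_tags(Resources=[subnet_id], Tags=[{"Key": "Name", "Value": name}])
    subnet_ids[name] = subnet_id

print(subnet_ids)
```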

 

EC2 Configuration:

Name        | AZ         | Subnet           | Private IP  | Security Group
API Server1 | us-east-1a | private-subnet-a | 172.16.1.** | Allow inbound traffic to TCP port 5000 from 0.0.0.0/0 or a specific source IP address on the internet
API Server2 | us-east-1b | private-subnet-b | 172.16.2.** | Allow inbound traffic to TCP port 5000 from 0.0.0.0/0 or a specific source IP address on the internet
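
As a rough sketch (not the customer’s exact setup), the security group rule from the table can be applied with boto3 as shown below; the security group ID is a placeholder, and you would substitute a narrower CIDR for 0.0.0.0/0 to restrict access to a single customer. Keep in mind that with instance targets behind a Network Load Balancer the client’s source IP is preserved, so the rule must allow the actual client addresses rather than the load balancer’s.

```python
# Sketch only: allow inbound TCP 5000 on the API servers' security group.
# The security group ID is a placeholder; tighten the CIDR as needed.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

API_SG_ID = "sg-0123456789abcdef0"  # placeholder: SG attached to both API servers

ec2.authorize_security_group_ingress(
    GroupId=API_SG_ID,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 5000,
            "ToPort": 5000,
            "IpRanges": [
                {"CidrIp": "0.0.0.0/0", "Description": "API access on TCP 5000"}
            ],
        }
    ],
)
```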

 

Solution:

  • Create a TCP network load balancer:
    • Internet-facing
    • Add a listener on TCP port 5000
    • Choose public subnets in the same Availability Zones (AZs) as your private subnets
  • Create an instance-based target group:
    • Use the TCP protocol on port 5000
    • For health checks, use either TCP on port 5000 or an HTTP health check path
    • Add the two API servers as target instances to achieve high availability and balance the request load between servers
  • Once the target instances (API servers) become healthy, you will be able to access the API endpoints directly from the public internet using the new TCP load balancer's DNS name or Elastic IP address on port 5000 (see the scripted sketch below)
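
The steps above can also be scripted. Below is a minimal boto3 sketch, assuming the subnets, instances, and security groups already exist; every ID and name in it is a placeholder rather than a value from the customer’s environment.

```python
# Sketch only: internet-facing TCP Network Load Balancer with an
# instance-based target group on port 5000, following the steps above.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Placeholders: substitute your own resource IDs.
PUBLIC_SUBNETS = ["subnet-0aaaaaaaaaaaaaaa1", "subnet-0bbbbbbbbbbbbbbb2"]
API_INSTANCES = ["i-0aaaaaaaaaaaaaaa1", "i-0bbbbbbbbbbbbbbb2"]
VPC_ID = "vpc-0123456789abcdef0"

# 1. Internet-facing network load balancer spanning both public subnets.
nlb = elbv2.create_load_balancer(
    Name="api-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=PUBLIC_SUBNETS,
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]
nlb_dns = nlb["LoadBalancers"][0]["DNSName"]

# 2. Instance-based target group on TCP 5000 (TCP health checks by default).
tg = elbv2.create_target_group(
    Name="api-servers",
    Protocol="TCP",
    Port=5000,
    VpcId=VPC_ID,
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# 3. Register both API servers so requests are balanced across the two AZs.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": i, "Port": 5000} for i in API_INSTANCES],
)

# 4. Listener on TCP 5000 that forwards to the target group.
elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="TCP",
    Port=5000,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)

print(f"API reachable at {nlb_dns}:5000 once targets pass health checks")
```

Once the registered targets pass their health checks, requests to the load balancer's DNS name on port 5000 are forwarded to whichever API server is healthy.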

 

References:

https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html
