
PostgreSQL 11 became available on Amazon Relational Database Service (RDS) in March. Have you tried it? We have, and we're here to report all of the awesome enhancements. As a preview: there are major improvements to the table partitioning system, support for stored procedures capable of transaction management, improved query parallelism, parallelized data-definition capabilities, and just-in-time (JIT) compilation for accelerating the execution of expressions in queries. We'll now go more in depth on each of these improvements, and by the end of this, trust me, you'll want to go give it a try!

Improvements to partitioning functionality

  • Partitions can now be created by hashing a key column (hash partitioning)
  • Support for PRIMARY KEY, FOREIGN KEY, and indexes on partitioned tables
  • Partitioned tables can have a “default” partition to store data that does not match any of the other defined partitions
  • On UPDATE, rows are moved to the appropriate partition if the partition key column data changes
  • Faster partition elimination during query processing and execution speeds up SELECT queries
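Hash partitioning, the first item above, spreads rows across a fixed number of partitions by hashing the key. As a rough illustration of the routing idea (using Python's built-in `hash()`, not PostgreSQL's actual hash function):

```python
def route_to_partition(key, modulus):
    """Pick a partition the way PARTITION BY HASH does with (MODULUS, REMAINDER):
    every key deterministically lands in exactly one of `modulus` partitions."""
    return hash(key) % modulus

# Rows with the same key always land in the same partition,
# and keys spread roughly evenly across the partitions.
partitions = {r: [] for r in range(4)}
for customer_id in range(1000):
    partitions[route_to_partition(customer_id, 4)].append(customer_id)
```

In SQL this corresponds to creating the parent table with `PARTITION BY HASH (key)` and one child partition per remainder value.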

Lightweight and Fast ALTER TABLE for NOT NULL Column with DEFAULT Values

  • With this new version, ALTER TABLE no longer rewrites the table when adding a column with a non-null default value. This helps significantly when altering tables with millions of records, where a new column is added with a default value.
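Conceptually, the speedup comes from recording the default once in the catalog instead of rewriting every stored row; rows written before the ALTER have the default filled in at read time. A toy Python model of the idea (not PostgreSQL's actual implementation):

```python
class Table:
    """Toy model of PostgreSQL 11's 'fast default': adding a column with a
    default is O(1) catalog bookkeeping, not an O(rows) table rewrite."""

    def __init__(self, columns):
        self.columns = list(columns)
        self.late_defaults = {}   # defaults for columns added after rows existed
        self.rows = []            # each row stores only the columns it was written with

    def insert(self, row):
        self.rows.append(dict(row))

    def add_column_with_default(self, name, default):
        self.columns.append(name)
        self.late_defaults[name] = default   # no stored row is touched

    def read(self, i):
        row = self.rows[i]
        return {c: row.get(c, self.late_defaults.get(c)) for c in self.columns}

t = Table(["id"])
t.insert({"id": 1})                          # written before the ALTER
t.add_column_with_default("status", "new")   # instant, regardless of row count
```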

Stored Procedures with Transaction Control

  • Finally, Postgres 11 supports creating stored procedures. Prior versions of Postgres supported functions; however, functions cannot manage transactions. With stored procedures, you can now COMMIT and ROLLBACK transactions within the procedure.

Improvements to Parallelism

  • CREATE INDEX can now use parallel processing while building a B-tree index
  • Parallelization is now possible in CREATE TABLE…AS, CREATE MATERIALIZED VIEW, and certain queries using UNION
  • Hash joins performed in parallel
  • Improvements to partition scans to more efficiently use parallel workers
  • Sequential scans now perform better with many parallel workers

Optimizer

  • Selection of the most common values (MCVs) has been improved. Previously, MCVs were chosen based on their frequency compared to all column values. In Postgres 11, MCVs are chosen based on their frequency compared to the non-MCV values
  • Selectivity estimates for >= and <= have been improved, which improves the performance of queries using BETWEEN
  • Improvements to the optimizer row counts for EXISTS and NOT EXISTS queries
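To make the MCV change concrete, here is a deliberately simplified model of the two selection rules (not the actual planner code). It shows why the new rule can keep moderately common values that the old rule discarded:

```python
from collections import Counter

def pick_mcvs(values, new_rule=False):
    """Simplified sketch of MCV selection (not the real planner logic).
    Old rule: a value is 'common' if it beats the average frequency over all
    distinct values.  New rule (Postgres 11): it only has to beat the average
    frequency of the values *not yet* selected as MCVs."""
    counts = Counter(values)
    total, n_distinct = len(values), len(counts)
    mcvs = []
    for value, count in counts.most_common():
        if new_rule:
            remaining = total - sum(c for _, c in mcvs)
            remaining_distinct = n_distinct - len(mcvs)
            threshold = remaining / remaining_distinct
        else:
            threshold = total / n_distinct
        if count <= threshold:
            break
        mcvs.append((value, count))
    return [v for v, _ in mcvs]

# 'b' is clearly more common than 'c', but under the old rule it is
# drowned out by the overall average that 'a' dominates.
data = ["a"] * 12 + ["b"] * 5 + ["c"] * 3
```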

Optimal Just-in-Time (JIT) Compilation

  • Just-in-Time (JIT) compilation is the process of turning some form of interpreted program evaluation into a native program, and doing so at run time. JIT is beneficial for CPU-bound queries. JIT currently aims to optimize two essential parts of query execution: expression evaluation and tuple deforming.

Expression evaluation is used to evaluate WHERE clauses, target lists, aggregates, and projections. It can be accelerated by generating code specific to each case.

Tuple deforming is the process of transforming an on-disk tuple into its in-memory representation. It can be accelerated by creating a function specific to the table layout and the number of columns to be extracted.

I know you won’t believe it, but these aren’t even all of the benefits of the new PostgreSQL 11. There are also many improvements to window functions, indexes, and monitoring that you will find greatly beneficial. If that doesn’t get you excited, I don’t know what will! The best way to use PostgreSQL 11 is with Amazon RDS. Reach out to our team if you’d like to get started with AWS or want to unlock the full potential of your current environment!


Let’s admit it – managing licenses is difficult. This complex process often involves manual or ad-hoc reporting that can quickly become outdated or result in inaccuracies. Within AWS, licenses are used across a variety of tools, which only makes the situation worse. We’ve heard this complaint many times from our customer base, and decided it was time to introduce a solution: AWS License Manager!

 

This service is available to all AWS customers and provides an easy way to manage licenses in AWS and on-premises servers from software vendors like Microsoft, SAP, Oracle, and IBM. Here are four reasons why you should take advantage of this service:

 

 

  1. It’s Simple

AWS License Manager gives you a single, centralized view that allows tracking of all the licenses across AWS and on-premises. You can track how many licenses are being used, how many are available, and how many have breached limits – all on the built-in dashboard. AWS License Manager integrates with AWS services to simplify management of licenses across multiple AWS accounts, IT catalogs, and on-premises from one AWS account.

  2. You Have Control

As an Administrator, you can create your own custom licensing rules that fit the terms of your licensing agreements, giving you control over license usage. These rules can be made centrally or you can specify different rules for various groups in your organization. These rules will be enforced when an EC2 instance is launched. With AWS License Manager, you have visibility over how software licenses are used and can prevent misuse before it happens.

  3. Lower Costs

Say goodbye to wondering if the right number of licenses are being used or worrying if additional licenses are required! AWS License Manager does all of this for you, saving you the time and costs of tracking and managing licenses. You can also enforce controls on software usage to reduce the chance of overages.

  4. Reduced Risk of Violations

The consolidated view of your licenses reduces the risk of non-compliance. Additionally, the rules administrators set can prevent violations such as using more licenses than an agreement stipulates or reassigning licenses to different servers on a short-term basis. It’s possible to limit a licensing breach by stopping the instance from launching or by automatically notifying the administrators about the infringement.

Well, there you have it – four reasons why you should use AWS License Manager and 0 reasons why you shouldn’t (because they don’t exist)! Do yourself a favor and start using this service to keep you compliant and to save you time, effort, and money.

If you have issues with set-up or have questions about the service, feel free to contact us!


It’s been over 11 years since AWS began supporting Microsoft Windows workloads. In that time, AWS has innovated constantly to maintain its title as the #1 cloud provider for these workloads. You can run the full Windows Stack on AWS, including Active Directory, SQL Server, and System Center.

Many third parties have completed studies that show why AWS is superior when it comes to performance, cost, and reliability. In 2018, the next-largest cloud provider had almost 7x more downtime hours than AWS. Additionally, SQL Server on AWS boasts a 2-3x better performance record. When costs are calculated correctly, SQL Server running on AWS’s competitor’s platform would be almost twice as much. This includes the cost of storage, compute, and networking.

Reliability is the quality that puts AWS high above the rest. AWS has 64 availability zones within 21 different regions. AWS customers can deploy their applications across multiple zones in the same region for fault tolerance and latency. Instead of having a single-region instance that scales up, AWS’s services are divided into smaller cells that scale out within a region. This design reduces the effects when a cell-level failure occurs. Notably, AWS has never experienced a network event that spans multiple regions.

When migrating your SQL Server workloads to AWS, there are a few things you should consider. It’s important to optimize your total cost of ownership, which includes optimizing your workloads to benefit from the scalability and flexibility of the cloud. On-premises servers are not optimized; in fact, 84% of workloads are over-provisioned. Many Windows and SQL Server 2008 workloads are running on older and slower server hardware. To optimize your cloud migration, you need to size your workloads for performance and capability, not by physical servers. To reduce cost, you can also decrease the number of licenses you use by optimizing server and core counts.

Another strategy is to decouple your storage and compute processes. When these are combined, they must be scaled together. On the cloud, compute and storage can be separated. Decoupling makes elasticity easier to achieve and manage. Many people question this because SQL Server instances often contain logic to ingest or process data before it is stored in a schema. Many times, ETL logic is written within the SQL Server processing engine and the servers are sized to handle a large volume of ETL processes. These ETL processes often run a couple of times a day, and the capacity is only needed while they are running. By moving the ETL logic outside of the SQL engine, you can utilize the elasticity of the cloud and expand your compute power whenever needed. This will reduce your SQL Server footprint in the long run. Of course, this doesn’t apply to every use case in SQL Server, but if you have ETL logic, enrichment logic, or load logic inside your SQL Server, decoupling might be the correct choice. This was the case with one of our customers, Telarix. See here to read their white paper.

As part of your migration, you should consider running your SQL Server on a Linux Instance within the AWS platform. The majority of current Windows functionality is supported on the Linux platform. Additionally, there is a minimum of a 20% cost benefit of running SQL Server on a Linux instance! This decision will give you the best performance and save you the most money.

You can also use Amazon RDS to upgrade your SQL Server DB instance to a newer engine version. The upgrade is performed in place and is initiated with just a couple of clicks. Before you upgrade, you can create a snapshot backup, use it to create a test DB instance, then upgrade that test instance to the desired new version. You can also opt in to automatic upgrades that take place within your preferred maintenance window.
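The snapshot-then-upgrade flow can also be scripted. A sketch using boto3 (the call names and parameters are real boto3 RDS API calls; the identifiers and version string are illustrative, and the injectable client parameter exists only to make the function easy to test):

```python
def upgrade_sql_server(db_identifier, target_version, rds_client=None):
    """Snapshot an RDS SQL Server instance, then start an in-place engine upgrade."""
    if rds_client is None:
        import boto3  # deferred import keeps the function testable without AWS
        rds_client = boto3.client("rds")
    # Snapshot first, so a test instance can be restored and trialed before upgrading.
    rds_client.create_db_snapshot(
        DBSnapshotIdentifier=db_identifier + "-pre-upgrade",
        DBInstanceIdentifier=db_identifier,
    )
    # Kick off the in-place version upgrade.
    rds_client.modify_db_instance(
        DBInstanceIdentifier=db_identifier,
        EngineVersion=target_version,
        ApplyImmediately=True,
    )
```

Omitting `ApplyImmediately` would instead defer the upgrade to the next maintenance window.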

If you’re considering migrating your Windows Workloads to the cloud, AWS is the optimal choice because of the price, performance, and reliability. This is the perfect time to migrate and modernize your outdated servers. Contact us to learn more or get started on your project.


AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.

AWS customers gain numerous benefits for moving or building new applications using Elastic Beanstalk. These include capacity provisioning, load balancing, auto-scaling, and application health monitoring. At the same time, customers retain full control over the AWS resources powering their application and are able to access the underlying resources at any time.

AWS Elastic Beanstalk is free to use; AWS customers pay only for the underlying AWS resources used to store and run the application.

AWS customers have two options for getting started with Elastic Beanstalk:

  • Re-host: This is the fastest option. No code changes are required, and it needs less testing and migration time. Re-hosting does not use all of the features of the cloud, like multi-AZ for high availability. The DB stays on-premises and the .NET app is moved to EC2 using Elastic Beanstalk.
  • Re-platform: In this method, the DB server is migrated to the cloud using a manual backup and Amazon RDS for SQL Server. A manual backup of the SQL Server database must be created first and then restored into the newly created RDS instance. Next, the RDS connection string is provided to the application, which is deployed using Elastic Beanstalk. This approach requires more testing and comparison between the old and new environments.

In this blog we are going to introduce how to move an existing .NET application to AWS using Elastic Beanstalk.

This tutorial will use the following:

Create a .NET application:

You can use your existing .NET application or create a new one. In this tutorial we are using a sample .NET application that you can download here.

  1. Download the application and verify that it is running.
  2. Create the Environment – We will create an Elastic Beanstalk environment to deploy the application to AWS. Log in to your AWS console and use the Create New Application wizard in the Elastic Beanstalk console to create the application environment. For Platform, choose .NET.

To launch an environment (console):

  1. Open the Elastic Beanstalk console using this preconfigured link: aws.amazon.com/elasticbeanstalk/home#/newApplication?applicationName=tutorials&environmentType=LoadBalanced
  2. For Platform, choose the platform that matches the language used by your application.
  3. For Application code, choose Sample application.
  4. Choose Review and launch.
  5. Review the available options. When you’re satisfied with them, choose Create app.

 

When the environment is up and running, add an Amazon RDS database instance that the application can use to store data. For DB engine, choose sqlserver-ex.

 

 

Add a DB instance to your environment:

  1. Open the Elastic Beanstalk console.
  2. Navigate to the management page for your environment.
  3. Choose Configuration.
  4. On the Database configuration card, choose Modify.
  5. Choose a DB engine, and enter a user name and password.
  6. Choose Apply.

 

Modify the connection string in the application to use the newly created RDS instance and verify that it’s working as expected. In most cases you will want to migrate the existing database to RDS. For more information on this, see here, here, or here.
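When Elastic Beanstalk provisions the database for the environment, it exposes the connection settings to the application as RDS_* environment properties (on the .NET/Windows platform they also surface through ConfigurationManager.AppSettings). The Python sketch below shows the idea via environment variables; the SQL Server connection-string format is illustrative:

```python
import os

def rds_connection_string():
    """Assemble a SQL Server connection string from the RDS_* settings that
    Elastic Beanstalk provides when a database is attached to the environment."""
    return (
        "Data Source={host},{port};Initial Catalog={db};"
        "User ID={user};Password={pw};"
    ).format(
        host=os.environ["RDS_HOSTNAME"],
        port=os.environ["RDS_PORT"],
        db=os.environ["RDS_DB_NAME"],
        user=os.environ["RDS_USERNAME"],
        pw=os.environ["RDS_PASSWORD"],
    )
```

Reading these settings at runtime instead of hard-coding them means the same build works in every environment.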

Deploy the application to AWS using AWS Elastic Beanstalk

  1. In Visual Studio, open the BeanstalkDotNetSample solution (.sln) file.
    1. Note: If you haven’t done so already, you can get the sample here.
  2. On the View menu, choose Solution Explorer.
  3. Expand Solution ‘BeanstalkDotNetSample’ (2 projects).
  4. Open the context (right-click) menu for MVC5App, and then choose Publish to AWS.
  5. Add your AWS account credentials by selecting Account profile to use.
  6. Select redeploy to an existing environment. You should see the Elastic Beanstalk environment created earlier.
  7. On the Application Options page, accept all of the defaults, and then choose Next.
  8. On the Review page, choose Deploy.
  9. Monitor the deployment status in the output box.
  10. When the application has been deployed successfully, the Output box displays “completed successfully”
  11. Return to the AWS Elastic Beanstalk console and choose the name of the application, which appears next to the environment name. 

 

If you follow these simple steps, you can easily migrate your .NET applications to AWS using AWS Elastic Beanstalk.

When you are finished working with AWS Elastic Beanstalk, you can terminate your .NET environment.

How to Terminate your AWS Elastic Beanstalk environment:

  1. Open the Elastic Beanstalk console.
  2. Navigate to the management page for your environment.
  3. Choose Actions and then choose Terminate Environment.

Elastic Beanstalk cleans up all of the AWS resources associated with your environment. This includes EC2 instances, the DB instance, the load balancer, security groups, CloudWatch alarms, and more.

 

In this tutorial, we created a new .NET application and an RDS SQL Server database and deployed them to AWS using AWS Elastic Beanstalk. Following these steps, you can deploy your existing .NET application and migrate the DB to Amazon RDS to get the benefits of high availability and scalability. Alternatively, you can keep your existing on-premises database servers and still gain the benefits of AWS scalability and highly available EC2 instances with AWS Elastic Load Balancing, security, live monitoring, and more.

Using AWS Elastic Beanstalk you can easily deploy your applications and monitor them afterwards by viewing the logs. You can then scale up or down based on your application needs.

Did you try this? Was it helpful? Let us know in the comments!

 

 

 


Security is often the number one concern of our clients, especially when moving their data and applications to the cloud. The public cloud operates on a shared responsibility model. This means that the customer’s cloud provider (for example, AWS) is responsible for security of the cloud, and the customer is responsible for security within the cloud. This distinction can get confusing for new customers, leaving them wondering what they are really responsible for when it comes to security. To help, we have walked through six simple ways to secure your RDS architecture below.

  1. Build your database instance in an Amazon Virtual Private Cloud

Amazon Virtual Private Cloud (VPC) gives you the greatest possible network access control. With Amazon VPC, you have control over your virtual networking environment. For example, you can create subnets, select your own IP address range, and configure routing and access controls. Amazon RDS functionality is the same whether or not your DB instance is running in an Amazon VPC, and there is no additional cost.

 

  2. Encrypt your RDS Resources

You can use RDS encryption to secure your RDS instances and snapshots at rest. RDS encryption uses the industry-standard AES-256 encryption algorithm to encrypt your data on the server that hosts your RDS instance. Data encrypted at rest includes the underlying storage for a DB instance, its automated backups, Read Replicas, and snapshots.

 

  3. Encrypt Data in Transit Using Secure Sockets Layer

You can use Secure Sockets Layer (SSL) connections with DB instances running the MySQL, MariaDB, PostgreSQL, Oracle, or Microsoft SQL Server database engines. Each database engine has a different process for implementing SSL, but you can find step-by-step instructions for each engine here.

 

  4. Use AWS Identity and Access Management

AWS Identity and Access Management (IAM) policies are used to assign permissions. These determine who is allowed to manage RDS resources. You can set different permissions for who can create, describe, modify, and delete DB instances, as well as tag resources or modify security groups.

 

  5. Assign Proper Security Groups

You should use security groups to manage what Amazon EC2 instances or IP addresses can connect to your databases on a DB instance. When a DB instance is first created, its firewall prevents any database access except through rules made by an associated security group.

 

  6. Implement Network Encryption

With Oracle database instances, you can use native network encryption and transparent data encryption to further improve the security of your RDS architecture. Native network encryption encrypts data as it moves to and from a DB instance. Oracle Transparent Data Encryption automatically encrypts data before it is written to storage and automatically decrypts it when the data is read from storage.

 

We hope this blog gave you some fresh ideas on how to secure your RDS architecture! Let us know if you have any questions or issues.


A common scenario that companies face is controlling the usage of the Amazon Relational Database Service (RDS) instances they run for development or testing environments. Stopping these instances when they are not being used can significantly lower the company’s AWS costs.

One of the ways to stop an RDS instance on a schedule is by using Amazon CloudWatch Events in conjunction with an AWS Lambda function written in Python 2.7.

In this example, we have RDS development databases that we need to stop every day at 10 PM EST.

Below are the four steps required to do this:

Step 1 – Create IAM Role/Policy: AWS Lambda will need to assume a role that has access to AWS services like RDS and CloudWatch.

IAM Policy: First we need to create a new policy that we will attach to the IAM role later:

  • From the AWS console, choose the IAM service and click on Policies
  • Click on the “Create Policy” button
  • Click on the JSON tab and use the JSON below to allow access to the required actions in the RDS and CloudWatch Logs services
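The policy JSON isn’t reproduced in this post; a minimal policy along these lines would work (the exact action list is an assumption – tighten Resource to specific ARNs where possible):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "rds:DescribeDBInstances",
        "rds:StopDBInstance"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```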

  • Click on the “Review Policy” button.
  • Provide a name for the policy “policy_stop_rds” and some description
  • Click “Create Policy”

 

IAM Role:

  • From AWS console, choose IAM service and click on Roles
  • Click on “Create Role” button
  • Choose “Lambda” as the service that will use the role and click “Next: Permissions”
  • Search for the policy we created on previous steps “policy_stop_rds,” check it, and then click “Next: Tags”
  • Add any tag key value pairs needed and then click “Next: Review”
  • Choose a Role name “lambda-stop-rds” and then click “Create role”

Step 2 – Create AWS Lambda Function:

  • From AWS console, choose Lambda service. Make sure that you’re in the same region where your RDS resides
  • Click on “Create Function” and choose “Author from scratch”.
    • Name: rdsInstanceStop
    • Runtime: choose Python 2.7
    • Role: choose the existing role “lambda-stop-rds”

  • The lambda function will be created, resulting in this page:

  • In another tab, open the IAM role we created in the previous steps, “lambda-stop-rds”
  • Add the Lambda function ARN to an inline policy in the IAM role we created in the first step (you can get the ARN from the top-right corner of the Lambda function page)
    • Click on “add inline policy”, add the following JSON, replace the resource with the ARN from the step above, and save the policy with the name “policy_inline_stop_rds”

  • Designer: From Lambda function page, make sure that resources section has “AWS lambda” added.

  • Function Code: Add the following function code to the Lambda function:
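The original function code isn’t shown here; a minimal sketch of such a handler might look like the following (the environment-variable name DB_INSTANCE_NAME and the optional injectable client are illustrative choices made to keep the sketch testable):

```python
import os

def lambda_handler(event, context, rds_client=None):
    """Stop the RDS instance named in the DB_INSTANCE_NAME environment variable."""
    if rds_client is None:
        import boto3  # deferred import keeps the handler easy to test
        rds_client = boto3.client("rds")
    name = os.environ["DB_INSTANCE_NAME"]
    # stop_db_instance fails if the instance is not in the 'available' state,
    # which is the error mentioned in the testing step below
    rds_client.stop_db_instance(DBInstanceIdentifier=name)
    return {"stopped": name}
```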

  • Environment variables: To make this Lambda function reusable in other environments, we used an environment variable to define the DB instance name

  • Tags: Add a key value pairs of tags as needed
  • Execution Role: use the existing execution role “lambda-stop-rds”
  • Keep all other settings with default values

 

Step 3 – Test the Lambda function:

  • On the top right corner of the screen, select Test and Configure test events
  • Select Create New Test Event and select the Hello World event template
  • When you click Save, the execution should succeed
    • If your DB is not started, there is nothing to stop and hence you will get an error message similar to “Instance <> is not in available state.”

 

Step 4 – Schedule the run of the Lambda function:

  • From the AWS console, choose CloudWatch and click on Rules
  • Click on “Create rule”
  • Choose “Schedule” and use the cron expression “0 3 * * ? *” to run the event every day at 10 PM EST (CloudWatch cron expressions are evaluated in UTC, and 10 PM EST is 3:00 AM UTC)

  • Add the Lambda function we created previously as a target by clicking “Add target”, and then click on “Configure details”

  • Fill in the rule name, e.g. “stop-rds”, and the description
  • Click “Create rule”
  • After this, wait until 10 PM EST to see if the Lambda function gets triggered. You can check this in the CloudWatch Logs under the log group “/aws/lambda/rdsInstanceStop”

 

We hope this was helpful for you! Let us know if you have any issues or recommendations.

 


As part of the digital transformation, companies are moving their infrastructure and applications to the cloud at a faster rate than ever.

There are many approaches to migrating to the cloud – each with their own benefits and drawbacks. It’s important to be knowledgeable on each option in order to make the best decision possible for your organization.

The three primary choices are Rehost, Replatform, and Refactor, which we will walk through now.

 

 

 

Rehost

Rehosting, often also called lift and shift, is the simplest of the migration options. Applications are simply moved from on-premises to the cloud without any code modification. This is considered a beginner’s approach to migration. Benefits include that it’s very fast, requires few resources, causes minimal application disruption, and is cheaper than maintaining an on-premises environment. Because this migration is so simple, companies typically don’t benefit from cloud-native features like elasticity, which can be achieved with the other migration techniques.

Overall, if a company is looking for a quick and easy migration that doesn’t disrupt the existing application workflow, Rehosting is the best choice. This is a fast solution for organizations that need to reduce their on-premises physical infrastructure costs as soon as possible. Thankfully, companies can always re-architect and optimize their application once they are already in the cloud.

 

Replatform

Replatforming involves moving a company’s assets to the cloud with a little up-versioning. A portion of the application is changed or optimized before moving to the cloud. Even a small amount of cloud optimization (without changing the core application structure) can lead to significant benefits. This approach takes advantage of containers and VMs, only changing application code if needed to use base platform services.

Replatform provides a suitable middle ground between rehosting and refactoring. It allows companies to take advantage of cloud functionality and cost optimization without using the resources required for refactoring. This approach also allows developers to use the resources they are used to working with, including development frameworks and legacy programming languages. This approach is slower than rehosting and doesn’t provide as many benefits as refactoring.

Organizations should choose this approach if they are looking to leverage more cloud benefits and minor changes won’t disrupt their application’s functioning. Also, if a company’s on-premises infrastructure is complex and is preventing scalability and performance, slight modifications that allow them to harness these features in the cloud would be very worthwhile.

 

Refactor

The most complex option is refactoring, which includes a more advanced process of rearchitecting and recoding some portion of an existing application. Unlike Replatforming, this option makes major changes in the application configuration and the application code in order to best utilize cloud-native frameworks and functionality. Due to this, refactoring typically offers the lowest monthly cloud costs. Customers who refactor are maximizing operational cost efficiency in the cloud. Unfortunately, this approach is also very time consuming and resource-intensive.

Companies should choose to refactor when there is a strong business need to add features and performance to the application that is only available in the cloud, including scalability and elasticity. Refactoring puts a company in the best position to boost agility and improve business continuity.

 

There is no migration approach that is always the best option for every case. Rather, companies should take into consideration their short- and long-term business goals and choose what is right for their current situation. If you need help deciding, contact us to discuss options – we’re always happy to talk!

 


If you’re here, you’re probably experiencing a common issue: trying to access a certain port on an EC2 Instance located in a private subnet of the Virtual Private Cloud (VPC). A couple of months ago, we got a call from one of our customers that was experiencing the same issue. They wanted to open up their API servers on the VPC to one of their customers, but they didn’t know how. In particular, they were looking for a solution that wouldn’t compromise the security of their environment. We realized this issue is not unique to our customer, so we thought a blog post explaining how we solved it would be helpful!

To provide some context, once you have an API server within your VPC, it is closed to the outside world. No one can access or reach that server because of the strong firewall around it. There are a few ways around this, including Virtual Private Network (VPN) connections to your VPC, which allows you to open up private access. Unfortunately, this is not a viable solution if you need to open up your API server to the world, which was the case with our customer. The goal was to provide direct access from the internet outside the VPC for any user without VPN connection.

In order to solve this issue for our customer, one of the architecture changes we recommended was adding an internet-facing AWS TCP Network Load Balancer on the public subnet of the VPC. In addition to this load balancer, we also needed to create an instance-based target group.

Keep reading to learn how you can do this – we even included an architecture diagram to make things easier! Please note that our example includes fake IP addresses.

Problem: Accessing an API endpoint in an EC2 Instance in a Private Subnet from the Internet.

Suggested AWS Architecture Diagram:

 

 

Features of our diagram:

  • Multi AZ: we used a private and public subnet in the same VPC in two different availability zones.
  • Multi EC2 (API Servers): we deployed an API server in each private subnet in each availability zone.
  • Multi NAT Gateways: a NAT gateway will allow the EC2 instances in the private subnets to connect to the internet and achieve high availability. We deployed one NAT gateway in the public subnets in each availability zone.
  • TCP Load balancer health checks: a TCP load balancer will always redirect any user’s requests to the healthy API servers. In case one AZ goes down, there will be another AZ that can handle any user’s requests.

Although we did not make this change, you can also implement Multi-Region to handle a region failure scenario and enable higher availability.

VPC Configurations:

Subnet             AZ          CIDR            IGW Route Out   NAT GW Route Out
public-subnet-a    us-east-1a  172.16.0.0/24   Yes             No
public-subnet-b    us-east-1b  172.16.3.0/24   Yes             No
private-subnet-a   us-east-1a  172.16.1.0/24   No              Yes
private-subnet-b   us-east-1b  172.16.2.0/24   No              Yes
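As a quick sanity check, the subnet CIDRs above are disjoint and all fit inside the VPC. The check below uses Python's standard `ipaddress` module (the 172.16.0.0/16 VPC CIDR is an assumption the table doesn't state):

```python
import ipaddress
from itertools import combinations

vpc = ipaddress.ip_network("172.16.0.0/16")  # assumed VPC CIDR
subnets = {
    "public-subnet-a": ipaddress.ip_network("172.16.0.0/24"),
    "public-subnet-b": ipaddress.ip_network("172.16.3.0/24"),
    "private-subnet-a": ipaddress.ip_network("172.16.1.0/24"),
    "private-subnet-b": ipaddress.ip_network("172.16.2.0/24"),
}

# Every subnet must be contained in the VPC CIDR...
all_inside_vpc = all(s.subnet_of(vpc) for s in subnets.values())
# ...and no two subnets may overlap each other.
no_overlaps = not any(a.overlaps(b)
                      for a, b in combinations(subnets.values(), 2))
```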

 

EC2 Configuration:

Name          AZ          Subnet             Private IP    Security Group
API Server1   us-east-1a  private-subnet-a   172.16.1.**   Allow inbound TCP port 5000 from 0.0.0.0/0 or a specific source IP
API Server2   us-east-1b  private-subnet-b   172.16.2.**   Allow inbound TCP port 5000 from 0.0.0.0/0 or a specific source IP

 

Solution:

  • Create a TCP network load balancer:
    • Internet facing
    • Add listener on TCP port 5000
    • Choose public subnets in the same availability zones (AZs) as your private subnets
  • Create an instance based target group:
    • Use TCP protocol on port 5000
    • For health check, either use TCP on port 5000 or HTTP health check path
    • Add the two API servers to the target instances to achieve high availability and balance the request load between different servers
  • Once the target instances (API servers) become healthy, you will be able to access the API endpoints from the public internet directly using the new TCP load balancer DNS name or elastic IP address on port 5000
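Once the load balancer is up, you can verify reachability from outside with a plain TCP probe – essentially what the NLB's TCP health check does. A small sketch (the DNS name in the comment is a placeholder):

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds --
    roughly what an NLB TCP health check verifies."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. can_connect("my-nlb-1234.elb.us-east-1.amazonaws.com", 5000)
```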

 

References:

https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html


IDC recently produced a report analyzing the Windows Server Market (access a summary here). The report discovered that more and more organizations are transitioning their Windows workloads to the public cloud. Windows Server Deployments in the cloud more than doubled from 8.8% in 2014 to 17.6% in 2017. Migrating to the cloud allows organizations to grow and work past the limitations of on-premises data centers, providing improved scalability, flexibility, and agility.

The report found that of all Windows Cloud Deployments, 57.7% were hosted on Amazon Web Services (AWS). AWS’s share of the market was nearly 2x that of the nearest cloud provider in 2017. Second in line was Microsoft Azure, which hosted 30.9% of Windows instances deployed on the public cloud IaaS market. Over time, the Windows public cloud IaaS market will continue to expand because of the growing usage of public cloud IaaS among enterprises and the movement of Windows workloads into public cloud IaaS.

When it comes to Windows, AWS helps you build, deploy, scale, and manage Microsoft applications quickly and easily. AWS also has top-notch security and a cost-effective pay-as-you-go model.  On top of this, AWS provides customers with a fully-managed database service to run Microsoft SQL Server. These services give customers the ability to build web, mobile, and custom business applications.

This raises the question: why is AWS the most sought-after cloud hosting service when there are other options in the market? We believe it is because of the breadth of services AWS offers. For software companies in particular, if you’re looking to migrate and rearchitect a legacy system, AWS provides an excellent set of services for making your application cloud native. The richness of these services often allows companies to decouple their legacy architecture without needing to overhaul the entire legacy application on their Windows platform.

Let’s walk through a few examples. For starters, take an application that traditionally hosts storage-intensive data such as video, pictures, and documents. This application can keep its Windows platform as is and move the files into Amazon S3 with minimal architecture change. AWS allows its customers to decouple their storage and move it into S3 in a cost-effective, secure, and efficient manner. Next, applications with heavy back-end processing can benefit from services such as AWS Lambda, Amazon EMR, and Spark. AWS allows customers to decouple their compute needs into the services relevant for their application. Lastly, imagine an application that hosts significant historical data with requirements to search or query that data. Traditionally, some customers would need to keep all the historical and archived data in a database, but with AWS they can maintain the same architecture, move the archived data to S3, and use services like Amazon Athena and Elasticsearch to query and search this data without needing a database. This reduces their database footprint and cuts down on costs.

Examples like these demonstrate why we believe AWS is a superior cloud-hosting service for Windows workloads. If a company is looking at their architecture holistically, AWS provides a comprehensive solution for compute, networking, storage, security, analytics, and deployment. This has been proven to us time and time again through customer migrations.

dbSeer has a strong track record of helping customers successfully migrate to the AWS Cloud. Contact us anytime to learn more!


Earlier this year, we wrote a blog about how to use AWS Auto Scaling with Logi Analytics Applications. In that blog, we promised to release a step-by-step guide outlining the technical details of how a Logi Application can be configured to harness the scalability and elasticity features of AWS. If you were wondering when that would be released, the wait is over and you have come to the right place! Without further ado…

Enabling a multi-web server Logi application on AWS Windows instances requires the right configuration for some of the shared Logi files (cache files, secure key, bookmarks, etc.). To support these shared files, we need a shared network drive that can be accessed by the different Logi web servers. Currently, EFS (Elastic File System) is not supported on Windows on AWS. Below we describe how EFS can be made available to Windows servers so that you can utilize the scalability features of Logi.

Setting Up the File Server
Overview:
In order for our distributed Logi application to function properly, it needs access to a shared file location. This can be easily implemented with Amazon’s Elastic File System (EFS). However, if you’re using a Windows server to run your Logi application, extra steps are necessary, as Windows does not currently support EFS drives. To work around this constraint, we create Linux-based EC2 instances to serve as intermediary file servers. The EFS volumes will be mounted there, and our Windows servers will access the files via the Samba (SMB) protocol.

Steps:

  1. Create EC2:
    • Follow the steps as outlined in this AWS Get Started guide and choose:
      • Image: “Ubuntu Server 16.04 LTS (HVM), SSD Volume Type”
      • Instance type: your desired type, e.g. “t2.micro”
  2. Create AWS EFS volume:
    • Follow the steps listed here and use same VPC and availability zone as used above
  3. Set up AWS EFS inside the EC2:
    • Connect to the EC2 instance we created in Step 1 using SSH
    • Mount the EFS to the EC2 using the following commands:
      sudo apt-get install -y nfs-common
      sudo mkdir -p /mnt/efs
      sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 EFS_IP_ADDRESS_HERE:/ /mnt/efs
  4. Re-export the NFS share for use from Windows:
    • Give our Windows user access to the files. We’ll do this with Samba, installing the SMB (Samba) services on the Ubuntu EC2
    • Run the following commands:
      sudo apt-get install -y samba samba-common python-glade2 system-config-samba
      sudo cp -pf /etc/samba/smb.conf /etc/samba/smb.conf.bak
      sudo bash -c 'cat /dev/null > /etc/samba/smb.conf'
      sudo nano /etc/samba/smb.conf
    • And then, paste the text below inside the smb.conf file:
      [global]
      workgroup = WORKGROUP
      server string = AWS-EFS-Windows
      netbios name = ubuntu
      dns proxy = no
      socket options = TCP_NODELAY

      [efs]
      path = /mnt/efs
      read only = no
      browseable = yes
      guest ok = yes
      writeable = yes
    • Create a Samba user/password. Use the same credentials as your EC2 user
      sudo smbpasswd -a ubuntu
    • Give Ubuntu user access to the mounted folder:
      sudo chown ubuntu:ubuntu /mnt/efs/
    • And finally, restart the samba service:
      sudo /etc/init.d/smbd restart
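Note that the NFS mount created above will not survive a reboot of the file server. A hedged sketch of an /etc/fstab entry that remounts the share automatically at boot (EFS_IP_ADDRESS_HERE is the same mount-target address used earlier; the _netdev option delays mounting until the network is up):

```text
# /etc/fstab entry: remount the EFS share at boot
EFS_IP_ADDRESS_HERE:/  /mnt/efs  nfs4  nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev  0  0
```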

 

Setting up the Application Server

Overview:
Logi applications require setup in the form of settings, files, licenses, and more. In order to accommodate the elastic auto-scaling, we’ll set up one server – from creation to connecting to our shared drive to installing and configuring Logi – and then make an Amazon Machine Image (AMI) for use later.

Steps:

  1. Create EC2:
    • Follow the steps as outlined in this AWS Get Started guide and choose:
      • Image: “Microsoft Windows Server 2016 Base”
      • Instance type: “t2.micro” or whatever type your application requires
  2. Deploy code:
    • Clone your project repository and deploy the code in IIS
  3. Set Up User Access:
    • Allow your application in IIS to access the shared folder (EFS) that we created inside the File server
    • From the Control Panel, choose User Accounts → Manage another account → Add a user account
    • Use the same username and password we created for the Samba user on the Ubuntu file server
    • In IIS, add the new Windows user we created above to the application connection pool: IIS → Application Pools → right-click your project’s application pool → Identity → Custom account → fill in the new username and password we created earlier.
  4. Test EFS (shared folder) connection:
    • To test the connection between Windows application server and Ubuntu file server, go to:
      • This PC → Computer tab → Map network drive → in the Folder textbox, type “\\FILE_SERVER_IP_ADDRESS\efs” → if a credentials window appears, use the new username and password we created earlier.
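Before wiring IIS to the share, it can be worth confirming from the command line that the Samba export is reachable at all. A quick, hedged check using smbclient (FILE_SERVER_IP_ADDRESS is a placeholder; run this from any Linux host that can reach the file server):

```shell
# Install the Samba client tools, then list the contents of the "efs"
# share as the "ubuntu" Samba user created earlier. You will be
# prompted for the Samba password.
sudo apt-get install -y smbclient
smbclient //FILE_SERVER_IP_ADDRESS/efs -U ubuntu -c 'ls'
```

If the listing succeeds here but the Windows mapping fails, the problem is on the Windows credential side rather than the Samba configuration.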

 

Configuring the Logi Application

Sticky and Non-Sticky Sessions
In a standard environment with one server, a session is established with the first HTTP request and all subsequent requests, for the life of the session, will be handled by that same server. However, in a load-balanced or clustered environment, there are two possibilities for handling requests: “sticky” sessions (sometimes called session affinity) and “non-sticky” sessions.

With sticky sessions, each server handles all requests for the sessions it establishes, but shared resources must still be centralized and session state managed. Cached data (the rdDataCache folder), saved Bookmark files, the _metaData folder, and saved Dashboard files must live in a centralized, shared location because they must be accessible to all servers in the cluster.

Managing Session State
IIS is configured by default to manage session information using the “InProc” option. For both standalone and load-balanced, sticky environments, this option allows a single server to manage the session information for the life of the session.

Centralization of Application Resources
In a load-balanced environment, each web server must have Logi Server installed and properly licensed, and must have its own copy of the Logi application with its folder structure, system files, etc. This includes everything in the _SupportFiles folder such as images, style sheets, XML data files, etc., any custom themes, and any HTML or script files. We will achieve this by creating one instance with all the proper configurations, and then using an AMI.

Some application files should be centralized, which also allows for easier configuration management. These files include:

Definitions: Copies of report, process, widget, template, and any other necessary definitions (except _Settings) can be installed on each web server as part of the application, or centralized definitions may be used for easier maintenance (if desired).

The location of definitions is configured in the _Settings definition, using the Path element’s Alternative Definition Folder attribute, as shown above. This should be set to the UNC path of a shared network location accessible by all web servers, and the attribute value should include the _Definitions folder. Within that folder, create the _Reports, _Processes, _Widgets, and _Templates folders as necessary. Do not include the _Settings definition in any alternate location; it must remain in the application folder on the web server as usual.

“Saved” Files: Many super-elements, such as the Dashboard and Analysis Grid, allow the user to save the current configuration to a file for later reuse. The locations of these files are specified in attributes of the elements.

As shown in the example above, the Save File attribute value should be the UNC path to a shared network location (with file name, if applicable) accessible by all web servers.

Bookmarks: If used in an application, the location of these files should also be centralized:

As shown above, in the _Settings definition, configure the General element’s Bookmark Folder Location attribute, with a UNC path to a shared network folder accessible by all web servers.

 

Using SecureKey security:
If you’re using Logi SecureKey security in a load-balanced environment, you need to configure security to share requests.

In the _Settings definition, set the Security element’s SecureKey Shared Folder attribute to a network path, as shown above. Files in the SecureKey folder are automatically deleted over time, so do not use this folder to store other files. You must create the rdSecureKey folder under the myProject shared folder yourself, since Logi does not create it automatically.

Note: “Authentication Client Addresses” must be replaced later with the subnet IP address ranges of the load balancer VPC after completing the load balancer setup below.
You can specify ranges of IP addresses with wildcards. To use wildcards, specify an IP address, a space character, then the wildcard mask. For example, to allow all addresses in the range 172.16.*.*, specify:
172.16.0.0 0.0.255.255
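If your subnets are defined in CIDR notation, converting to this address-plus-wildcard-mask form is mechanical: each wildcard octet is the bitwise inverse of the corresponding netmask octet. A small helper sketch (the function name is ours, not part of Logi or AWS):

```shell
#!/usr/bin/env bash
# Convert CIDR notation to the "address wildcard-mask" form used by
# Logi's Authentication Client Addresses attribute.
cidr_to_wildcard() {
  local addr=${1%/*} bits=${1#*/} mask=() i o
  for i in 0 1 2 3; do
    # Number of network bits that fall in this octet (0..8).
    o=$(( bits >= 8 ? 8 : (bits > 0 ? bits : 0) ))
    mask[i]=$(( 255 >> o ))   # wildcard octet = inverse of netmask octet
    bits=$(( bits - 8 ))
  done
  echo "$addr ${mask[0]}.${mask[1]}.${mask[2]}.${mask[3]}"
}

cidr_to_wildcard 172.16.0.0/16   # prints: 172.16.0.0 0.0.255.255
```

For example, a /24 subnet such as 192.168.1.0/24 becomes “192.168.1.0 0.0.0.255”.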

Centralizing the Data Cache
The data cache repository is, by default, the rdDataCache folder in a Logi application’s root folder. In a standalone environment, where all the requests are processed by the same server, this default cache configuration is sufficient.

In a load-balanced environment, centralizing the data cache repository is required.

This is accomplished in Studio by editing a Logi application’s _Settings definition, as shown above. The General element’s Data Cache Location attribute value should be set to the UNC path of a shared network location accessible by all web servers. This change should be made in the _Settings definition for each instance of the Logi application (i.e. on each web server).

Note: The “mySharedFileServer” IP/DNS address should be replaced later with the file server load balancer’s DNS name after completing the load balancer setup below.

 

Creating and Configuring Your Load-Balancer

Overview:
You’ll need to set up load balancers for both the Linux file server and the Windows application/web server. This process is relatively simple and is outlined below, and in the Getting Started guide here.

Steps:

  1. Windows application/web servers load balancer:
    • Use classic load balancers.
    • Use the same VPC that our Ubuntu file servers use.
    • Listener configuration: Keep defaults.
    • Health check configuration: Keep defaults and make sure that the ping path exists, e.g. “/myProject/rdlogon.aspx”
    • Add Instances: Add all Windows web/application servers to the load balancer and check the status. All servers should show “InService” within 20-30 seconds.
    • To enable stickiness, select the ELB > Port Configuration > Edit stickiness > choose “Enable load balancer generated cookie stickiness” and set the expiration period as well.
  2. Linux file servers load balancer:
    • Use classic load balancers.
    • Use the same VPC that the EFS volume uses.
    • Listener configuration: 
    • Health check configuration: Keep defaults and make sure that the ping path exists, e.g. “/index.html”
      • NOTE: A simple web application must be deployed to the Linux file servers to support the health check. Run it inside a web container such as Tomcat, then set the health checker’s ping path to the deployed application’s path.
    • Add Instances: Add all Ubuntu file servers to the load balancer and check the status. All servers should show “InService” within 20-30 seconds.
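The classic load balancer setup above can also be sketched with the AWS CLI, shown here for the Windows web-server balancer. This is a hedged outline: the load balancer name, subnet IDs, and instance IDs are placeholders, and we assume the listener runs on HTTP port 80.

```shell
# Create a classic load balancer with a default HTTP:80 listener
# (subnet IDs are placeholders).
aws elb create-load-balancer \
  --load-balancer-name logi-web-elb \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstancePort=80" \
  --subnets subnet-public-a subnet-public-b

# Point the health check at the Logi ping path used above.
aws elb configure-health-check \
  --load-balancer-name logi-web-elb \
  --health-check "Target=HTTP:80/myProject/rdlogon.aspx,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2"

# Register the web/application servers (instance IDs are placeholders).
aws elb register-instances-with-load-balancer \
  --load-balancer-name logi-web-elb \
  --instances i-webserver1 i-webserver2

# Enable duration-based (load-balancer-generated cookie) stickiness
# with a one-hour expiration, and attach it to the listener.
aws elb create-lb-cookie-stickiness-policy \
  --load-balancer-name logi-web-elb \
  --policy-name logi-sticky-policy \
  --cookie-expiration-period 3600

aws elb set-load-balancer-policies-of-listener \
  --load-balancer-name logi-web-elb \
  --load-balancer-port 80 \
  --policy-names logi-sticky-policy
```

As with the console flow, the commands require valid AWS credentials and real resource IDs to run.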

 

Using Auto-Scaling

Overview:
In order to achieve auto-scaling, you need to set up a Launch Configuration and an Auto Scaling Group. You can follow the steps in the link here, or the ones outlined below.

Steps:

  1. Create Launch Configuration:
    • Search for and select the AMI that you created above.
    • Use the same security group you used for your app server (Windows) EC2 instance.
  2. Create an Auto Scaling Group
    • Make sure to select the launch configuration that we created above.
    • Make sure to set the group size, i.e. how many EC2 instances you want in the auto scaling group at all times.
    • Make sure to use the same VPC we used for the Windows application server EC2s.
    • Set the Auto scaling policies:
      • Set min/max size of the group:
        • Min: minimum number of instances that will be launched at all times.
        • Max: maximum number of instances that will be launched once a metric condition is met.
    • Click on “Scale the Auto Scaling group using step or simple scaling policies” 
    • Set the required values for:
      • Increase group size
        • Make sure that you create a new alarm that will notify your auto scaling group when the CPU utilization exceeds certain limits.
        • Make sure that you specify the action “add” and the number of instances we want to add when the above alarm is triggered.
      • Decrease group size
        • Make sure that you create a new alarm that will notify your auto scaling group when the CPU utilization is below certain limits.
        • Make sure that you specify the action “remove” and the number of instances we want to remove when the above alarm is triggered.
    • You can set the warm up time for the EC2, if necessary. This will depend on whether you have any initialization tasks that run after launching the EC2 instance, and if you want to wait for them to finish before starting to use the newly created instance.
    • You can also add a notification service to be notified whenever an instance is launched, terminated, fails to launch, or fails to terminate during the auto scaling process.
    • Add tags to the auto scaling group. You can optionally choose to apply these tags to the instances in the group when they launch. 
    • Review your settings and then click on Create Auto Scaling Group.
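The auto-scaling steps above can be sketched with the AWS CLI as well. This is a hedged outline: the names, AMI ID, security group ID, and subnet IDs are placeholders, and the CloudWatch alarms that trigger the policies would be created separately.

```shell
# Launch configuration based on the AMI created earlier
# (AMI and security group IDs are placeholders).
aws autoscaling create-launch-configuration \
  --launch-configuration-name logi-app-lc \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --security-groups sg-0123456789abcdef0

# Auto Scaling group in the same VPC subnets as the Windows app servers.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name logi-app-asg \
  --launch-configuration-name logi-app-lc \
  --min-size 2 --max-size 6 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-app-a,subnet-app-b"

# Simple scaling policies: add one instance on the high-CPU alarm,
# remove one on the low-CPU alarm.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name logi-app-asg \
  --policy-name cpu-scale-out \
  --scaling-adjustment 1 \
  --adjustment-type ChangeInCapacity

aws autoscaling put-scaling-policy \
  --auto-scaling-group-name logi-app-asg \
  --policy-name cpu-scale-in \
  --scaling-adjustment -1 \
  --adjustment-type ChangeInCapacity
```

The policy ARNs returned by put-scaling-policy are what you attach as alarm actions when creating the CloudWatch alarms.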

 

We hope this detailed how-to guide was helpful in setting up your Logi Application on AWS. Please reach out if you have any questions or other how-to guide requests. We’re always happy to hear from you!

 

References:

https://en.wikipedia.org/wiki/Load_balancing_(computing)

https://devnet.logianalytics.com/rdPage.aspx?rdReport=Article&dnDocID=2222

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html

https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/EC2_GetStarted.html

https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancer-getting-started.html

https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-sticky-sessions.html

https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-load-balancer.html

https://docs.aws.amazon.com/toolkit-for-visual-studio/latest/user-guide/tkv-create-ami-from-instance.html

 

 
