
AWS Database Migration Pitfalls: You must evaluate all the options to see where each piece fits.

We’re addressing four mistakes commonly made in AWS database migrations. You can also read the first blog, “AWS Database Migration Pitfall #1 of 4: Failing to Set Clear Goals,” and download our eBook for further details on all four.

You Don’t Know What You Don’t Know

It’s essential to conduct thorough research prior to making any major purchase. Getting educated about your AWS database options is a necessary prerequisite to making the selections that will best accomplish your migration goals.

Moving your existing infrastructure in a “lift and shift” style might well be ideal. But you can’t know it is the best path without first considering the alternatives.

Insight from AWS Chief Evangelist

On the AWS blog, AWS Chief Evangelist Jeff Barr wrote, “I have learned that the developers of the most successful applications and services use a rigorous performance testing and optimization process to choose the right instance type(s) for their application.” To help others do this, he proceeded to review some important EC2 concepts and look at each of the instance types that make up the EC2 instance family.

Barr continues, “The availability of multiple instance types, combined with features like EBS-optimization, and cluster networking allow applications to be optimized for increased performance, improved application resilience, and lower costs.”

It is that abundance of options that creates opportunities for optimization, but you have to evaluate the options first to know which configuration is optimal.

AWS Database Migration Choices: Open Source, Relational, Decoupled?

Just as the EC2 instance family comprises numerous instance types, there are multiple database options available to you when migrating to AWS. Choosing between licensed and open source is one of the many AWS database choices you’ll have to make. Be sure to evaluate columnar data stores versus relational databases, and consider NoSQL options as well.

Additionally, AWS offers a wide variety of storage solutions with a corresponding variety of price points. In rearchitecting systems for the cloud, you may want to consider:

  • Keeping your raw data in cheap data storage, such as S3
    • And using services such as Elasticsearch, Spark, or Athena to meet app requirements
  • Decoupling batch processes from storage and databases
  • Decoupling data stores for various apps from a single on-premises data store
  • Streamlining time-consuming, mundane infrastructure maintenance tasks, such as backup and high availability

Developing an AWS Database Migration Plan to Improve Infrastructure

Failing to plan is planning to fail. Our first pitfall, “Failing to Set Clear Goals,” addressed the risks of not setting priorities. Once objectives have been prioritized, you must consider how to rearchitect your legacy workloads in order to best meet your goals.

In addition to maximizing cost savings, rearchitecting can enable you to improve lifecycle maintenance and incorporate new services like real-time data analytics and machine learning.

Be sure to download our eBook for more information and contact us. We can help you figure out how to not just leverage the cloud, but leverage it properly.

You need to balance your organization’s unique AWS database migration objectives

To be fair, there are likely more than four mistakes that can be made in the process of an AWS database migration. But we’ve grouped the most common errors into four key areas. This blog is the first in a series. You can also download our ebook for further details on all four.

Consider AWS Database Migration Objectives

With constant pressure to improve, organizations sometimes move to the cloud in a frenzied rush. Mistakenly thinking that the cloud itself achieves every objective, some abandon proper upfront planning.

Organizations frequently move to the cloud for benefits such as elasticity and cost savings. While everyone moving to the cloud can translate capital expenses into operating expenses, the benefits beyond that vary.

And this variation requires you to set priorities. Every organization is unique, so you must carefully examine your unique objectives.

Common goals include:

  • Reduce costs
  • Improve scalability
  • Reduce maintenance effort and expense
  • Improve availability
  • Increase agility
  • Speed time to market
  • Retire technical debt
  • Adopt new technology

Set Priorities


You may read the above bulleted list and think, “Yes, exactly! I want to achieve that.”

Unfortunately, it may be challenging to accomplish every goal simultaneously without delaying your migration. You must prioritize your organization’s particular goals so you can make plans that appropriately balance your objectives.

Take a Realistic Approach to AWS

It’s becoming ubiquitous, but still, AWS is no panacea. Goals may conflict with one another. Without first establishing priorities, you can’t determine the tactics that will meet your goals.

“Do it all now,” isn’t an effective strategy for success.

AWS Database Migration Issues to Consider

You will likely need to reevaluate your architecture to fully take advantage of all the possible benefits. Issues to take into consideration include:


  • How much downtime your business could sustain
  • Your current licensing situation
  • Third party support contracts
  • How your existing databases are currently used. For example, consider the maximum number of applications that could use a given database
  • Application complexity and where the code runs
  • Skills required, both internally and from an AWS Consulting Partner

 

Various changes are often required, such as shifting to open source services in order to eliminate unnecessary licensing expenses.

 

Tips for Setting AWS Migration Goals

Focus on why you are migrating. Maintaining focus on your specific objectives will shape how you implement.

Make sure all stakeholders are aligned on prioritization. Take the time to work as a team and build as much consensus as possible. Yes, it’s difficult to manage a project by committee, and it can lead to delays. But you should, again, strive to balance these often competing objectives.

Download our eBook for more information and contact us. We can help you determine the ideal pathway for your organization and get started.


As an increasing number of companies move their infrastructure to Microsoft’s Azure, it seems natural to rely on its Active Directory for user authentication. Logi application users can also reap the benefits of this enterprise-level security infrastructure without having to duplicate anything. Even smaller companies that use Office 365 with no other cloud infrastructure beyond email can take advantage of this authentication.

Integrating Logi applications with Microsoft’s Active Directory produces two main benefits: world-class security for your Logi applications, and the simplicity of a single source of authentication. The following post describes how this integration is done.
1. Register Your Application with Microsoft
First, register your application with Azure Active Directory v2.0. This will allow us to request an access token from Microsoft for the user. To do this, navigate to “https://apps.dev.microsoft.com/#/appList” and click the “Add an app” button. After entering your application name, on the following page, click the “Add Platform” button and select “Web”. Under Redirect URLs, enter the URL of your website logon page (sample format: https://<your-domain>/<your-app>/<logon-page>.jsp). Microsoft does not support redirects to http sites, so your page must either use https or localhost. Make note of the redirect URL and application ID for the next step.
2. Create Custom Log-on Page For Logi Application:
Microsoft allows users to give permissions to an application using their OAuth2 sign-in page. This process returns an access token, which has a name, email address, and several other pieces of information embedded within which we use to identify the user. These next steps show you how to create a log-in page that redirects users to the Microsoft sign-in, retrieves the access token, and passes whichever value you want to use to identify the employee to Logi.
1) Download the rdLogonCustom.jsp file or copy and paste its contents into a file. Place it in the base folder of your application.
2) Configure the following settings within the rdLogonCustom.jsp file to match your Logi application:
Change the ‘action’ element in the HTML body to the address of your main Logi application page:
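For illustration only (the page name and path below are hypothetical), the relevant markup looks roughly like this, with the form posting to your main Logi page and carrying the hidden email input described later in this step:

<!-- Illustrative sketch: point the form's action at your main Logi application page -->
<form method="post" action="https://your-server/your-logi-app/rdPage.jsp">
  <!-- hidden input used below to pass the identifying value to Logi as a request parameter -->
  <input type="hidden" id="email" name="email" value="" />
</form>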

Change the “redirectUri” and “appId” in the buildAuthUrl() function to match the information from your application registration with Azure AD v2.0:
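A minimal sketch of the kind of assignments to look for in rdLogonCustom.jsp (the values shown are placeholders, not real registration details, and their exact location in the file may differ):

// Replace with the Application ID and Redirect URL from your Azure AD v2.0 app registration
var appId = "00000000-0000-0000-0000-000000000000";                   // placeholder Application ID
var redirectUri = "https://your-server/your-app/rdLogonCustom.jsp";   // must match the registered Redirect URL
// buildAuthUrl() uses these values when constructing the Microsoft sign-in URL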

The sample log-on page redirects the user to Microsoft’s sign-in page, lets the user sign in, and then redirects back to the log-on page. There, it parses the token for the email address and passes that value to the authentication element as a request parameter via the hidden input.
If you want to use a different value from the access token to identify the user, adjust the key that is read from payload in the “document.getElementById(‘email’).value = payload...” assignment near the bottom of the custom log-on file to match your desired value.

3. Configure Logi App:
In your _Settings.lgx file, add a security element configured to use your custom log-on page (a rough sketch of the configuration appears below):

*If your log-on page and failure page have different names, adjust accordingly.
Under the security element, add an authentication element with a data layer that uses the value found in @Request.email~ to identify the user. Optionally, you can add rights and roles elements to the security element as well.
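As a rough, hypothetical sketch only (the element and attribute names are recalled Logi conventions, not copied from the original post, so verify them against the Logi Info documentation for your version), the _Settings.lgx additions might look something like this:

<!-- Sketch only: attribute names are assumptions; adjust to your Logi version and page names -->
<Security SecurityEnabled="True"
          AuthenticationSource="AuthenticationElement"
          LogonPage="rdLogonCustom.jsp"
          LogonFailPage="rdLogonFail.jsp">
  <!-- Authentication element whose data layer identifies the user via @Request.email~ -->
  <Authentication ID="myAuthentication">
    <DataLayer Type="SQL" ConnectionID="myConnection"
               Source="SELECT UserName FROM Users WHERE Email = '@Request.email~'" />
  </Authentication>
  <!-- Optional: Rights and Roles elements may be added under Security as well -->
</Security>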
In conclusion, utilizing this integration for your Logi applications not only makes your process more efficient by eliminating duplicate authentication, but also adds a level of security backed by Microsoft’s robust infrastructure.


Recently, options for connecting to distributed computing clusters with SQL-on-Hadoop functionality have sprung up all over the world of big data.

Amazon’s Elastic MapReduce (EMR) is one such framework, enabling you to install a variety of Hadoop-based processing tools on an Amazon-hosted cluster and query them with software such as Hive, Spark-SQL, Presto, or other applications.

This post will show you how to use Spark-SQL to query data stored in Amazon Simple Storage Service (S3), and then to connect your cluster to Logi Studio so you can create powerful visualization and reporting documents.

 

Amazon EMR

Amazon Elastic MapReduce is an AWS service that allows you to create a distributed computing analytics cluster without the overhead of setting up the machines and the cluster yourself. Using Amazon’s Elastic Compute Cloud (EC2) instances, EMR creates and configures the number of machines you specify, with the software you need, and (almost) everything ready to go on startup.

Spark-SQL

Spark-SQL is an extension of Apache Spark, an open source data processing engine that eschews the MapReduce framework of Hadoop and Hive in favor of a directed acyclic graph (DAG) execution engine. Spark-SQL lets you harness the big data processing capabilities of Spark while using a dialect of SQL based on Hive-SQL. Spark-SQL is further connected to Hive within the EMR architecture, since it is configured by default to use the Hive metastore when running queries. Spark on EMR also uses the Thrift server, a Spark-specific port of HiveServer2, for JDBC connections. As we will see later in the tutorial, this allows us to use Hive as an intermediary to simplify our connection to Spark-SQL.

Logi Analytics

Logi Analytics is an integrated development environment used to produce business intelligence tools, such as data visualizations and reports. Within Logi Studio, one can create powerful and descriptive webpages for displaying data while writing relatively little traditional code, as the Studio organizes (and often generates) the HTML, JavaScript, and CSS elements for the developer. Logi Analytics can connect to a variety of data sources, including traditional relational databases, web APIs, and, as this tutorial will demonstrate, a distributed data cluster via a JDBC connection.

Creating a Cluster

First things first: let’s get our cluster up and running.

  1. For this tutorial, you’ll need an IAM (Identity and Access Management) account with full access to the EMR, EC2, and S3 tools on AWS. Make sure that you have the necessary roles associated with your account before proceeding.
  2. Log in to the Amazon EMR console in your web browser.
  3. Click ‘Create Cluster’ and select ‘Go to Advanced Options’.
  4. On the ‘Software and Steps’ page, under ‘Software Configuration’, select the latest EMR release along with Hadoop, Spark, and Hive.
  5. On the ‘Hardware Configuration’ page, select the size of the EC2 instances your cluster should be built with. Amazon defaults to m3.xlarge, but feel free to adjust the instance type to suit your needs.
  6. On the ‘General Options’ page, make sure you name your cluster, and select a folder in S3 where you’d like the cluster’s logs to go (if you choose to enable logging).
  7. Finally, on the ‘Security Options’ page, choose an EC2 SSH key that you have access to, and hit ‘Create Cluster’. Your cluster should begin to start up, and be ready in 5-15 minutes. When it’s ready, it will display ‘Waiting’ on the cluster list in your console.
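If you prefer scripting this step instead of clicking through the console, an equivalent cluster can be created with the AWS CLI. The sketch below is illustrative only; the release label, key name, instance count, and log bucket are placeholders you would replace:

# Hypothetical example: create a comparable EMR cluster from the AWS CLI
aws emr create-cluster \
  --name "spark-sql-tutorial" \
  --release-label emr-5.13.0 \
  --applications Name=Hadoop Name=Spark Name=Hive \
  --instance-type m3.xlarge \
  --instance-count 3 \
  --ec2-attributes KeyName=my-emr-key \
  --use-default-roles \
  --log-uri s3://my-bucket/emr-logs/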

Connecting via SSH

In order to work on your cluster, you’re going to need to open port 22 in the master EC2’s security group so you can connect to your cluster via SSH.

  1. Select your new cluster from the cluster list and click on it to bring up the cluster detail page.
  2. Under the ‘Security and Access’ section, click on the link for the Master’s security group.
  3. Once on the EC2 Security Group page, select the master group from the list, and add a rule to the Inbound section allowing SSH access to your IP.
  4. You should also allow access to port 10001, as that will be the port through which we’ll connect to Spark-SQL later on. You can also do this via SSH tunneling in your SSH client if you’d prefer.
  5. Open an SSH client (such as PuTTY on Windows or Terminal on Mac) and connect to your cluster via the Master Public DNS listed on your cluster page, and using the SSH key you chose during the configuration. The default username on your EMR cluster is ‘hadoop’ and there is no password.
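For example, from a Mac or Linux terminal the connection might look like the line below (the key file and host name are placeholders); the optional -L flag also sets up the port 10001 tunnel mentioned in step 4:

# Placeholder key file and hostname; -L tunnels port 10001 for the Spark Thrift server
ssh -i ~/my-emr-key.pem -L 10001:localhost:10001 hadoop@ec2-xx-xxx-xx-xx.compute-1.amazonaws.com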

Configuring Permissions

Some of the actions we’re going to take in this tutorial will require the changing of certain file permissions. Make sure that you complete these commands in your SSH terminal before you try to start your Spark service!

  1. Navigate via the cd command to /usr/lib/ and type the command ‘sudo chown hadoop spark -R’. This will give your account full access to the spark application folder.
  2. Navigate via the cd command to /var/log/ and type the command ‘sudo chown hadoop spark -R’. This will give your account access to modify the log folder, which is necessary for when you begin the Spark service.

Importing the Data into Hive from CSV

For this tutorial, we’ll be assuming that your data is stored in an Amazon S3 Bucket as comma delimited CSV files. However, you can use any Hive accepted format – ORC, Parquet, etc – and/or use files stored locally on the HDFS. Refer to the Hive documentation to change the SQL below to suit those scenarios; the rest of the tutorial should still apply.

  1. In your SSH session, use the command ‘hive’ to bring up the hive shell.
  2. Once the shell is opened, use the following SQL to create a table in the Hive metastore – no actual data will be imported yet:

"CREATE EXTERNAL TABLE tablename (<column definitions>) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE LOCATION 's3://bucket-name/path/to/csv/folder';"

  3. Now that the table exists, use the following command to populate the table with data:

"LOAD DATA INPATH 's3://bucket-name/path/to/csv/folder' INTO TABLE tablename;"

  4. If you provide a path to a folder, Hive will import the data from all the CSV files that folder contains. However, you can also specify a specific .csv file instead, if you so desire.
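For instance, to load just a single file (the file name here is hypothetical):

"LOAD DATA INPATH 's3://bucket-name/path/to/csv/folder/my-data.csv' INTO TABLE tablename;"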

Starting Spark-SQL Service

Luckily for us, Amazon EMR automatically configures Spark-SQL to use the metadata stored in Hive when running its queries. No additional work needs to be done to give Spark access; it’s just a matter of getting the service running. Once it is, we’ll test it via the command line tool Beeline, which comes with your Spark installation.

  1. In SSH, type the command: “/usr/lib/spark/sbin/start-thriftserver.sh”
    1. Make sure you’ve configured the permissions as per the section above or this won’t work!
  2. Wait about 30 seconds for the Spark application to be fully up and running, and then type: “/usr/lib/spark/bin/beeline”.
  3. In the beeline command line, type the command: “!connect jdbc:hive2://localhost:10001 -n hadoop”. This should connect you to the Spark-SQL service. If it doesn’t, wait a few seconds and try again. It can take up to a minute for the Thriftserver to be ready to connect.
  4. Run a test query on your table, such as: “select count(*) from tablename;”
  5. Spark-SQL is currently reading the data from S3 before querying it, which slows the process significantly. Depending on the size of your dataset, you may be able to use the command “cache table tablename” to place your data into Spark’s local memory. This process may take a while, but it will significantly improve your query speeds.
  6. After caching, run the same test query again, and see the time difference.

Downloading the JDBC Drivers

Spark Thriftserver allows you to connect to and query the cluster remotely via a JDBC connection. This will be how we connect our Logi application to Spark.

    1. Using an SFTP program (such as Cyberduck), connect to your cluster, and download the Spark and Hive JDBC driver JARs (and their dependencies) from /usr/lib/spark/jars.
    2. To test the connection, download SQL Workbench/J.
    3. Make a new driver configuration, with all of the above jars as part of the library.
    4. Format the connection string as: “jdbc:hive2://host.name:10001;AuthMech=2;UID=hadoop” and hit “Test”.
    5. If your Thriftserver is running, you should see a successful connection.

    Connecting with a Logi Application

    Because we’re using a JDBC connection, you’ll have to use a Java based Logi Application, as well as Apache Tomcat to run it locally. If you don’t already have it, download Tomcat, or another Java web application hosting platform.

    1. Create a Java based Logi Application in Logi Studio, with the application folder located in your Apache Tomcat’s webapps folder.
    2. Add all of the JARs from the previous section to the WEB_INF/lib folder of your Logi application folder.
    3. Within Logi Studio, create a Connection.JDBC object in your Settings definition. The connection string should be formatted as: “JdbcDriverClassName=org.apache.hive.jdbc.HiveDriver;JdbcURL=jdbc:hive2://host.name:10001/default;AuthMech=2;UID=Hadoop”
    4. In a report definition, add a Data Table and SQL Data Layer, connected to your JDBC connection, and querying the table you created in your cluster. Make sure you add columns to your Data Table!
    5. In the command shell, navigate to your Tomcat folder, and start Tomcat via the command: "./bin/catalina.sh start" (on Windows, use catalina.bat).
    6. Try to connect to your Logi webpage (usually at “localhost:8080/your-Logi-App-name-here”) and see your data.

     

    Final Thoughts

    Because EMR clusters are expensive, and the data doesn’t persist, it eventually becomes important to automate this setup process. Once you’ve done it once, it’s relatively easy to use the EMR Console’s “Steps” feature to perform this automation. To create the table, you can use a Hive Step and a .sql file, saved on S3, containing the commands in this tutorial. To change the permissions and start the server, you can use a .sh script saved on S3, run via the script-runner jar. Make sure the .sh script does not include a shebang (#!/bin/bash) line, and that its line endings are Unix-style (LF) rather than those of a Windows/Mac text document.
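    As a rough sketch of that .sh script (the file name and S3 location are hypothetical; the commands simply repeat the permission and startup steps from earlier in this tutorial):

        # spark-setup.sh (hypothetical name), stored on S3 and run as an EMR step via script-runner.jar
        # Note: no shebang line, and save the file with Unix (LF) line endings.
        sudo chown -R hadoop /usr/lib/spark
        sudo chown -R hadoop /var/log/spark
        /usr/lib/spark/sbin/start-thriftserver.sh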

    Another possible solution to the lack of data persistence is to use a remote metastore. By default, Hive on EMR creates a local MySQL metastore on the master node to save the table metadata. However, you can override this behavior and use your own MySQL or PostgreSQL database as a remote metastore that will persist after the cluster is shut down. This is also advantageous for the Logi application, as an embedded local metastore such as Derby does not allow queries to run concurrently. Information on how to configure your cluster to use a remote metastore will be coming in a second tutorial, to be published soon!

     

Automation and accessibility. If I were to choose two of the most important characteristics in digital transformation for businesses today, these two would be it. Automation and accessibility.


While automation may come as a no-brainer, accessibility might leave you wondering what I mean. This is where the future of business intelligence lies. Automation needs little explanation: as data collection methods become more sophisticated, so do the analytics run on that data. Automating the collection and analysis of information, and even acting upon it, is the obvious advancement. It removes dependencies for data-based decisions.
The benefits of technology-assisted intelligence have enhanced our lives for quite a while now. Early on, it was a luxury reserved for the computer-science literate. Anyone who wanted to enhance their business decisions with technology-fueled analytics needed to hire a tech-savvy analyst, or two, or three. The analyst would devise a system to gather and analyze the data, then make it presentable and digestible for the business decision maker. As time went on, the frustration of having a middle-man interpret technology-assisted intelligence fueled innovation.

Simplification of user experiences to accommodate non-technical decision-makers came out of this frustration. In other words, user experiences are trying to make the intelligence digestible and accessible to every user. For business intelligence solutions, accessibility of information to a broader audience of users proves indispensable. Accessibility brings relevance to business users. Relevance brings more usability. While automation is already underway as a natural progression of robust digital transformation, let’s keep an eye toward maintaining accessibility so we remain relevant.

Thoughts on BI and Analytics as Storytelling
Generally, people are not that good at remembering or retaining an unending list of facts, especially with no context for why they are important. However, they can remember facts that are narrated as a story. In your organization, you should strive for an overall BI or analytics “story.”


Like any story, your story should have a beginning, a middle, and an end. The beginning of the story should tell what you want to convey and why the story matters. The middle should convey the how of the story. And, the end should convey a data narrative.

Data Storytelling = Visualization + Context + Narrative
Data storytelling is not just data visualization. Every person has a different perspective and may visualize data differently. For effective storytelling, visualization should be just one component. You also need a textual narrative that explains what the visualization is trying to convey. Ask yourself, “What story are you trying to tell about your organization?” Another important component of storytelling is context: why the story is important or meaningful and what themes you want to uncover. Without context, neither the visual nor the narrative conveys the full picture. Further, there isn’t a single correct format for all storytelling. Depending on the narrative, your story could be represented as an annotated dashboard, a flowchart, a slide show, an infographic, or a storyboard.

Also note that not all stories end “happily ever after.” Your story should convey the results as they are, and let your users actively explore and question based on their experience and needs. Only then will the data help in proper decision-making.

 

 


At the beginning of the month, I had the pleasure of attending the Gartner Data & Analytics Summit in Grapevine, TX. I was particularly interested in hearing about the latest trends as well as about the Gartner maturity model for analytics (since our own dbSeer framework for analytics is based on that model).

It was most interesting to hear where organizations today fall in the different phases of this maturity model: 74% of organizations currently have Descriptive Analytics, 34% have Diagnostic, 11% have Predictive, and only 1% have Prescriptive. However, Gartner notes that the market is changing rapidly, predicting a growing trend toward Predictive and Prescriptive Reporting in the next 3-5 years.

Here’s a more detailed recap of how Gartner defines these four different phases:

Descriptive
What is the Descriptive approach? This approach provides trends and information on historical data and answers the question, “What happened?”
What does it provide? It can be used to monitor the past and provides consistent and credible reporting. The data is governed and the reports are based on a predefined set of KPIs.
What kinds of analytic capabilities are used? Reports, Dashboards, and OLAP are the most common analytic capabilities used to monitor the data.
What process and governance does this architecture require? Collect the user requirements, prepare the backend data models, and create the front-end visualizations and reports.
What roles in the organization consume or create these types of reports/dashboards? Data Architect, BI Developer, Information Consumer

Diagnostic
What is the Diagnostic approach? This approach helps explore historical data to answer the question “Why did it happen?”
What does it provide? It can help us to be more agile and look more at insights that the data provides. It directs you to investigate KPIs or review how KPI values were derived.
What kinds of analytic capabilities are used? Self-service reporting, data discovery, self-service data preparation are some of the approaches to explore the data.
What process and governance does this architecture require? Give analysts access to data models, provide a process to certify data, and train analysts on data models and provide support.
What roles in the organization consume or create these types of analytics? Analyst, Data Quality Manager, Analytics Support Expert, Data Engineer

Predictive
What is the Predictive Approach? This approach helps in forecasting trends based on historical data and patterns and helps to answer the question, “What will happen?”
What does it provide? More advanced analytics where machine learning has been applied to predict outcomes based on historical trends and discovered insights.
What kinds of analytic capabilities are used? Big data discovery, predictive modeling, graph analysis, behavioral analytics (used to investigate and predict).
What process and governance does this architecture require? Collecting and processing complex and vast data sets, iterating & optimizing analytic models, confirming predictions with business users.
What roles in the organization consume or create these types of reports/dashboards? Data Scientist, Data Steward, Analytics Enterprise Architect

Prescriptive
What is the Prescriptive approach? This approach is based on predictive data and models to provide and/or automate a set of actions and to help answer the question, “What should I do?”
What does it provide? Decision support & automation to optimize, actions for prescription, automation and industrialization.
What kinds of capabilities are used? Prescriptive engines, simulation & optimization engines (used to prescribe actions and these could be industrialized).
What process and governance does this architecture require? Optimizing business processes and automating analytical models.
What roles in the organization consume or create these types of reports/dashboards? Analytics Project Manager, Analytics System Integrator.


Enabling a direct export to secure, scalable, and reliable AWS S3 storage will make your Logi Analytics application even more powerful. Contents stored in an S3 bucket can be secured with different security options, such as bucket policies and permissions, to make them available to individuals, to groups, or to the entire internet. S3 is designed for eleven 9s (99.999999999%) of durability and 99.99% availability, with virtually unlimited storage.

This blog will walk you through 9 easy steps to build an AWS S3 export plugin and use it within a Logi Analytics application. You first need to create and build the Export to S3 plugin (a DLL), and then call this plugin within your Logi Analytics application with all the required parameters. Our example shows how to create and use a .NET DLL to check for an existing file and copy files to AWS S3. (Note that our example uses C#.NET; however, the same plugin can be built in Java.)

Creating the Plugin (dll)

  1. Add a reference to rdPlugin.dll, which is located in the Logi application folder’s \bin directory. The following AWS DLL references are also required:
    • AWSSDK.Core
    • AWSSDK.S3


  2. Include the following namespaces in the class:
    • rdPlugin
    • Amazon
    • Amazon.S3
    • Amazon.S3.Transfer


  3. Create the sendFileToS3 method, which will be called from the Logi Analytics application. (This method must be public and can have only one ref input parameter, of type rdServerObjects.)

    • Note that any parameters added in the Logi plugin call are encapsulated under rdObjects.PluginParameters and can be accessed through the ref parameter, rdObjects, that is passed to the method. (A consolidated code sketch follows this list.)

  4. Initialize the S3 client and transfer utility. You need to initialize the S3 client with the AccessKeyID, SecretAccessKey, and Region parameters. (These can be null if the proper bucket policy is set or if the appropriate role is attached to the EC2 instance.)
  5. Optionally, you can check for the existence of the file and provide a warning about overwriting (otherwise, existing files are overwritten by default).
  6. Transfer the file to S3 using the transfer utility (see the consolidated sketch after this list).
  7. Build the DLL (our example uses the namespace S3Utility and the class AMZS3Utility), and then copy the S3Utility, AWSSDK.S3, and AWSSDK.Core DLLs to the Logi application folder’s \bin directory.
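Since the original code screenshots are not reproduced here, the following is a minimal C# sketch of steps 2 through 6 rather than the exact original code. The region and the parameter names bucketName, localFilePath, and s3KeyName are assumptions, as is the way PluginParameters is indexed; adjust them to match your own plugin call and the Logi SDK version you build against.

// Minimal sketch only; parameter names and the PluginParameters accessor are assumptions.
using System;
using rdPlugin;
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;

namespace S3Utility
{
    public class AMZS3Utility
    {
        // Called from the Logi task: public, with a single ref parameter of type rdServerObjects.
        public void sendFileToS3(ref rdServerObjects rdObjects)
        {
            // Parameters supplied by the Logi plugin call (names other than awsAccessKeyID
            // and awsSecretAccessKey are hypothetical).
            string accessKeyId = rdObjects.PluginParameters["awsAccessKeyID"];
            string secretKey   = rdObjects.PluginParameters["awsSecretAccessKey"];
            string bucketName  = rdObjects.PluginParameters["bucketName"];
            string filePath    = rdObjects.PluginParameters["localFilePath"];
            string keyName     = rdObjects.PluginParameters["s3KeyName"];

            // Step 4: initialize the S3 client and transfer utility. The keys may be null
            // when a bucket policy or an EC2 instance role grants access; the region is a placeholder.
            var s3Client = new AmazonS3Client(accessKeyId, secretKey, RegionEndpoint.USEast1);
            var transferUtility = new TransferUtility(s3Client);

            // Step 5 (optional): warn before overwriting an existing object.
            if (ObjectExists(s3Client, bucketName, keyName))
            {
                // Surface a warning or abort here if overwriting is not desired.
            }

            // Step 6: transfer the file to S3 (an existing object with the same key is overwritten).
            transferUtility.Upload(filePath, bucketName, keyName);
        }

        private static bool ObjectExists(IAmazonS3 client, string bucket, string key)
        {
            try
            {
                client.GetObjectMetadata(bucket, key);
                return true;
            }
            catch (AmazonS3Exception e) when (e.StatusCode == System.Net.HttpStatusCode.NotFound)
            {
                return false;
            }
        }
    }
}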

Using the Plugin in Logi Application

  1. Note the class type name, S3Utility.AMZS3Utility (namespace.classname). You are now all set to use the plugin in the Logi app. Calling the plugin method sendFileToS3 from a task is as simple as the task sketch that follows this list.
  2. Pass the parameters to the plugin as shown in the task sketch below (these were the parameters accessed in the code sketch for Step 4 above).
    1. awsAccessKeyID and awsSecretAccessKey are required to authenticate and process the S3 request.
    2. As shown in this example, with an adjusted S3 bucket policy the same code can export to S3 without awsAccessKeyID and awsSecretAccessKey.
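The original screenshot of that task is not reproduced here, so the following is only a hypothetical sketch of what the definition might look like; the element and attribute names are assumptions, so verify the exact Plugin Call syntax against the Logi documentation for your version.

<!-- Hypothetical sketch: element and attribute names are assumptions, not verified Logi syntax -->
<Task ID="taskExportToS3">
  <PluginCall ID="callSendFileToS3"
              PluginDllFilename="S3Utility.dll"
              PluginTypeName="S3Utility.AMZS3Utility"
              PluginMethodName="sendFileToS3">
    <!-- Parameter names must match the keys read inside the plugin method -->
    <PluginParams awsAccessKeyID="YOUR-ACCESS-KEY-ID"
                  awsSecretAccessKey="YOUR-SECRET-ACCESS-KEY"
                  bucketName="my-export-bucket"
                  localFilePath="C:\exports\report.csv"
                  s3KeyName="exports/report.csv" />
  </PluginCall>
</Task>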

 


A descriptive dashboard essentially answers the question, “What happened?” It provides analytics that describe and summarize past events and trends, such as: who are my highest-paying customers; what were sales over the past month, quarter, or year; and what was the customer churn during a specific time frame (such as during a major organizational shift). The target audience includes executives and upper management who want to look at the bigger picture, as well as managers who would like to identify potential areas of concern and then drill down to diagnose the issue.

So what types of analytics or reports should you build for a descriptive dashboard?

  • Key indicators on the top for a quick review. (The main purpose of a dashboard is to provide instant access to information after all.)
  • Supporting high level charts or table summaries that describe the KPIs.  If used with numerical KPIs, descriptive data can provide context to help you effectively make quick decisions.
  • Charts/tables to show:
    • Top N categories
    • Trends on a timeline
    • Comparison of top categories

To make your descriptive dashboard as useful as possible, consider these important factors when planning and creating it.

• The right key performance indicators (KPIs) should be identified and displayed. At a quick glance, they should describe the main essence of the dashboard.
• Charts and tables should support those key indicators. They should show how the key indicator values were derived.
• Charts and tables should be small and simple. They should show the top or bottom 5 or 10 items, not a list of everything.
• Key filters should be available to filter the high-level data. Also, these filters should apply to the entire dashboard and not individual elements.
• Capability to drill down from the high level data to view details must be available.

Dashboards are immensely useful for businesses because they can give you key analytics to understand your business at a glance; they can help you communicate the right information to key stakeholders and users; and they can make your employees more efficient by putting actionable information at their fingertips. But to get any or all of these benefits, there are seven questions that you should ask before designing a business dashboard. If you ask these questions, you will be on the right track to delivering a useful dashboard that is adopted and used by the right people in your business.