
Easy Access
Automation and accessibility. If I had to choose the two most important characteristics of digital transformation for businesses today, these would be it. While automation may come as a no-brainer, accessibility might leave you wondering what I mean. Yet accessibility is what holds the future of business intelligence.

Automation needs little explanation. As data collection methods become more sophisticated, so do the analytics performed on that data. Automating collection, analysis, and even action on the information is the obvious advancement: it removes dependencies from data-based decisions.
The benefits of technology-assisted intelligence have enhanced our lives for quite a while now. Early on, it was a luxury reserved for the computer-science literate. Anyone who wanted to enhance their business decisions with technology-fueled analytics needed to hire a tech-savvy analyst, or two, or three. The analyst would devise a system to gather and analyze the data, then make it presentable and digestible for the business decision-maker.

As time went on, the frustration of relying on a middleman to interpret technology-assisted intelligence fueled innovation. Out of that frustration came simpler user experiences designed to accommodate non-technical decision-makers. In other words, user experiences are trying to make the intelligence digestible, accessible to every user.

For business intelligence solutions, making information accessible to a broader audience of users proves indispensable. Accessibility brings relevance to business users, and relevance brings more usability. While automation is already underway as a natural progression of robust digital transformation, let’s keep an eye on maintaining accessibility so we remain relevant.

Thoughts on BI and Analytics as Storytelling
Generally, people are not very good at remembering or retaining an unending list of facts, especially without any context for why those facts matter. However, they can remember facts that are narrated as a story.

In your organization, you should strive for an overall BI or analytics “story.”

Like any story, your story should have a beginning, a middle, and an end. The beginning should establish what you want to convey and why the story matters. The middle should convey the how of the story. And the end should bring it all together as a data narrative.

Data Storytelling = Visualization + Context + Narrative

Data storytelling is not data visualization. Every person has a different perspective and may visualize data differently. For effective storytelling, visualization should be just one component of your story. You also need a textual narrative that explains what the visualization is trying to convey. Ask yourself, “What story are you trying to tell about your organization?” Another important component of storytelling is context: why the story is important or meaningful, and what themes you want to uncover. Without context, neither the visual nor the narrative conveys the full picture. Further, there is no single correct answer for all storytelling. Depending on the narrative, your story could be represented as an annotated dashboard, a flowchart, a slide show, an infographic, or a storyboard.

Also note that not all stories are “happily ever after.” Your story should convey the results as they are and let your users actively explore and question them based on their experience and needs. Only then will the data help in proper decision-making.


At the beginning of the month, I had the pleasure of attending the Gartner Data & Analytics Summit in Grapevine, TX. I was particularly interested in hearing about the latest trends as well as the Gartner maturity model for analytics (since our own dbSeer framework for analytics is based on that model).

It was most interesting to hear where organizations today fall across the different phases of this maturity model: 74% of organizations currently have Descriptive Analytics, 34% have Diagnostic Analytics, 11% have Predictive, and only 1% have Prescriptive. However, Gartner notes that the market is changing rapidly, predicting a growing trend toward Predictive and Prescriptive Reporting in the next 3-5 years.

Here’s a more detailed recap of how Gartner defines these four different phases:

Descriptive
What is the Descriptive approach? This approach provides trends and information based on historical data and answers the question, “What happened?”
What does it provide? It can be used to monitor the past and provides consistent and credible reporting. The data is governed and the reports are based on a predefined set of KPIs.
What kinds of analytic capabilities are used? Reports, Dashboards, and OLAP are the most common analytic capabilities used to monitor the data.
What process and governance does this architecture require? Collect the user requirements, prepare the backend data models, and create the front-end visualizations and reports.
What roles in the organization consume or create these types of reports/dashboards? Data Architect, BI Developer, Information Consumer

Diagnostic
What is the Diagnostic approach? This approach helps explore historical data to answer the question “Why did it happen?”
What does it provide? It helps us be more agile and focus on the insights the data provides. It directs you to investigate KPIs or review how KPI values were derived.
What kinds of analytic capabilities are used? Self-service reporting, data discovery, and self-service data preparation are some of the approaches used to explore the data.
What process and governance does this architecture require? Give analysts access to data models, provide a process to certify data, and train analysts on data models and provide support.
What roles in the organization consume or create these types of analytics? Analyst, Data Quality Manager, Analytics Support Expert, Data Engineer

Predictive
What is the Predictive Approach? This approach helps in forecasting trends based on historical data and patterns and helps to answer the question, “What will happen?”
What does it provide? More advanced analytics where machine learning has been applied to predict outcomes based on historical trends and discovered insights.
What kinds of analytic capabilities are used? Big data discovery, predictive modeling, graph analysis, behavioral analytics (used to investigate and predict).
What process and governance does this architecture require? Collecting and processing complex and vast data sets, iterating & optimizing analytic models, confirming predictions with business users.
What roles in the organization consume or create these types of reports/dashboards? Data Scientist, Data Steward, Analytics Enterprise Architect

Prescriptive
What is the Prescriptive approach? This approach builds on predictive data and models to provide and/or automate a set of actions and helps answer the question, “What should I do?”
What does it provide? Decision support and automation to optimize actions; prescribed actions can be automated and industrialized.
What kinds of capabilities are used? Prescriptive engines and simulation & optimization engines (used to prescribe actions, which can then be industrialized).
What process and governance does this architecture require? Optimizing business processes and automating analytical models.
What roles in the organization consume or create these types of reports/dashboards? Analytics Project Manager, Analytics System Integrator.


Enabling a direct export to secure, scalable, and reliable AWS S3 storage will make your Logi Analytics application even more powerful. Content stored in an S3 bucket can be secured with different options, such as bucket policies and permissions, to make it available to specific individuals, to groups, or to the entire internet. S3 guarantees 11 9’s (99.999999999%) durability and 99.99% availability, with virtually unlimited storage.

This blog will walk you through 9 easy steps to build an AWS S3 export plugin and use it within a Logi Analytics application. You first need to create and build the Export to S3 plugin (a dll), and then call this plugin within your Logi Analytics application with all the required parameters. Our example shows how to create and use a .NET dll that checks for an existing file and copies files to AWS S3. (Note that our example uses C#.NET; however, the same plugin can be built in Java.)

Creating the Plugin (dll)

  1. Add a reference to rdPlugin.dll, which is located in the Logi application folder’s \bin directory. The following AWS DLL references are also required:
    • AWSSDK.Core
    • AWSSDK.S3


  2. Include the following namespaces in the class:
    • rdPlugin
    • Amazon
    • Amazon.S3
    • Amazon.S3.Transfer


  3. Create the sendFileToS3 method, which will be called from the Logi Analytics application. (This method must be public and can have only one ref input parameter, of type rdServerObjects.)

    • Note that any parameters added in the Logi plugin call are encapsulated under rdObjects.PluginParameters and can be accessed as shown below (rdObjects is the ref parameter passed to the method).

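    As a rough sketch (the original screenshots are not reproduced here), the plugin class might look like the following. The parameter names other than awsAccessKeyID and awsSecretAccessKey (for example awsBucketName, awsRegion, and localFilePath) are hypothetical placeholders, and PluginParameters is assumed to expose the parameters by name as described above.

    using rdPlugin;
    using Amazon;
    using Amazon.S3;
    using Amazon.S3.Transfer;

    namespace S3Utility
    {
        public class AMZS3Utility
        {
            // Called from the Logi Analytics application. The method must be public
            // and take a single ref parameter of type rdServerObjects.
            public void sendFileToS3(ref rdServerObjects rdObjects)
            {
                // Parameters added to the Logi plugin call arrive under
                // rdObjects.PluginParameters (assumed here to be readable by name).
                string accessKeyID     = rdObjects.PluginParameters["awsAccessKeyID"];
                string secretAccessKey = rdObjects.PluginParameters["awsSecretAccessKey"];
                string bucketName      = rdObjects.PluginParameters["awsBucketName"];    // hypothetical parameter name
                string regionName      = rdObjects.PluginParameters["awsRegion"];        // hypothetical parameter name
                string localFilePath   = rdObjects.PluginParameters["localFilePath"];    // hypothetical parameter name

                // ... initialize the S3 client and transfer the file (see Steps 4-6) ...
            }
        }
    }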

  4. Initialize the S3 client and transfer utility as shown below. The S3 client is initialized with the AccessKeyID, SecretAccessKey, and Region parameters. (These can be null if the proper bucket policy is set or if an appropriate role is attached to the EC2 instance.)
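    Assuming the AWS SDK for .NET (AWSSDK.Core / AWSSDK.S3), the client and transfer utility could be initialized roughly as follows; the region lookup shown here is one common option.

    // Initialize the S3 client. Per the note above, the credentials can be left
    // null/empty when the bucket policy or the EC2 instance role grants access.
    AmazonS3Client s3Client = new AmazonS3Client(
        accessKeyID,
        secretAccessKey,
        RegionEndpoint.GetBySystemName(regionName));   // e.g. "us-east-1"

    // The TransferUtility wraps the S3 client for simple file uploads.
    TransferUtility transferUtility = new TransferUtility(s3Client);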
  5. Optionally, check whether the file already exists and warn about the overwrite, as sketched below (otherwise, existing files are overwritten by default).
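    One way to sketch that check is to request the object’s metadata and treat a “not found” response as “does not exist”; the key name chosen here is illustrative.

    // Optional overwrite check: does the target object already exist?
    string keyName = System.IO.Path.GetFileName(localFilePath);   // illustrative key choice
    bool fileExists = false;
    try
    {
        s3Client.GetObjectMetadata(bucketName, keyName);
        fileExists = true;                      // metadata returned, so the object exists
    }
    catch (AmazonS3Exception ex)
    {
        if (ex.StatusCode != System.Net.HttpStatusCode.NotFound)
            throw;                              // a different error; surface it
    }
    // A real implementation could warn the user when fileExists is true.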
  6. Transfer the file to S3 as shown below:
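    A minimal sketch of the transfer itself, using the TransferUtility initialized in Step 4:

    // Upload the local file to the target bucket under the chosen key.
    transferUtility.Upload(localFilePath, bucketName, keyName);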
  7. Build the dll using the S3Utility namespace and AMZS3Utility class, and then copy the S3Utility, AWSSDK.S3, and AWSSDK.Core DLLs to the Logi application folder’s \bin directory.

Using the Plugin in the Logi Application

  1. Note the class type name S3Utility.AMZS3Utility (namespace.classname). You are now all set to use the plugin in the Logi app: calling the plugin method sendFileToS3 from a task is straightforward.
  2. Pass the parameters to the plugin (these are the parameters accessed in the code in Step 4 above).
    1. awsAccessKeyID and awsSecretAccessKey are required to authenticate and process the S3 request.
    2. With an appropriately adjusted S3 bucket policy, the same code can export to S3 without awsAccessKeyID and awsSecretAccessKey.


A descriptive dashboard essentially answers the question, “What happened?” It provides analytics that describe and summarize past events and trends, such as: Who are my highest-paying customers? What were the sales over the past month, quarter, or year? What was the customer churn during a specific time frame (such as a major organizational shift)? The target audience includes executives and upper management who want to look at the bigger picture, as well as managers who would like to identify potential areas of concern and then drill down to diagnose the issue.

So what types of analytics or reports should you build for a descriptive dashboard?

  • Key indicators on the top for a quick review. (The main purpose of a dashboard is to provide instant access to information after all.)
  • Supporting high level charts or table summaries that describe the KPIs.  If used with numerical KPIs, descriptive data can provide context to help you effectively make quick decisions.
  • Charts/tables to show:
    • Top N categories
    • Trends on a timeline
    • Comparison of top categories

To make your descriptive dashboard as useful as possible, consider these important factors when planning and creating it.

  • The right key performance indicators (KPIs) should be identified and displayed. At a quick glance, they should describe the main essence of the dashboard.
  • Charts and tables should support those key indicators. They should show how the key indicator values were derived.
  • Charts and tables should be small and simple. They should show the top/bottom 5 or 10, not a list of everything.
  • Key filters should be available to filter the high-level data. These filters should apply to the entire dashboard, not to individual elements.
  • The capability to drill down from the high-level data to view details must be available.

Dashboards are immensely useful for businesses, because they can help give you key analytics to understand your business at a glance; they can help you communicate the right information to key stakeholders and users; and they can make your employees more efficient by giving them actionable information at their fingertips. But to get any or all of these benefits, there are seven questions you should ask before designing a business dashboard. If you ask these questions, you will be on the right track to delivering a useful dashboard that is adopted and used by the right people in your business.

We are excited to announce our sponsorship of a local cricket team, the Stormers. Our very own dbSeer team member, Nanda Kumar, is not only an avid player but also the team captain.


The team formed in the Fall of 2012 and plays two seasons per year, in the Spring and the Fall. The Stormers are part of the Loudoun County Cricket League (LCCL)  http://cricclubs.com/vlccl as well as the Potomac Cricket League (PCL)  http://cricclubs.com/PCL, both hard tennis ball cricket leagues near Leesburg, VA.

It’s fun for us to follow the Stormers as they play throughout the year and help to bring more fans locally to experience the sport of cricket, and we wish them luck on a successful season. You can check out the league web pages to find out current game schedules.  Come on out and watch the Stormers play!

Has BI reached its peak?

I recently read a piece published on March 30, 2016 by the siliconangle.com website questioning whether traditional business intelligence (BI) has reached its peak. It raised a couple of interesting points. Due to the recent “flattening of consumption of business intelligence” and the consolidation of BI players (via mergers and acquisitions, for example), some analysts are proposing that this signals BI has reached its peak. In an interview with siliconangle.com, Ian Andrews, VP of Products at Pivotal, Inc., explained that this would imply the industry is just shifting customers among vendors and that we may be “hitting the peak platform for data.” He argues, however, that this is not the case and that “we’re just at the beginning from a platform perspective.” He further states that consumption of data is where the focus now is: delivering “information in context.”

I would agree with him that BI is shifting toward extracting the complexity out of analytics and pushing “information in context.” This means pushing out information to be consumed and used by any user in the organization, and embedding the relevant analytics into the applications that users rely on to complete their daily workflows, so that business processes can be improved right when they need to be. We are shifting toward delivering the right information, right when users need it and right where they need it. Further, analytics are becoming critical to more non-traditional stakeholders throughout the organization, switching from IT-led, system-of-record reporting to business-led analytics.

In my opinion, the recent self-service push in the analytics market has gone too far, and, while it has a role to play in the “information in context” model, it will not be the only path toward the “democratization” of analytics. Embedded BI and custom BI applications will be key components of “information in context.” The democratization of analytics will continue to be the focus in organizations, and BI has certainly not reached its peak. If anything, the opportunities for information in context are far and wide.

The full interview is available on siliconangle.com.


In March, I had the privilege of attending the premier big data conference, O’Reilly and Cloudera’s Strata + Hadoop World, in San Jose, CA. I’ll describe some of the more interesting topics and sessions in more detail below. The key technology areas and trends in focus were:

  • Machine learning
  • Streaming and real-time data processing
  • IoT
  • Real-time Analytics

Hadoop Clusters – There were a few sessions and talks about the challenges of managing Hadoop clusters in enterprise environments. I attended two talks on the topic, by GE and British Telecom. GE’s talk focused on how to use a big data platform to shift the enterprise to a data-driven culture, by opening up the data and creating data lakes where data is accessible to the business. The BT talk was about successful design patterns for data hubs. Both talks highlighted the enterprise approach to data lakes or data hubs and the Hadoop challenge of job management within the cluster.

IoT – Intel hosted a session showcasing a data-streaming platform that helped Levi Strauss locate its items in a store. This solution used an RFID (IoT) tag on each item in the store and a machine learning algorithm (that learns over time) to determine where each item should be located in the store. While the application of the technology was a bit simplistic, the platform itself was very impressive.

Machine Learning – Microsoft hosted a talk on machine learning, in which they showcased research on machine learning and neuroscience. Remarkably, they have developed an algorithm that is able to identify basic thoughts just by analyzing electrical signals from the brain; in this case, the algorithm was able to identify whether an individual was seeing a face or a building. They showed images to a patient for just a few milliseconds, and the computer, with over 90% accuracy, could guess which picture the patient saw.

Real-time analytics – Something that came up in a few sessions was the challenge of applying real-time analytics to massive data. One use case, discussed by MapR, concerned credit card fraud. Combining streaming technology and machine learning, MapR impressively made a sub-second decision on whether a transaction is fraudulent.


Future of Data Analytics

Another key takeaway from the conference was an overview of the future of data analytics. The following diagram created by AMPLab at UC Berkeley is a great representation. In summary, it shows, from the bottom level to the top:

  • Virtualization/distributed file system at the lowest level
  • Compression and encryption at the storage level
  • Spark as the processing engine (notice there is no alternative to Spark!)
  • Access – still too many options (I think this is the issue that needs to be resolved; there is no clear way to access the data)
  • Applications

Data Analytics Diagram
