Integration “done right” makes advanced business intelligence simple!

Over the last couple of years, the BI (Business Intelligence) world has gone through a paradigm shift. OLAP (Online Analytical Processing) based solutions, previously hugely successful within enterprises, have increasingly been replaced by in-memory alternatives.

OLAP solutions once revolutionized BI by storing pre-aggregated results, allowing users to run advanced analytics over huge data sets in the blink of an eye. The problem, however, is that since the data must be pre-aggregated, KPIs (Key Performance Indicators) have to be defined beforehand, and the model that defines the cube is more or less fixed, which ultimately restricts what the end user can do. OLAP solutions have also traditionally been complicated to implement, as they need their own specialists to design and feed the cube with data, a competence that differs a lot from the more commonly known relational data skills.
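As a toy illustration of that rigidity (the cube contents and KPI below are entirely invented), a pre-aggregated structure can only answer the questions it was designed for:

```python
# Hypothetical pre-aggregated "cube": sales summed per (region, year).
# Only the KPIs chosen at design time can be answered quickly.
pre_aggregated_cube = {
    ("North", 2011): 120_000,
    ("North", 2012): 135_000,
    ("South", 2011): 98_000,
    ("South", 2012): 110_000,
}

def total_sales(region: str, year: int) -> int:
    """Fast lookup of a KPI that was defined beforehand."""
    return pre_aggregated_cube[(region, year)]

print(total_sales("North", 2012))  # instant answer: 135000

# A question nobody anticipated, e.g. "sales per product category",
# cannot be answered from this cube at all; it requires redesigning
# and re-feeding the cube from the source data.
```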

The performance of relational data products has, however, vastly improved over the years; 64-bit computing has become a commodity and fast computer memory (RAM) has become cheaper and cheaper. This has opened the field for in-memory BI solutions such as Spotfire, QlikView, Tableau, and PowerPivot, to name a few. An in-memory solution does not use a cube directly but instead reads the whole relational data set into memory. This approach, combined with modern and attractive user interfaces, has made these solutions faster, more efficient, and much more flexible than the traditional OLAP alternatives.
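A rough sketch of the in-memory idea, using pandas and an invented miniature dataset: once the row-level data sits in RAM, any grouping can be computed ad hoc, including questions nobody anticipated when the data was loaded.

```python
import pandas as pd

# In-memory BI keeps row-level data in RAM instead of a pre-built cube;
# this tiny invented dataset stands in for a full relational extract.
sales = pd.DataFrame([
    {"region": "North", "year": 2012, "category": "Hardware", "amount": 70_000},
    {"region": "North", "year": 2012, "category": "Software", "amount": 65_000},
    {"region": "South", "year": 2012, "category": "Hardware", "amount": 40_000},
    {"region": "South", "year": 2012, "category": "Software", "amount": 70_000},
])

# Any aggregation is possible on the fly; no KPI is fixed up front.
print(sales.groupby("region")["amount"].sum())
print(sales.groupby("category")["amount"].sum())  # the "unanticipated" question
```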

Both BI approaches, however, have one thing in common: without correct and current data, the solution is more or less worthless.

“Correct and current data” is sometimes easier said than done. As best-of-breed system architectures grow more popular and organizations start taking advantage of Software as a Service (SaaS) solutions, important data gets spread over a vast number of different systems. This makes collecting data for the BI solution both complicated and fragile.

The “integration done right” in the title of this post refers to an integration based on a comprehensive integration strategy: one that uses shared canonical internal schemas, is loosely coupled, and is well documented. When this is the case, creating a central data warehouse of relevant data, in a proper format, as the base for the BI solution is a simple task.
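As a minimal sketch of what a shared canonical schema buys you (the types, field names, and source systems below are all invented), each source is translated once at the integration edge, and the warehouse loader only ever sees the canonical format:

```python
from dataclasses import dataclass

@dataclass
class CanonicalOrder:
    """The shared internal schema every source is mapped onto (invented fields)."""
    order_id: str
    customer_id: str
    amount: float

def from_crm(msg: dict) -> CanonicalOrder:
    # The CRM's field names are translated once, at the integration edge.
    return CanonicalOrder(msg["OrderNo"], msg["CustRef"], float(msg["Total"]))

def from_webshop(msg: dict) -> CanonicalOrder:
    # A second source with a completely different message layout.
    return CanonicalOrder(msg["id"], msg["customer"], float(msg["sum"]))

def load_into_warehouse(order: CanonicalOrder) -> None:
    # The loader depends only on the canonical format,
    # never on any individual source system.
    print(f"INSERT order {order.order_id}: {order.amount}")

load_into_warehouse(from_crm({"OrderNo": "A1", "CustRef": "C9", "Total": "99.50"}))
load_into_warehouse(from_webshop({"id": "B2", "customer": "C9", "sum": "120"}))
```

Adding a new source system then means writing one new mapping function; the warehouse side stays untouched, which is exactly the loose coupling the strategy calls for.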

Feeding the data warehouse directly via the integration layer is also usually superior to having the BI solution reach into the different systems and read data itself. The pull alternative complicates the whole architecture, creates a fragile solution, and puts extra read load on the operational data inside the source systems; it also delays the point at which the data can be analyzed. In a world where two seconds can be a business advantage, data that is as current as possible is critical, and waiting for, say, a scheduled batch extract to run before data arrives from a system isn't acceptable!
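One hedged sketch of this push-based feed (the bus, message format, and loader are invented stand-ins): each message that already flows through the integration is also delivered to a warehouse loader, so data is available for analysis as soon as it is produced, rather than after the next batch window.

```python
import queue
import threading

# Invented stand-in for the integration layer's publish/subscribe channel.
integration_bus: queue.Queue = queue.Queue()

def warehouse_loader() -> None:
    """Consumes messages as they flow through the integration and loads them
    into the warehouse right away; no waiting for a scheduled batch extract."""
    while True:
        message = integration_bus.get()
        if message is None:  # demo shutdown signal
            break
        print(f"warehouse <- {message}")  # stand-in for the actual INSERT

loader = threading.Thread(target=warehouse_loader)
loader.start()

# Source systems publish events as part of normal operation; the BI feed
# is just another subscriber and puts no extra read load on the sources.
integration_bus.put({"event": "order_created", "order_id": "A1", "amount": 99.5})
integration_bus.put(None)
loader.join()
```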

To summarize: an integration investment and strategy within an organization needs to be implemented correctly from the beginning, striving for flexibility and honoring things like loose coupling, documentation, and canonical formats. In a BI implementation, on the other hand, it is equally important to treat integration as a vital part of the project, so you don't end up with a complicated, fragile, unmanageable, and non-optimal solution.
