“For us it is very important to sponsor this type of creative, fun and challenging student competition. The tasks have historically been quite challenging and it’s very nice that the students in Karlstad get the chance to try this locally. We wish the contestants good luck,” says Caj Rollny, Professional Services Manager at iBiz Solutions.
The contest, which is part of the Nordic Championships in programming, takes place in the building “Vänern” on Saturday, October 6, 2012.
Teams of up to three people will solve 8-12 programming problems in five hours. Each team has access to one of the university’s computers to do the programming on. The tasks are presented to the students when the contest starts and must be solved by the team members without outside help. Each program is then tested by a judge, who approves or returns the task. After the competition a list of results is compiled, and the team that solved the most tasks correctly wins.
The purpose is to encourage interest in programming by giving participants a chance to compete with contestants from Karlstad and other universities in the country, for example Luleå, KTH and Chalmers, which all run the contest on the same day and at the same time. In addition to the honor, the top three teams get prizes sponsored by iBiz Solutions.
Read more about the contest: http://www.cs.kau.se/ncpc12/
In this post I will discuss a problem encountered a while ago when I was responsible for developing a Microsoft .NET class library (DLL) that was going to be used by .NET applications handling messages between BizTalk and old legacy systems. The .NET applications retrieve information from, and store information into, the legacy system and communicate with BizTalk via MSMQ queues. Picture 1 shows the conceptual solution, where messages are sent between BizTalk and an old legacy system.
Picture 1. Messages sent between BizTalk and Legacy system via MSMQ queues.
At first it was straightforward programming using the standard .NET message queuing classes from the _System.Messaging_ namespace. The code block below demonstrates the classes used.
```csharp
using System.Messaging;
...
MessageQueue queue = new MessageQueue();
Message message = new Message("This is a message!");
queue.Send(message, "Message Label");
...
```
Problem encountered

When it was time for integration tests a problem was encountered: the BizTalk MSMQ adapters were not able to interpret the messages sent by the .NET applications, and vice versa.
When sending or receiving messages using the MessageQueue class, something called a Message Formatter is used. A Message Formatter is a class implementing the IMessageFormatter interface, and it determines how messages are serialized into the message body when they are sent and deserialized when they are received. The System.Messaging namespace contains the following Message Formatters:

- XmlMessageFormatter
- BinaryMessageFormatter
- ActiveXMessageFormatter
If no Message Formatter is explicitly specified when calling Send or Receive on a MessageQueue, the XmlMessageFormatter is used by default.
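A formatter can also be chosen explicitly, either on the queue or on an individual message. A minimal sketch (the queue path is a placeholder, not from the original solution):

```csharp
using System.Messaging;

// Hypothetical queue path, for illustration only.
MessageQueue queue = new MessageQueue(@".\Private$\DemoQueue");

// Set the formatter on the queue; it is then used for both Send and Receive.
queue.Formatter = new BinaryMessageFormatter();

// A formatter can also be attached to a single message,
// overriding the queue-level formatter for that message.
Message message = new Message("This is a message!", new ActiveXMessageFormatter());
queue.Send(message, "Message Label");
```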
Why is this a problem?
The XmlMessageFormatter wraps information around the actual data being sent and expects the same information to be wrapped around received data. The ActiveXMessageFormatter and BinaryMessageFormatter serialize the data into a binary representation.
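As an illustration, sending a plain string with the default XmlMessageFormatter does not put the raw text on the queue; the body becomes an XML document (a sketch, with a placeholder queue path):

```csharp
using System.Messaging;

// Hypothetical queue path, for illustration only.
MessageQueue queue = new MessageQueue(@".\Private$\DemoQueue");

// With the default XmlMessageFormatter the string is serialized as XML,
// so the raw body on the queue looks roughly like:
//
//   <?xml version="1.0"?>
//   <string>This is a message!</string>
//
// rather than the plain text a non-.NET consumer such as the
// BizTalk MSMQ adapter might expect.
queue.Send("This is a message!", "Message Label");
```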
Since there is no way to configure which Message Formatter the BizTalk MSMQ adapter uses, the adapter cannot be made compatible with the .NET applications from the BizTalk side. The problem therefore had to be solved in the .NET class library, since we wanted to solve it in one place and have it work in both directions. A possible alternative might have been a custom pipeline component. However, since there was a clean and elegant solution that could be implemented in the .NET class library, we naturally chose that approach.
Solution

The solution to the problem was to write a custom Message Formatter which serializes and deserializes the data to and from a byte stream with UTF-8 encoding, without adding any additional information to the message body.
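A minimal sketch of such a formatter (the class and member names here are illustrative, not taken from the original library):

```csharp
using System.IO;
using System.Messaging;
using System.Text;

// Writes and reads the message body as raw UTF-8 bytes,
// with no XML or binary envelope around the payload.
public class Utf8StringMessageFormatter : IMessageFormatter
{
    public bool CanRead(Message message)
    {
        return message.BodyStream != null;
    }

    // Deserialize: read the body stream back into a UTF-8 string.
    public object Read(Message message)
    {
        using (StreamReader reader = new StreamReader(message.BodyStream, Encoding.UTF8))
        {
            return reader.ReadToEnd();
        }
    }

    // Serialize: write the string as UTF-8 bytes straight into the body stream.
    public void Write(Message message, object obj)
    {
        byte[] bytes = Encoding.UTF8.GetBytes((string)obj);
        message.BodyStream = new MemoryStream(bytes);
    }

    // IMessageFormatter inherits ICloneable.
    public object Clone()
    {
        return new Utf8StringMessageFormatter();
    }
}
```

Assigning an instance of such a formatter to the MessageQueue.Formatter property on both the sending and the receiving side keeps the message body free of any formatter-specific wrapping.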
For more information about creating a custom Message Formatter, see this Microsoft Support article: http://support.microsoft.com/kb/310683
BizTalk User Group Sweden (BUGS) is a community whose goal is to increase the understanding and knowledge of Microsoft products and solutions from the Connected Systems Division, mainly around the BizTalk brand and platform. The target audience is individuals with a technical background (or interest) from consulting organizations interested in systems integration on the Microsoft platform.
On October 3, BUGS is holding a meeting in Stockholm, where two of iBiz Solutions’ experienced architects in the area, Richard Hallgren and Michael Olsson, will each give a presentation.
Richard will talk about effective system documentation in integration projects, discussing and illustrating the various levels of documentation that we use, what you should document manually and what you should automate. He will also outline the different platforms and tools available for managing system documentation, their advantages and disadvantages, as well as some general principles for successful project documentation, and take a look at BizTalk Documentor and BizTalk Web Documentor.
Michael will then hold a presentation about BizTalk and IaaS/PaaS: hybrid integration solutions using BizTalk on-premises and/or in the cloud, with a focus on future opportunities to run BizTalk 2010 R2 and Windows Azure together.
When: October 3 at 18:00
Where: In the premises of Informator at Karlagatan 108, 115 26 Stockholm
More information and registration for the event can be found at: http://biztalkusergroup.se/
Over the last couple of years the BI (Business Intelligence) world has gone through a paradigm shift. OLAP (Online Analytical Processing) based solutions that previously were hugely successful within enterprises are increasingly being replaced by in-memory alternatives.
OLAP solutions once revolutionized BI by storing pre-aggregated results, allowing users to run advanced analytics over huge data sets in the blink of an eye. The problem, however, is that since the data needs to be pre-aggregated, KPIs (Key Performance Indicators) have to be defined beforehand, and the model that creates the cube is more or less fixed, which in the end of course restricts the possibilities for the end user. OLAP solutions have traditionally also been complicated to implement, as they require their own specialists to design and feed the cube with data – a competence that differs a lot from the more commonly known relational data skills.
The performance of relational data products has, however, vastly improved over the years; 64-bit computing has become a commodity and fast computer memory (RAM) has become cheaper and cheaper. This has opened the field for in-memory BI solutions like Spotfire, QlikView, Tableau and PowerPivot, to name a few. An in-memory solution does not use a cube but rather reads the whole relational data set into memory. This approach, combined with modern and attractive user interfaces, makes these solutions more efficient, faster and much more flexible than the traditional OLAP alternatives.
Both BI approaches, however, have one thing in common: without correct and current data the solution is more or less worthless.
Sometimes “correct and current data” is easier said than done. As best-of-breed system architectures become more popular and organizations start taking advantage of Software as a Service (SaaS) solutions, important data gets spread over a vast number of different systems. This makes the collection of data for the BI solution both complicated and fragile.
The “integration done right” in the title of this post refers to an integration that is based on a comprehensive integration strategy, uses shared canonical internal schemas, is loosely coupled and is well documented. If this is the case, creating a central data warehouse of relevant data in a proper format, as a base for the BI solution, is a simple task.
Feeding the data warehouse with data directly via the integration is also often superior to having the BI solution reach into the different systems and read data itself. The integration option is superior because the alternative complicates the whole architecture, creates a fragile solution and puts extra load on online data within the source systems – and it also adds extra time before the data can be analyzed. In a world where two seconds can be a business advantage, data that is as current as possible is critical, and waiting for, say, a scheduled batch extract to run before we can receive data from a system isn’t acceptable!
To summarize: an integration investment and strategy within an organization needs to be implemented correctly and, from the beginning, strive for flexibility and honor things like loose coupling, documentation and canonical formats. In a BI implementation, on the other hand, it is equally important to consider integration a vital part of the project, so as not to end up with a complicated, fragile, non-manageable and non-optimal solution.