Why full NuGet support for BizTalk projects is important!

Let’s start with a summary for those who don’t feel like reading the full post.

Using NuGet to handle BizTalk dependencies for shared schemas, pipeline components and so on works fine today.

However, as .btproj files aren’t supported by NuGet (as shown in this pull request) and aren’t on the current white list of allowed project types, Package Restore will not work (issue closed as by design here).

Not having Package Restore is of course a problem, as one is then forced to check in all packages as part of the solutions, something that in the end leads to bloated and messy solutions.

So please reach out to your Microsoft contacts and let’s get this fixed!

NuGet

As most people know, NuGet is the package management solution from Microsoft for .NET. It started off as an initiative to further boost open source within the .NET community, and NuGet packages uploaded to the NuGet.org platform are open and available directly within Visual Studio through the NuGet add-in. Currently there are well over 20 000 open packages for everyone to download and use.

Lately, however, there have been lots of discussions within the community about using NuGet as a package manager for internal shared resources as well (by Hanselman and others). Solutions like MyGet allow for private NuGet feeds, only available to those within your organization but still leveraging all the ease and control offered by NuGet.

Using NuGet for references has a number of advantages:

  • Communication: All available resources are directly visible in Visual Studio, and when an update to a used library is available a notification is shown. No more spam mails about changes and no more never-read lists of available libraries.
  • Versioning: A NuGet package has its own versioning. This is useful as it isn’t always optimal to change the dll version, but by using the NuGet package version one can still indicate that something has changed. As you also reference a specific version of a NuGet package from your solution, you always have full control of exactly what version you’re targeting and where to find the built and ready bits.
  • Efficiency: When starting to work on a project with many references, one first has to get the source code for the references from source control and build it (hopefully in the right version … hopefully you have your tags and labels in order …) until all the broken references are fixed. With NuGet references this just works straight away, and you can be sure you get the right version, as the resource isn’t the latest from source control but the actual built dlls that are part of the referenced NuGet package.

NuGet Feeds

As mentioned, NuGet feeds can be public or private. A NuGet feed is basically an RSS-like feed listing the available resources and their versions. A feed and a NuGet server can be a hosted web based solution or something as simple as a folder that you write your NuGet packages to. The NuGet documentation covers these options in depth. The point being that creating your own private NuGet feed is very simple!
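To make such an internal feed available in Visual Studio you register it as a package source. A minimal NuGet.config sketch, assuming a simple file share is used as the feed (the share path and feed name are made up for illustration):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Internal feed: just a folder/file share that the built packages are copied to -->
    <add key="InternalPackages" value="\\buildserver\NuGetPackages" />
    <!-- The public nuget.org feed is kept alongside the internal one -->
    <add key="nuget.org" value="https://www.nuget.org/api/v2/" />
  </packageSources>
</configuration>

NuGet can also read a NuGet.config placed next to the solution, so the whole team picks up the internal feed automatically.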

So if you haven’t already realized it by now: NuGet is not only a great tool for managing public external dependencies, it can add a lot of value for internal references as well.

Couple of relevant NuGet features

  • NuGet Package Restore: Package Restore enables NuGet to download the referenced packages from the package feed at build time. The goal is to avoid having to check in the actual references in source control, as this bloats the version control system and in the end creates a messy solution.
  • NuGet Specification (nuspec) metadata token replacements: All packages are based on a nuspec file that dictates the version, package description and other meta information. By using replacement tokens (such as $version$) NuGet can read some of this information from the AssemblyInfo files. This is far from a critical feature, but it is nice to have as it avoids repeating oneself and keeping the same information in a number of places (see the sketch below).
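A minimal nuspec using replacement tokens could look like the sketch below; the tokens are resolved from the project and its AssemblyInfo when the package is packed against the project file:

<?xml version="1.0"?>
<package>
  <metadata>
    <!-- $id$, $version$, $author$ and $description$ are replaced with values
         from the project and its AssemblyInfo at pack time -->
    <id>$id$</id>
    <version>$version$</version>
    <authors>$author$</authors>
    <description>$description$</description>
  </metadata>
</package>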

BizTalk and NuGet?

A typical BizTalk solution has a number of shared resources such as shared schemas, shared libraries and pipeline components. As these resources usually are shared between a number of projects, they often live in a separate development cycle. So when opening a BizTalk project with such resources it’s not only a lot of work getting the referenced code and building the references, there’s also this nagging feeling that it might not be in the right version and that the source might have changed since the reference was first added.

Another reference issue occurs when using a build server for a solution with references. As the solution has a dependency on the referenced projects, one has to make sure not only that the main solution is fetched to the build workarea by the build server, but also that all the referenced projects are fetched from version control – and again, hopefully in the intended version … This kind of works using TFS Build Service and common TFS Source Control. If however one is using Git and has resources in separate repositories this becomes impossible, as TFS Build Service currently only supports fetching a single repository per build definition to the build workarea … (This issue does not apply to TeamCity, which has a number of different options for dependency management.)

All these issues are actually solved when using NuGet references instead of traditional references, as we can be sure we’re getting the packaged dlls that are part of the NuGet package in the version we referenced, and not the latest checked in version. A NuGet reference also makes things a bit easier when it comes to managing the workarea for the TFS Build Service, as one only has to make sure the NuGet package is available (either checked in as part of the solution or by using Package Restore).

But …

NuGet doesn’t support BizTalk projects!

As discussed here, NuGet currently doesn’t support .btproj files. As BizTalk project files are basically standard .NET project files with some extra information, a two line change in the NuGet whitelist is all that is needed, as in this pull request.

So the main issue is that by not having full support for .btproj files Package Restore won’t work, and we’re for now forced to check in all the NuGet packages as part of our solutions. Another minor issue is that the token replacement feature also doesn’t work. I also think that if we could actually get full BizTalk support we’d see more BizTalk specific open shared packages for things like useful libraries and pipeline components.

Call for action: Reach out to the NuGet Team or other Microsoft connections and let’s get those two lines added to the white list!

Posted in: •Integration  | Tagged: •BizTalk  •NuGet 


London BizTalk Summit 2014 Wrap Up

BizTalk Summit 2014 was a fully packed event with many interesting speakers. BizTalk360 and Saravana Kumar did a good job putting this together. After two intense days, we arrived back in Sweden with new contacts in our network and new knowledge about techniques and upcoming features.

No less than 12 integration MVPs were represented at the sessions. On top of that, Harish and Guru from Microsoft opened the summit by presenting some of the news around BizTalk 2013 R2 and BizTalk Services. Taken as a whole, the subject focus was as broad as it can get within the narrow integration area. Among the covered areas were BizTalk360 (of course), WABS and Azure Mobile Services, but also a fair share of sessions covering softer subjects at a higher abstraction level.

First of all, we are all happy to once again see a united front from Microsoft stating that BizTalk is a product to rely on and will be around for many years ahead. Microsoft describes it as reality and points out that it is a vital part of the hybrid solutions that are now also reality. They also stress that it is still an important product as an on-premises platform, which is how it is often used by many customers. The release plan is one release per year, alternating between major and minor (R2) releases. Guru Venkataraman (Senior Program Manager, BizTalk Product Team, Microsoft) also revealed that there will be around 7800 tests just for the BizTalk engine. The large number of tests may sound insane but, as Guru explained, it is necessary since they need to maintain and support the heavy backpack BizTalk has been building up since back in 2000.

With that said, let us move on to the fun parts; a wrap-up of the sessions!

 

BizTalk @ Microsoft and upcoming server releases

The first session, opening the summit, was held by Guru, who presented the new features coming in BizTalk 2013 R2. The largest effort has been put into optimizing compatibility, but some new features will be released:

  • Native JSON support
  • JSON Schema wizard
  • Support for empty message when working with REST
  • Platform alignment with Windows Server 2012 R2 and SQL Server 2012 R2

The long awaited native JSON support is very welcome by the BizTalk community, which is now mostly using third party components to achieve this. That said, I clearly agree with Richard Seroter’s (Integration MVP, in his session A look at choosing integration technology) statement that the idea of using XSD schemas for JSON is somewhat absurd, since it really destroys the whole idea of JSON. However, with the architecture BizTalk is built on today, there is no choice but to implement it this way to be able to support transformation between messages in a btm mapping. Another new feature regarding REST is the processing of empty messages using an empty schema. You heard me, an empty message! It may sound strange but it is in fact a very powerful feature. Guru had a great demo showing that this schema solves the whole problem the REST adapter had earlier when no message body was sent with a REST GET request, which is just a simple URL.

Some other features added in BizTalk 2013 R2 are requests from the American health care sector. When he talked about this, the thing that struck me most was the new XML field data type FreeText. This data type literally makes BizTalk stop parsing that field and say “Hey BizTalk, what’s in this field is none of your business”, even if the field contains XML or invalid characters. I love loose coupling, so let’s hope this field is as useful as I think it could be. Planned release is June 2014.

 

Windows Azure BizTalk Services – Latest Updates

The latest news about WABS and Service Bus features was presented by Harish Kumar Agarwal (Program Manager, BizTalk Product Team, Microsoft). Some of the most useful new features presented are support for EDI and that WABS now can receive messages from Service Bus queues and topics.

As a short and neat presentation of this, they simply took the EDI parties from BizTalk and put them in the cloud. They have tried to simplify the process and kept the most used parts visible and close together; the rest went under the “Advanced” section so as not to clutter the interface. Harish showed us how the setup is done and demonstrated how easy it is to add and manage parties. Since this is done in the portal, it is a lot easier to distribute the management of the parties to the users who really should administer these things.

 

Manageability of Windows Azure BizTalk Services

How about maintaining solutions in WABS? Steef-Jan Wiggers (Integration MVP, Author) had a lot of good input on this, and as with solutions overall, the key is good source control management and good administration options. So let’s take the scenario where the source control part is covered and start to investigate how to maintain the deployed solutions. As it works today, the main tool you would actually use is Visual Studio, which might seem a bit odd since we are used to having some sort of administration tool as well. At the moment we cannot edit, but we can read the settings of, for example, adapters in the Azure Portal, which might not be the best solution but it works. The REST API and PowerShell can be used for managing the solutions: stopping, starting, deploying etc.

 

How to move BizTalk Services

So what if you would like to start moving some of your current integrations in BizTalk Server to the cloud using WABS?

Jon Fancey (Integration MVP, Author) showed us in a simple demo that migrating EDI parties and agreements is quite trivial. Migrating ports is fairly easy (as long as they are of the adapter types supported by WABS), as is migrating pipeline components to bridges and migrating maps, except for some minor necessary changes. How about orchestrations then? First of all, orchestrations are not part of WABS; they are replaced by workflows. You may question this: what, I cannot do these awesome cool orchestrations anymore? And the answer is: NO! Though that is not entirely true, since the new workflow has been added instead. Jon was not entirely pleased with this, as it means it will be hard to justify to a customer not migrating an orchestration to a workflow. WABS does not come with any tool for migrating this, so he wanted to find a solution by actually building a converter from orchestration to workflow.

The same technique also enabled him to fully migrate pipelines to bridges in WABS. The workflow converter is not fully developed yet, but it will be, and his demo clearly showed that it will be possible to convert a really advanced orchestration into a workflow and still keep almost the same functionality. Even though I might not be the biggest fan of workflows, this approach might become one of the most common ways to solve migration problems, and it will probably bring the most value in terms of cost per migration. Still, my opinion is that it is better to use the new platform the way it is supposed to be used and in the most optimal way. Always use and take advantage of the framework, don’t fight it!

 

What if you mess up the configuration?

Tord Glad Nordahl (Integration MVP) had a memorable session on the first day, where he clearly stated that all developers are evil and lazy, using real examples in a humorous way. Personally, I can’t agree with that, not all the way at least. But I do agree with him that things like a good host instance strategy and a good level of logging make life easier and the solutions much more maintainable. A great admin who knows what he is doing will make the BizTalk server so much happier and healthier. He pointed out that he often sees problems with the disaster recovery (DR) model, which might be “forgotten” or just handled poorly. He presented examples such as someone backing up their MessageBox database and then, two hours later, backing up the configuration database. What are the consequences of this? Well, you do backups, congrats! But has anyone actually tested that the backups can be restored? Probably not, and this setup is doomed to fail: the consistency is lost and you will run into problems when you try to run a DR. So basically you’re screwed. His real life examples really lightened up the room and everyone had a good laugh, although almost everyone there was a developer.

 

BizTalk 2013 in Windows Azure IaaS

Still on the subject of BizTalk Server 2013, we saw a pretty amazing demo by Stephen W. Thomas (Integration MVP) of how to automate the creation of complete BizTalk environments, with all the complex setup done automatically! I mean, who hasn’t dreamt of creating a complete environment with two clustered SQL Servers and three BizTalk nodes with one button click, with the exception of a configuration file to change values in? Azure brings these great possibilities thanks to easy virtualization and powerful APIs. Both PowerShell and the REST API have some really great strengths.

With this, Stephen demonstrated the strengths of Azure in a clear and simple way, and showed that really powerful and useful things can be created with small means. Just think about how hard some complex test cases are to create, where you need multiple BizTalk servers and a fully functional clustered SQL Server, just as you have in production. Before Azure I would say almost impossible (unless you have all the time in the world), and even with the Azure portal it still takes a lot of time to create all the VMs, do the configuration and so on.

So Stephen’s work ended up as a couple of PowerShell scripts and one config file with three parameters, and he could create all of this in one click! With this we could actually do these tests: just use a script to create the environment, create the tests, run them and verify the result. Kill the machines, delete them and move on, just like it was a simple test case! Well, true, not all that simple, but it is now doable. This is some really awesome stuff and I look forward to seeing more of it and being able to create all kinds of messed up test scenarios.

 

Thinking like an Integration Person

At the Summit, Nino Crudele (Integration MVP) was on fire during his session on a Visual Studio tool for BizTalk that he has developed himself. He was literally so excited he could not stand still. He has created a set of extended tools for Visual Studio focused on speeding up the development process. I especially liked the tools he created for integration solution overview and automated documentation. Inside Visual Studio you could right click on an orchestration file or project and generate some really nice documents that described the process along with a lot of metrics. There was also tuning of the build process, dependency tracking and extraction of code out of BizTalk Server using a dll file! Awesome! He marked an orchestration and said “compare it”, and the tool compared the orchestration installed in BizTalk with the one in Visual Studio; if there were differences it downloaded the dll file and extracted the source code of the orchestration. From this, you could manually check the differences and make corrections. This was actually possible to do with all artifacts in BizTalk.

Talk about a useful tool for all those lost-project cases where several developers are involved and everyone thinks “uhm, I think this is the source code”. With this we could actually compare the source code with the installed code and, if there is a mismatch, decompile it out of BizTalk and continue developing from the decompiled code. Another interesting part was the automated documentation, which was actually quite good: useful metrics and readable documentation, either for entire projects or just for a specific orchestration. Guru stated that the tool is wanted and that he wants to add it to the Developer Toolkit distributed by Microsoft later this year, hopefully together with the R2 release. Another useful feature was the possibility to test pipelines and pipeline components in the same way you test maps inside Visual Studio today. This might be really useful as a complement to other tools, but I would still recommend using the Winterdom unit test solution in addition, as you then build tests that are long-lived and can be really useful during the whole lifecycle of the pipeline or pipeline component.

 

BizTalk Server Operations and Monitoring using BizTalk360

We also saw a great presentation of the BizTalk360 product, which is an enhancement of the BizTalk Administration tool. They have not only improved the user experience but also added a lot of nice features like auditing of all changes and an improved security and authorization model, so you can give a user read-only permissions for specific applications. Saravana Kumar (Integration MVP) also demonstrated the monitoring and alarm functions available, but the most impressive part was that with minimal effort you could access the portal directly via Azure: by logging into the cloud with your Live ID you could access multiple BizTalk360 instances that technically can be hosted anywhere in the world. That is pretty amazing and could be very useful!

 

Real world Business Activity Monitoring

Dan Rosanova (Integration MVP, Author) had a great presentation on BAM and how it can be used to give the business insight into, and status reports on, what’s happening in the integration flows. Obviously this is not news, but he gave some very useful examples and tips on how to use BAM in a way that is understandable by administrative staff. I have seen and tried BAM before, but Dan made it look cool and interesting; he pointed out some obvious but still very nice areas where this could be very useful and give more insight into the area of integration. He also pointed out that for most people in an organization integration is just a black box, and with the right tools we could enlighten them about what is happening or even make them understand integration a bit more. In this way, integration can be brought to the table again with a richer life and, as he said, “get the business people off our backs, living happily with their user friendly tracking tool”.

 

Exposing Operational data to Mobile devices using Windows Azure

Kent Weare (Integration MVP, Author) talked about this topic. As we all know, mobility is a smoking hot thing on the market and everyone is talking about it, but what does it have to do with BizTalk, WABS and Service Bus? Easy! Mobile apps need integration as a part of the infrastructure, otherwise they would be quite boring, living there alone on the mobile device. Admittedly, it is not quite that simple; they do know how to communicate with a server so they can get information. But there is still a gap from an integration point of view, especially when integrating a corporate app with an ERP system. We need to help the mobile developers so that they can work against a single API or endpoint, and then let integration specialists do the work of connecting all the systems together. The coolest and greatest part, however, is that the Service Bus really helps us do this in an easy way. They have now added C# support for programming Mobile Services, which means you are not forced to learn node.js. They have given the power back to us! As we all know by now, the Service Bus Relay enables communication with LOB and legacy systems and makes communication flow seamlessly in and out of firewalls in an extremely secure and controlled manner. Thus, integration will be a key and play a bigger role in the future of mobility. We might need to speed up the process and make changes on the fly, but we need the people who understand how things work.

 

When to use what: A look at choosing Integration Technology

Richard Seroter (Integration MVP, Author) had a session about management and different integration technologies, focusing on when and how to use them. His focus was core Microsoft technology such as Service Bus, BizTalk 2013, WABS etc. The perspective in which he presented this was “Is this right for me?”, “Do I have people who understand this technique?”, “Is this technique mature enough?”, “How can we maintain the solutions we create?” and other important questions to address before choosing technology. A lot of the arguing can start like this: “This technique is cool, let’s try it!”, and later on, when you have 15 different technologies and platforms, I bet no one wants to be responsible for maintaining all that. So from what I see, it is very important to understand your organization and your current products, and use what has the most value for the organization. Another key question is how steep the learning curve is: how long does it take for my team to learn these new technologies? Is this technology strong enough to survive the next 5-10 years? These questions should be addressed before asking whether we should use BizTalk or WABS. Or both? How effective would it be if we had five C++ developers who suddenly had to learn node.js?

The best plan, as Richard put it, is to see the potential in your organization, look at what kind of developers you have, make sure you believe in the product, and not be afraid of mixing a few of them. Just make sure to have good reasons for bringing new ones in, and not just because they are new and cool. For example, the Service Bus makes life a lot easier when communicating in and out over firewalls, so it’s a great addition to BizTalk.

 

Master Data Services

Johan Hedberg’s (Integration MVP) session was about a very common and hard to solve topic: master data. It doesn’t matter if it is customer data or employee data etc.; I think we have all faced this problem in some way. Master data problems are always hard to solve, and often very complex and hard to maintain solutions are developed for them. Johan demonstrated Master Data Services, a service on SQL Server for keeping track of and managing master data. Basically the idea is to create a data type and an entity, where the data type is the name and the entity represents what the data looks like. The service then creates tables and views for the entities, and on top of this data validation rules can be added for automatic validation. When adding data we just populate the staging table of the entity and it will be imported and later validated. For pulling or sending data we read from a view with some filter, pulling only the new, updated or otherwise interesting data. In this way we could set up event pushing, or just pull the whole table for a full push to other systems. This is an easy job for BizTalk 2013! A simple, neat and surprisingly powerful solution.

 

This blog post is written by Mattias Lögdberg, Therese Axelsson and Robin Hultman.

 

Posted in: •Integration  | Tagged: •BizTalk Summit 


Simplified usage of shared BizTalk artefacts using NuGet

Most BizTalk projects have a lot of dependencies! This is because an assembly is the smallest deployable unit in BizTalk, and we typically do not want to redeploy all our integrations just because we updated one map in one of them. When one artifact needs to use another artifact, it is said to be dependent on that artifact. If the artifacts are located in different assemblies, we have an assembly reference.

There is a ton of material on how to handle these dependencies when it comes to deployment, but one of the big annoyances with having a lot of dependencies is just getting the solution to build fresh from source control.

A typical implement-fix-in-existing-project workflow could look something like this:

  1. Clone the project from the source control repository.
  2. Build
  3. < Get build errors due to missing dependencies >
  4. Clone all dependent source control repositories
  5. Build all dependent projects.
  6. Build the project that we are supposed to update (crossing fingers that we didn’t make any mistakes in any of the above steps).
  7. Implement fix
  8. Commit changes.

It is not uncommon that these steps take more time than the actual fix, especially if there are a lot of dependencies.

Like most other development teams we use a build server where built, tested and versioned assemblies are stored. So why do I need to download and build code when I only need the binary?

The ideal thing would be if we could pull down the project we need to update and have all dependencies automatically restored as pre-built, versioned assemblies from the build server.

NuGet to the rescue?

Note that BizTalk projects are not currently supported by NuGet. I have opened a pull request with support for BizTalk projects: https://nuget.codeplex.com/SourceControl/network/forks/robinhultman/NuGetWithBizTalkProjectExtension/contribution/5960. This example uses my private build of NuGet.

When NuGet is mentioned people often think of public feeds for open source libraries but you could just as well use it internally with a private feed.

NuGet-enabling a BizTalk project is easy. NuGet will use information from the AssemblyInfo, but to add some more metadata about our artifact we can add a nuspec file (a minimal sketch is shown below). The rest is done on the build server.
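As a sketch, a nuspec for a shared schema package could look something like this (the package id matches the example output later in the post; author, description and paths are illustrative, and $version$ is resolved from AssemblyInfo on the build server):

<?xml version="1.0"?>
<package>
  <metadata>
    <id>Demo.Shared.Schemas.EFACT_D93A_INVOIC</id>
    <version>$version$</version>
    <authors>Integration Team</authors>
    <description>Shared EDIFACT D93A INVOIC schemas.</description>
  </metadata>
  <files>
    <!-- Package the built BizTalk assembly so consumers reference the ready-made dll -->
    <file src="bin\Release\Demo.Shared.Schemas.EFACT_D93A_INVOIC.dll" target="lib\net40" />
  </files>
</package>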

We use TeamCity as build server. TeamCity has support for packing and publishing NuGet packages to both private and public feeds. TeamCity is also able to act as a NuGet feed server.

Our build process looks like this:

With the package published to our NuGet server we can simply reference it from the “Manage NuGet Packages” dialog in Visual Studio:

If the NuGet Visual Studio setting “Allow NuGet to download missing packages” is enabled, NuGet will, as the label indicates, download all missing packages automatically. So with this in place the implement-fix-in-existing-project workflow will look like this:

1. Clone the project from the source control repository.
2. Build < Visual Studio automatically restores dependencies from the build server >.
3. Implement fix.
4. Commit changes.
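Behind the scenes the dependency is recorded in a packages.config file next to the project file, which is what Package Restore reads when downloading the missing packages in step 2. A minimal sketch, using the package id from the build output below (the targetFramework value is illustrative):

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <!-- The exact package version the project was built against -->
  <package id="Demo.Shared.Schemas.EFACT_D93A_INVOIC" version="1.0.0.0" targetFramework="net40" />
</packages>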

If we investigate the MSBuild output in Visual Studio from step 2 we can see that the dependencies are downloaded from the NuGet feed.

Restoring NuGet packages… To prevent NuGet from downloading packages during build, open the Visual Studio Options dialog, click on the Package Manager node and uncheck ‘Allow NuGet to download missing packages’.
Installing ‘Demo.Shared.Schemas.EFACT_D93A_INVOIC 1.0.0.0’.
Successfully installed ‘Demo.Shared.Schemas.EFACT_D93A_INVOIC 1.0.0.0’.
Demo.CustomerInvoice.Transforms ->
C:\Projects\Demo\CustomerInvoice\Transforms\bin\Debug\Demo.CustomerInvoice.Transforms.dll

If one of our dependencies gets updated, NuGet detects that and enables us to update the reference.

If you think this could be useful, please upvote my pull request on the NuGet project:

https://nuget.codeplex.com/SourceControl/network/forks/robinhultman/NuGetWithBizTalkProjectExtension/contribution/5960

To read more about creating and publishing NuGet packages:

http://docs.nuget.org/docs/creating-packages/creating-and-publishing-a-package

More about package restore:

http://docs.nuget.org/docs/reference/package-restore

More about nuspec files:

http://docs.nuget.org/docs/reference/nuspec-reference

Posted in: •Integration  | Tagged: •BizTalk  •NuGet  •TeamCity 


Export BizTalk Server MSI packages directly from Visual Studio using BtsMsiTask

Getting a full Continuous Integration (CI) process working with BizTalk Server is hard!

One of the big advantages of a working CI process is to always have tested and verified artifacts from the build server to deploy into test and production. Packaging these build resources into a deployable unit is however notoriously hard in BizTalk Server, as a Visual Studio build will not produce a deployable artifact (only raw dlls). The only way to get a deployable MSI package for BizTalk Server has been to first install everything into the server and then export – until now.

Why Continuous Integration?

Continuous Integration is a concept popularized by Martin Fowler; his well-known article on the subject was last revised in 2006. At its core it is about team communication and fast feedback, but it often also leads to better quality software and more efficient processes.

 

A CI process usually works something like the following.

1. A developer checks in code to the source control server.

2. The build server detects that a check in has occurred, gets all the new code and initiates a new build while also running all the relevant unit tests.

3. The result from the build and the tests is sent back to the team of developers and provides them with an up to date view of the “health” of the project.

4. If the build and all the tests are successful, the built and tested resources are written to a deploy area.

As one can see, the CI build server acts as another developer on the team, but always builds everything on a fresh machine and bases everything on what is actually checked in to source control – guaranteeing that nothing is built using artifacts that for some reason are not in source control, and that no special setting etc. is required to achieve a successful build.

In step 4 above the CI server also writes everything to a deploy area. A golden rule for a CI workflow is to use artifacts and packages from this area for further deployment to test and production environments – and never directly build and move artifacts from developer machines!

As all resources from each successful build are stored safely and labeled, one automatically gets versioning and the possibility to roll back to previous versions and packages if needed.

What is the problem with CI and BizTalk?

It is important to have the build and feedback process as efficient as possible, to enable frequent check-ins and to catch possible errors and mistakes directly. As mentioned, it is equally important that the resources written to the deploy area are the ones used to deploy to test and production, so one gets all the advantages of versioning, roll back possibilities etc.

The problem with BizTalk Server however is that just building a project in Visual Studio does not give us a deployable package (only raw dlls)!

There are a number of different ways to get around this. One popular option is to automate the whole installation of the dlls generated in the build. This not only requires a whole lot of scripting and work, it also requires a full BizTalk Server installation on the build server. The automated installation also takes time and slows down the feedback loop to the development team. There are however great frameworks, for example the BizTalk Deployment Framework, to help with this (this solution of course also enables integration testing using BizUnit and other frameworks).

Some people would also argue that the whole script package and the raw dlls could be moved onto test and production and be viewed as a deployment package. But MSI is a powerful packaging tool and BizTalk Server has a number of specialized features around MSI. As MSI is also simple and flexible, it is usually the solution preferred by IT operations.

A final possibility is of course to add the resources directly, one by one, using the BizTalk Server Administration console. In more complex solutions this however takes time and requires deeper knowledge of the solution, as one manually has to know in what order the different resources should be added.

Another option in BtsMsiTask

Another option is to use BtsMsiTask to generate a BizTalk Server MSI directly from the Visual Studio build using MSBuild.

 

BtsMsiTask uses the same approach and tools as the MSI export process implemented in BizTalk Server, but extracts it into an MSBuild task that can be executed directly as part of the build process.

BtsMsiTask enables the CI server to generate a deployable MSI package directly from the Visual Studio based build without having to first install into BizTalk Server!
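A rough sketch of how such a packaging step could be wired into an MSBuild file is shown below. Note that the task name, attribute names and paths here are assumptions made purely for illustration; check the BtsMsiTask documentation for the actual task name and parameters.

<!-- Illustrative only: task and attribute names are assumptions, not the documented API -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- Make the task from the BtsMsiTask assembly available to MSBuild -->
  <UsingTask TaskName="BtsMsiTask.MsiTask" AssemblyFile="tools\BtsMsiTask.dll" />

  <Target Name="PackageBtsApplication" AfterTargets="Build">
    <!-- Generate a BizTalk Server MSI from the freshly built assemblies -->
    <MsiTask ApplicationName="Demo.CustomerInvoice"
             Destination="$(OutputPath)Demo.CustomerInvoice.msi" />
  </Target>
</Project>

Run as part of the build, this produces an MSI alongside the other build output that can be written to the deploy area like any other artifact.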

 

This post is a cross-post and was originally posted here

Posted in: •Integration  | Tagged: •BizTalk  •BtsMsiTask  •Continuous Integration  •Visual Studio 


iBiz Solutions rated as a Super company also in 2013

Bisnode (formerly PAR), in association with the business magazine Veckans Affarer, has selected the Super Companies of 2013. This year only 366 of ALL Swedish incorporated companies with a turnover of 10 million SEK (around 50,000 companies in Sweden) meet the razor-sharp requirements for becoming a Super Company.

Super Companies - Bisnode in association with Veckans Affarer

The selection of Super Companies is a cooperation between Bisnode and Veckans Affarer, where Bisnode’s analysts have developed a model to identify the Swedish Super Companies. The selection and ranking is based on the Swedish companies’ economic performance over the past four years. The model takes into account and weighs each company’s:

  • Growth
  • Earnings
  • Return
  • Efficiency
  • Capital Structure
  • Financing

 

“I am very happy and very proud that we received the award as Super Company in 2013, just as in 2012”, says Allan Hallen, CEO at iBiz Solutions.

This sends clear and positive signals to our customers, our partners and the market in general that we are doing a lot of things right, and have done so for a long time. It is of course also an internal receipt, and crystal clear evidence that our clear focus, our charted path and our way of working are successful. iBiz Solutions has a clear, long-term goal of becoming No. 1 in the Nordic countries in the field of integration, and we are already one of the very best.

We have good cooperation with many large, well-known and exciting clients in the Nordic countries, which undoubtedly shows that we are well qualified and at the forefront in the field. Last year several new exciting customers joined us, and existing customers gave us their continued confidence. Our heavy investment in partnerships with both Microsoft and Tibco is of course still important, and here we are already one of only a handful of players who have received the highest partner status, the Gold level, with these strong and well-known companies. We will strengthen and intensify the cooperation with Microsoft further in 2014, with new and even more exciting engagements and deals.

There are of course many vital success factors; our very competent, committed and talented staff, who contribute greatly to creating what many aspire to, maintainable integrations, is one of them. Working in a company with ambitions and challenging, clear growth goals is, in our experience, much more fun than the alternatives … and it shows in the exciting applications we receive from top-of-the-line consultants with ambitions of their own. iBiz Solutions is today a clear and strong option when it comes to attracting the right consultants and applications in the field of integration, and we are always looking for more colleagues for our various offices. Both we and our customers draw great value and benefit from the concepts and tools that we have developed and refined over time: we make a difference in our commitments and deliveries.

“I know from previous years that Veckans Affarer’s evaluation criteria are very tough, and that the eye of the needle is extremely small, so it is especially exciting that iBiz Solutions is the only IT company in Karlstad that meets the requirements for Super Company 2013. That iBiz Solutions is also a Gasell 2013 is nice, but the requirements for Gasell are not even close to those for Super Company.

Our ambition is to be a Super Company in the years ahead as well, for the benefit and value of employees, customers and partners now and in the future”, concludes Allan Hallen, CEO at iBiz Solutions.

 

Please read more about the Super Companies at: http://www.va.se/temasajter-event/foretagande/superforetagen/hela-listan-superforetagen-2013-564105

Posted in: •About iBiz Solutions  | Tagged: •Bisnode  •Gasell  •Super Company  •Superforetag 2013  •Veckans Affarer 


Problem building BizTalk solutions with standalone MSBUILD

We are in the process of setting up a build server for BizTalk 2010 for one of our customers. Naturally we wanted an installation that was as clean as possible, and we were very pleased to find out that we only needed the “Project Build Component” from the BizTalk installation, and not a full-blown BizTalk or even Visual Studio, to build the BizTalk projects. The description of the project build component clearly states that “Project Build Component enables building BizTalk solutions without Visual Studio”, and MSDN confirms that no other component should be needed: http://msdn.microsoft.com/en-us/library/dd334499.aspx.

We installed the .NET SDK and the project build component and started testing.

We started off by building some shared schema solutions and it worked just fine, but when we tried building a project with some more BizTalk artifacts (schemas, mappings and pipelines), boom!

The “MapperCompiler” task failed unexpectedly. System.IO.FileNotFoundException: Could not load file or assembly ‘Microsoft.VisualStudio.OLE.Interop, Version=7.1.40304.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a’ or one of its dependencies. The system cannot find the file specified. File name: ‘Microsoft.VisualStudio.OLE.Interop, Version=7.1.40304.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a’.

That is a bit odd since “Project Build Component enables building BizTalk solutions without Visual Studio” and yet it complains about a missing Visual Studio assembly?

Unwilling to give up and install Visual Studio, we started investigating what would break the build. If we removed the mappings and just built the schemas and pipelines, it worked. If we added one of the xml-schema-to-xml-schema mappings, it worked. If we added one of the flat-file-schema-to-xml-schema mappings, boom! I am not sure why a flat file schema is different from an xml schema from the mapping’s perspective, but apparently it is.

Eventually we decided to install Visual Studio and BizTalk Developer Tools on the build server and now we are able to build this project as well.

This is not that big of an issue for us, since we didn’t have to install and configure a full-blown BizTalk with a SQL Server, but the MSDN article is misleading and it took us a while to give up and install Visual Studio.

 

Posted in: •Integration  | Tagged: •BizTalk 2010  •Continuous Build  •MSBuild 


Bug in BizTalk EdiAssembler?

I got an email from a customer telling me that the UNB segment in our EDIFACT messages was invalid according to the EDIFACT specifications. I thought that sounded rather strange since the UNB segment is generated by BizTalk based on the party settings.

The UNB segment we sent looked like:

UNB+UNOC:3+XXXXXXXXXXXXX:14+YYYYYYYYYYYYY:14+130627:2103+16++++0++1

The 0 near the end of the segment above is data element “0031 Acknowledgement request”, a flag managed on the “Acknowledgements” tab in the BizTalk party settings.

0 = off, 1 = on sounds fair, right? Well, not quite.

According to the EDIFACT specifications valid values of Acknowledgement request are 1, 2 and blank.

http://www.gefeg.com/jswg/cl/v4x/40202/cl6.htm

MSDN documentation states the following:

UNB9 (Acknowledgment request)

Enter a value for the Acknowledgment request. This value can be 0 - ACK not required, 1 - Acknowledgement requested, or 2 - Indication of receipt. Selecting 1 prompts generation of a CONTRL message as functional acknowledgment. Selecting 2 prompts generation of a CONTRL message as technical acknowledgment.

This field can only be a single-digit number. The default value for the field is 0 - ACK not required.

From http://msdn.microsoft.com/en-us/library/bb246092(v=bts.20).aspx

Clearly a mismatch between BizTalk and the EDIFACT standards.

I haven’t had the chance to try this myself, but according to this forum post the ‘0’ was introduced in BizTalk 2010:

http://social.technet.microsoft.com/Forums/en-US/92ce0b55-fc0c-4def-a55d-413b9717cc65/edifact-unb-segment-bts-2010

 

Why did Microsoft change this?

 

Posted in: •Integration  | Tagged: •BizTalk  •EDIFACT 


Know your Now – Real-Time Business Intelligence

The majority of today’s traditional Business Intelligence (BI) solutions are based on an Extract Transform Load (ETL) approach. This means that data is extracted from a number of sources, transformed into the specific format needed and then loaded into the business intelligence tool for further analysis.

An ETL task can involve a number of individual steps such as communicating with a number of sources, translating encoded values, calculating values, merging data from several sources, calculating aggregations etc. ETL processes also require saving all raw data in each source so that each scheduled ETL run can perform the transformations needed, for example calculating aggregations, consolidating data and so on. The problem with this approach is the latency it adds before current data can be loaded, displayed and analyzed in the actual BI tools. It’s a heavy and expensive process that limits the possibilities of showing data and analyses that are close to real-time.

In today’s competitive environment with high consumer expectations, decisions that are based on the most current data available will improve customer relationships, increase revenue, and maximize operational efficiencies. The speed of today’s processing systems has moved classical data warehousing into the realm of real-time. The result is real-time business intelligence (RTBI).

To achieve RTBI one has to rethink the ETL process and find ways of tapping into the data streams and feeding business transactions, as they occur, to a real-time business intelligence system that maintains the current state of the enterprise.

Another important part of the RTBI puzzle is Complex Event Processing (CEP) tools. CEP tools are specialized in analyzing big data streams in real time. They make it possible to do complex analysis at the same time as the data is read and written to the real-time business intelligence system, and not as a step in a slow and heavy ETL process.

Real-time business intelligence is also known as event-driven business intelligence. In order to react in real-time, a business intelligence system must react to events as they occur – not minutes or hours later. With real-time business intelligence, an enterprise establishes long-term strategies to optimize its operations while at the same time reacting with intelligence to events as they occur.

RTBI might sound like science fiction to many, but the technology to make it happen is already here today.

Posted in: •Integration  | Tagged: •Architecture  •BI 


Exposing a REST GET endpoint using BizTalk Server 2013

In the newly released BizTalk Server 2013 we finally got an adapter for WCF WebHttpBinding, enabling us to expose and consume REST endpoints. BizTalk has long had strong support for SOAP services (the SOAP adapter and later the WCF-* adapters). One major difference between SOAP and REST is that a SOAP message will always contain a message body and a message header, making it easy to map the SOAP body to a typed message deployed in BizTalk. A REST message does not necessarily contain a body; in fact, a GET request consists only of a URI and HTTP headers.

So how do we translate this type of request to the strongly typed world of BizTalk?

Let’s start by looking at an example where we want to be able to query a database for product prices via BizTalk.

 

If we were to expose a SOAP service to the client, the solution could look something like this (irrelevant implementation details excluded):

 

The client sends a SOAP message to BizTalk. The adapter extracts the message from the body. The message is passed on to the receive pipeline where the message type is determined. Since the message type is known we can perform transformations on it on the send port.

That is old news, bring on the new stuff.

When the client wants to GET something from the server in a RESTful way it uses the HTTP GET verb on a resource.

GET http://localhost/ProductService/Service1.svc/Products/680 will instruct the server to get product 680. Note that there isn’t any message body, just a verb and a resource identified by a URI.

So how can we consume this in BizTalk?

The WCF-WebHttp adapter does some of the heavy lifting for us and extracts the parameters from the URI and writes them to the message context.
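On the receive location the adapter is configured with an operation mapping. A minimal sketch of what that BtsHttpUrlMapping could look like for this example (the operation name is chosen for illustration):

<BtsHttpUrlMapping>
  <!-- Listen for GET requests on /Products/{ProductId} and capture the id from the URL -->
  <Operation Name="GetProduct" Method="GET" Url="/Products/{ProductId}" />
</BtsHttpUrlMapping>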

 

The BtsHttpUrlMapping XML tells BizTalk what URIs, and what verbs on those URIs, we want to listen on. In the URL template we have defined a placeholder for a variable inside curly braces. That is what enables us to ask for a specific product:

http://localhost/ProductService/Service1.svc/Products/680

If we click the Edit button in the Variable Mapping section we can map this placeholder variable to a property defined in a property schema.

So we need to define a property schema for our ProductId property and promote it from our request schema.
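A stripped-down sketch of such a property schema is shown below (the namespace is illustrative; in practice the BizTalk schema editor generates the schema, including the property GUID annotations):

<!-- Sketch of a property schema exposing ProductId as a context property -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:b="http://schemas.microsoft.com/BizTalk/2003"
           targetNamespace="https://Demo.ProductService.PropertySchema"
           xmlns="https://Demo.ProductService.PropertySchema">
  <xs:annotation>
    <xs:appinfo>
      <!-- Marks this schema as a property schema rather than a document schema -->
      <b:schemaInfo schema_type="property" />
    </xs:appinfo>
  </xs:annotation>
  <xs:element name="ProductId" type="xs:string" />
</xs:schema>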

 

If we change our WCF-WSHttp port to use the WCF-WebHttp adapter with the above configuration and update our client to send REST requests, the client will receive an error and there will be a suspended message in BizTalk saying “No Disassemble stage components can recognize the data.”

As expected the message body is empty and our ProductId is written to the context. The empty message body causes the XMLDisassembler to fail.

What we need is a custom disassembler pipeline component to create a strongly typed message from the context properties written by the adapter.

The interesting part here is of course the custom disassembler component. It has one parameter, DocumentTypeName. This property is where we define the schema of the message it should create.

 

If we run the previous request through this component we now get a message body:

Posted in: •Integration  | Tagged: •BizTalk 2013  •REST 


The Cloud Series – Part 3

This is my third post in this series, and this time let’s talk about development and cloud native applications, to be more precise web applications.

I had a very interesting chat with a friend/colleague of mine the other day about an application they had. There were some problems with their current setup, one big problem being scaling, so they decided to rebuild the solution and host it in the cloud. The interesting part here is that the solution in my head differed a lot from what they created, so I would like to take the opportunity to compare them here.

There are two ways of running web applications in Azure. First of all we have the traditional way where we let Azure host our own Virtual Machine (from now on referred to as VM), which means that we can create a VM that runs any Windows or Linux version, configure it as we want, install what software we want, tweak the IIS/Apache servers and so on. Then we have the “cloud native” way where we use web roles in Azure and just upload the code and Azure fires it up. Did you know that you can upload source code directly from Git? Just a simple remote push command and Azure will compile the code for us. How cool is that! When we use web roles we don’t need to install or configure anything in the OS or IIS environment, it just works!

So what does this mean?

- A VM gives us the flexibility to tweak and configure it to match our needs 100%. This is the IaaS (Infrastructure as a Service) part of Azure, and it relieves us of all hardware, network etc. concerns since Azure and Microsoft handle that for us.

Web roles, the “cloud native” way, use PaaS (Platform as a Service), which is the next level of service. It relieves us of all of the above plus all OS handling: no installation, no configuration, no updates, nothing! They say “Take care of your application, we do the rest”, yet we are still able to configure IIS or Windows or install services and so on in the Startup Tasks. But it lets us focus on what we developers do best: developing applications. Hopefully it gives us more time to improve and maintain the applications instead of handling boring patching, OS updates, other updates etc.!

When choosing what to run and how, there is a lot more to think about than the areas mentioned above, since we sometimes need the possibility to install third party components in our OS, or we need special configurations and setups in our IIS/Apache installation and so on. Every time we need to use or configure things outside of the application itself, it might be a better choice to use a Virtual Machine.

So now that we have a little background, let’s set up the context. The application is a plugin that can be used anywhere on a webpage; the idea is to create dynamic video content that gives the client user the possibility to start streaming a small video after a click on the “start” button. For fun, let’s say that the application is used on a big Swedish newspaper site, www.aftonbladet.se, and that we got it plugged in on one of the top news stories. Studies have shown that when this has been done before, the payload looks something like the chart below. (The chart shows average values/hour over the day, and the values are completely made up, but I want to show you how big the difference is between the biggest peak and the smallest, and give us something to plan around.)

Now that we have a scenario, let’s solve it in two different ways: one using Virtual Machines and one using only Azure cloud parts. Basically the solution would look like this.

To start off we need a web application that handles this:

  • provides the site with the content
  • gives access to the movie
  • presents the movie

In the background we would need something like this:

  • Place to store movies
  • Database to store information/logs/configurations etc.

Let’s set this up with Virtual Machines: we use Windows Server and IIS on our web application machines. As a backend we set up Virtual Machines with clustered NoSQL databases. We use the load balancer in Azure to balance the traffic between our web servers, and Azure Media Services to provide the movies. It could look something like this (the orange boxes are the parts we are responsible for):

With this setup we could easily add new Virtual Machines wherever we hit a threshold first, either at the database or at the web application. This gives us a good setup and a lot of scaling potential, flexibility and possibilities. But we need to keep the OS, IIS, NoSQL server etc. updated and upgrade to new versions, with all the problems that come along with that. Sure, some of these things happen automatically, but the responsibility is ours. We also have the responsibility to make sure that the machines are healthy and run without errors; Azure will only provide us with the assurance that the Virtual Machine runs, not that the image and the software on the image are working correctly.

In this setup we would have our movies stored in Azure Media Services, and in that way we are able to provide live streaming with very little effort, since Azure does all the work! Logging each view and request could be heavy and should be done asynchronously; by using for example Redis we can even out the peaks and protect our databases from being overloaded. With Redis and a Windows service that frequently reads from the created list/queue, we write our logs at our own pace, and since it’s not critical data that’s not a problem. But we have another problem: what if the host that runs this Virtual Machine crashes and the fault is a corrupt hard drive? Then the local data is lost and we have lost all the logs that were stored on that node. This is not ideal.

On the front end we install and set up the OS environment, install and configure IIS, and install our application. On the backend we install the OS, install Redis and the NoSQL server, create the cluster and configure Azure Media Services. After this we can just publish new machines that will start working right away.

So let’s do this the “cloud native” way! Here we use web roles for the web application, Azure Table Storage as our NoSQL database, and a worker role for logging, with a Service Bus queue to even out the peaks. Even in this setup we would have our movies stored in Azure Media Services, and in the same way we will be able to provide live streaming to clients. This solution could look something like this (the orange boxes are the parts we are responsible for):

So what’s going on here? Well, we start off in the same way as before with the Azure load balancer, but then we skip all that comes with hardware, OS, IIS etc. and use web roles for the web applications. We use Azure Table Storage, and for logging we push log entries onto a Service Bus queue to the worker role, which logs at its own pace just as with the Redis solution mentioned above, but this time with a durable queue that guarantees delivery. So we develop the software, configure the Service Bus queue, Table Storage and Media Services, and then we are up and running!

I haven’t mentioned the scaling part since it isn’t the point of this post, but like everything else it’s easy, and there are actually complete solutions that are ready to use and only need configuration; look at WASABi for more information.

To wrap this up: take a good look at the features that exist in these cloud services, because they are really useful and will save you a lot of time. Then you need to make sure that you develop for the cloud. Since a lot of the applications we have built in the past have run on highly reliable hardware, it can be a challenge to start thinking and developing with the mindset that “it will crash at some point” and to build applications that are crash friendly. Sometimes we know it’s going down (updates, computer restarts and so on), but other times it’s a hardware failure that causes the crash, and then it’s instant. Take smaller steps, use queues to shorten request calls and make sure that data doesn’t vanish.

With HTML5 and Azure Media Services we managed to create video distribution in just a few minutes. This is all the HTML that’s needed:
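A minimal sketch of the markup could look like this (the source URL is an illustrative Azure Media Services locator, not a real endpoint):

<video controls preload="none" poster="thumbnail.jpg">
  <!-- The src points straight at the file published through Azure Media Services -->
  <source src="http://mymediaservice.origin.mediaservices.windows.net/locator/video.mp4"
          type="video/mp4" />
  Your browser does not support the video tag.
</video>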

The video source URL is simply the path to our file as distributed by Azure Media Services, which means that we don’t need to handle anything but providing the HTML and a link. Divide this up and add some JavaScript that calls our WCF service with logging info, and we have a really light and fast video service.

I think this is awesome, and it really speeds up the entire process of creating robust and dynamically scaling applications. Even though it looks like we are bound to communicate over HTTP with our cloud native application, that’s not entirely true, because with worker roles we can communicate over any protocol we want. We just need to specify the endpoints in the Azure management console.

Hope this fires up some ideas, because we can achieve a lot more a lot faster with Azure.

Posted in: •Integration  | Tagged: •Architecture  •Azure  •Cloud  •Development  •HTML5