The Cloud Series – Part 3

This is the third post in this series, and this time let's talk about development and cloud native applications, more precisely web applications.

I had a very interesting chat with a friend/colleague of mine the other day about an application they had. There were some problems with their current setup, and one big problem was scaling, so they decided to rebuild the solution and host it in the cloud. The interesting part here is that the solution in my head differed a lot from what they created, so I would like to take the opportunity to compare them here.

There are two ways of running web applications in Azure. First of all we have the traditional way, where we let Azure host our own Virtual Machine (from now on referred to as a VM), which means that we can create a VM that runs any Windows or Linux version, configure it as we want, install whatever software we want, tweak the IIS/Apache servers and so on. Then we have the “cloud native” way, where we use web roles in Azure and just upload the code and Azure fires it up. Did you know that you can deploy source code directly from Git? Just a simple remote push command and Azure will compile the code for us. How cool is that! When we use web roles we don't need to install or configure anything in the OS or IIS environment, it just works!

So what does this mean?

- A VM gives us the flexibility to tweak and configure it to match our needs 100%. This is the IaaS (Infrastructure as a Service) part of Azure, and it relieves us of all hardware, network and similar concerns, since Azure and Microsoft will handle that for us.

- Web roles, the “cloud native” option, use PaaS (Platform as a Service), which is the next level of service. It relieves us of all of the above plus all OS handling: no installation, no configuration, no updates, nothing! The promise is “take care of your application, we do the rest”, yet we are still able to configure IIS or Windows, install services and so on through Startup Tasks. It lets us focus on what we developers do best, developing applications, and hopefully it gives us more time to improve and maintain the applications instead of handling boring patching, OS updates and other chores!

When choosing what to run and how, there is a lot more to think about than the areas mentioned above, since we sometimes need to install third-party components in the OS, or we need special configurations and setups in our IIS/Apache installation and so on. Every time we need to use or configure things outside of the application itself, a Virtual Machine might be the better choice.

So now that we have a little background, let's set up the context. The application is a plugin that can be used anywhere on a webpage; the idea is to create dynamic video content that gives the client user the possibility to start streaming a small video after a click on the “start” button. For fun, let's say that the application is used on a big Swedish newspaper site, www.aftonbladet.se, and we get it plugged in on one of the top news stories. Experience shows that when this has been done before, the load looks something like the chart below. (The chart shows average values per hour over periods of the day, and the values are completely made up, but I want to show you how big the difference is between the biggest peak and the smallest, and give us something to plan around.)

Now that we have a scenario, let's solve it in two different ways: one using Virtual Machines and one using only Azure cloud parts. Basically the solution would look like this.

To start off we need a web application that handles this:

  • provides the site with the content
  • gives access to the movie
  • presents the movie

In the background we would need something like this:

  • Place to store movies
  • Database to store information/logs/configurations etc.

Let's set this up with Virtual Machines: we use Windows Server and IIS on our web application machines. As a backend we set up Virtual Machines with clustered NoSQL databases. We use the load balancer in Azure to help us balance the traffic between our web servers, and Azure Media Services to provide the movies. It could look something like this (the orange boxes are the parts we are responsible for):

With this setup we could easily add new Virtual Machines wherever we hit a threshold first, either at the database or at the web application. This gives us a good setup and a lot of scaling potential, flexibility and possibilities. But we need to keep the OS, IIS, NoSQL server etc. updated and upgrade to new versions, with all the problems that come along with that. Sure, some of these things happen automatically, but we still have the responsibility for them. We also have the responsibility to make sure that the machines are healthy and run without errors; Azure will only assure us that the Virtual Machine runs, not that the image and the software on it work correctly.

In this setup we would have our movies stored in Azure Media Services, and in that way we will be able to provide live streaming with very little effort, since Azure does all the work! Logging each view and request could be heavy and should be done asynchronously; by using for example Redis we can even out the peaks and protect our databases from being overloaded. With Redis and a Windows service that frequently reads from the created list/queue, we can write our logs at our own pace, and since it's not critical data that's not a problem. But we have another problem: what if the host that runs this Virtual Machine crashes and the fault is a corrupt hard drive? Then the local data is lost and we have lost all the logs that were stored on that node. This is not ideal.

On the front end we install and set up the OS environment, install and configure IIS, and install our application. On the backend we install the OS, install Redis and the NoSQL server, create the cluster and configure Azure Media Services. After this we can just publish new machines that start working right away.

So let's do this the “cloud native” way! Here we will use web roles for the web application, Azure Table Storage as our NoSQL database, and a worker role for logging, with a Service Bus queue to even out the peaks. Even in this setup we would have our movies stored in Azure Media Services, and in the same way we will be able to provide live streaming to clients. This solution could look something like this (the orange boxes are the parts we are responsible for):

So what's going on here? Well, we start off in the same way as before with the Azure load balancer, but then we skip everything that comes with hardware, OS, IIS etc. and use web roles for the web applications. Then we use Azure Table Storage, and for logging we push log entries onto a Service Bus queue to the worker role, which will log at its own pace just as with the Redis setup mentioned above, but this time with a durable queue that guarantees delivery. So we develop the software, configure the Service Bus queue, Table Storage and Media Services, and then we are up and running!
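To make the logging path concrete, here is a minimal C# sketch under the 2013-era Azure SDK (the queue name "viewlog", the table name and the entity type are made up for the example): the web role drops a small log entry on the Service Bus queue, and the worker role drains the queue at its own pace and writes the entries to Table Storage.

```csharp
using System;
using Microsoft.ServiceBus.Messaging;            // Service Bus queue client
using Microsoft.WindowsAzure.Storage;            // Table Storage client
using Microsoft.WindowsAzure.Storage.Table;

// Hypothetical log entry stored in Azure Table Storage.
public class ViewLogEntity : TableEntity
{
    public ViewLogEntity() { }
    public ViewLogEntity(string movieId, string viewId)
    {
        PartitionKey = movieId;                  // all views of one movie end up in the same partition
        RowKey = viewId;
    }
    public DateTime ViewedAtUtc { get; set; }
}

public static class ViewLogging
{
    // Web role side: fire-and-forget, just drop the log entry on the queue.
    public static void EnqueueView(string serviceBusConnectionString, string movieId)
    {
        var client = QueueClient.CreateFromConnectionString(serviceBusConnectionString, "viewlog");
        client.Send(new BrokeredMessage(movieId));
    }

    // Worker role side: drain the queue at our own pace and persist to Table Storage.
    public static void DrainQueue(string serviceBusConnectionString, string storageConnectionString)
    {
        var queue = QueueClient.CreateFromConnectionString(serviceBusConnectionString, "viewlog");
        var table = CloudStorageAccount.Parse(storageConnectionString)
                                       .CreateCloudTableClient()
                                       .GetTableReference("viewlog");
        table.CreateIfNotExists();

        while (true)
        {
            BrokeredMessage message = queue.Receive(TimeSpan.FromSeconds(30));
            if (message == null) continue;       // nothing on the queue right now

            var entity = new ViewLogEntity(message.GetBody<string>(), Guid.NewGuid().ToString())
            {
                ViewedAtUtc = DateTime.UtcNow
            };
            table.Execute(TableOperation.Insert(entity));
            message.Complete();                  // the message stays on the queue until we complete it
        }
    }
}
```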

I haven't mentioned the scaling part since it isn't the point of this entry, but like everything else it's easy, and there are actually complete solutions ready to use that only need configuration; have a look at WASABi (the Autoscaling Application Block) for more information.

To wrap this up: take a good look at the features that exist in these cloud platforms, because they are really useful and will save you a lot of time. Then you need to make sure that you develop for the cloud. Since a lot of the applications we have built in the past have run on highly reliable hardware, it can be a challenge to start thinking “it will crash at some point” and to develop applications that are crash friendly. Sometimes we know it's going down (updates, machine restarts and so on), but other times it's a hardware failure that causes the crash, and then it happens in an instant. Take smaller steps, use queues to shorten request calls, and make sure that data doesn't vanish.

With HTML5 and Azure Media Services we managed to create video distribution in just a few minutes. All the HTML that's needed is a <video> element whose source points to the streaming URL.

That source path is the actual path to our file, distributed by Azure Media Services, which means that we don't need to handle anything other than providing the HTML and a link. Divide this up and add some JavaScript that calls our WCF service with logging info, and we have a really light and fast video service.
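The WCF logging service called from that JavaScript could be as small as a single operation. A minimal sketch is shown below; the contract name, URI template and the reuse of the ViewLogging helper from the sketch earlier in this post are all made-up examples, and the connection string is a placeholder.

```csharp
using System.ServiceModel;
using System.ServiceModel.Web;

// Hypothetical contract for the logging service that the page's JavaScript calls with a POST.
[ServiceContract]
public interface IViewLogService
{
    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "log/{movieId}")]
    void LogView(string movieId);
}

public class ViewLogService : IViewLogService
{
    public void LogView(string movieId)
    {
        // Keep the request short: just push the entry onto the Service Bus queue
        // and let the worker role do the actual writing (see the earlier sketch).
        ViewLogging.EnqueueView("Endpoint=sb://...", movieId);
    }
}
```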

I think this is awesome, and it really speeds up the entire process of creating robust and dynamically scaling applications. Even though it looks like we are bound to communicate over HTTP with our cloud native application, that's not entirely true, because with worker roles we can communicate over any protocol we want. We just need to specify the endpoints in the Azure management portal.

Hope this fires up some ideas, because we can achieve a lot more a lot faster with Azure.

Posted in: •Integration  | Tagged: •Architecture  •Azure  •Cloud  •Development  •HTML5 


BizTalk global pipeline tracking disabled unexpectedly

Many of our customers using Microsoft BizTalk Server also use Integration Manager to log messages. Integration Manager collects message bodies and message contexts from the BizTalk tracking database. At some point we noticed that messages were missing both in the log tool and in the tracking database. We could not understand why, but after investigation we noticed that global tracking was disabled on the Microsoft default pipelines, resulting in messages not being tracked.

Fine, we knew why the messages were not tracked, but why did the global tracking become disabled? After further investigation we found that the problem occurred after modifying existing assemblies.

Not all assemblies though, only the ones containing maps used by a port with a Microsoft default pipeline.

These are the pipelines where the tracking was disabled:

  • Microsoft.BizTalk.DefaultPipelines.PassThruReceive
  • Microsoft.BizTalk.DefaultPipelines.PassThruTransmit
  • Microsoft.BizTalk.DefaultPipelines.XMLReceive
  • Microsoft.BizTalk.DefaultPipelines.XMLTransmit

We wrote a PowerShell script to enable the tracking again in an easy way. This is not waterproof though; if host instances are still running, messages could go untracked between modifying the assembly and running the script. We reported this to Microsoft, who were able to reproduce the fault, and hopefully we will soon have a fix in the next BizTalk CU.
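Our actual fix was a PowerShell script, but the idea can be sketched in C# against the ExplorerOM API roughly as below. Treat the object model details as assumptions to verify against your BizTalk version, in particular the Pipelines collection on Application and the exact tracking flags.

```csharp
using System;
using Microsoft.BizTalk.ExplorerOM;   // referenced from the BizTalk installation folder

class EnableDefaultPipelineTracking
{
    static void Main()
    {
        var catalog = new BtsCatalogExplorer
        {
            ConnectionString = "Server=.;Database=BizTalkMgmtDb;Integrated Security=SSPI;"
        };

        foreach (Application application in catalog.Applications)
        {
            foreach (Pipeline pipeline in application.Pipelines)
            {
                // Only touch the Microsoft default pipelines that lost their tracking settings.
                if (!pipeline.FullName.StartsWith("Microsoft.BizTalk.DefaultPipelines."))
                    continue;

                // Assumed flags: turn message body tracking before and after processing back on.
                pipeline.Tracking = PipelineTrackingTypes.InboundMessageBody |
                                    PipelineTrackingTypes.OutboundMessageBody;
                Console.WriteLine("Re-enabled tracking on " + pipeline.FullName);
            }
        }

        catalog.SaveChanges();
    }
}
```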

Posted in: •Integration  | Tagged: •BizTalk  •Pipeline Components  •Tracking 


The Cloud series – Part two

Last time I talked a bit about the cloud in general, and now it's time to talk about one of the features and possibilities it brings: the Azure Service Bus. I'm going to describe it while I work through a common problem in the integration world.

There are problems in our daily business that I would like to address, and show how they may be solved with the help of Azure. I for one hate firewalls; not that they are bad, but it's so common that they are in the way, and the person who can fix it for you is always busy and “will do it later”. How many times have you been stuck in testing, deploying, upgrading or whatever just because a firewall was blocking your way? I find this particularly annoying since it usually means rescheduling the test, deployment or whatever I was doing and delaying it for at least a few days. Some projects get delayed weeks and months (depending on system releases, holidays etc.). So what can we do about this? Well, the obvious part is to be prepared, talk to the people who handle the firewall in time, and verify that they actually did open the port. This isn't always possible, and often the rights are given to a service user, or you are dependent on someone on the other side to verify that it's open, and so on. So it becomes a part of the test session, which can be very annoying.

So is there a way the cloud could help us?

Well, not directly, and it doesn't solve all issues with file shares etc., but it has given us something called the Azure Service Bus. This is a very advanced bus with a lot of techniques for dealing with efficient transport of messages and much more, but for now let's see it as a queue hosted in the cloud. This queue comes with durable storage, which basically means that it guarantees that the message is stored safely and won't be lost until you collect it. One of my favorite things with the Service Bus is that you **communicate over HTTP and HTTPS!** So no problems with firewalls, since they almost always allow HTTP and HTTPS traffic, and it's easy to test: can you browse the web from the computer/server? Awesome! Then the Service Bus will work too!
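A minimal sketch of what that looks like in code with the 2013-era Service Bus client library (the queue name and connection string are placeholders): force the client to tunnel over HTTP(S) instead of the native TCP ports, then send and receive as usual.

```csharp
using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class FirewallFriendlyQueue
{
    static void Main()
    {
        // Tell the client library to use HTTP/HTTPS (ports 80/443)
        // instead of the native TCP ports that firewalls tend to block.
        ServiceBusEnvironment.SystemConnectivity.Mode = ConnectivityMode.Http;

        var connectionString = "Endpoint=sb://mynamespace.servicebus.windows.net/;..."; // placeholder
        var client = QueueClient.CreateFromConnectionString(connectionString, "filetransfer");

        // Sender side: the queue stores the message durably until someone collects it.
        client.Send(new BrokeredMessage("<Invoice>...</Invoice>"));

        // Receiver side (typically another process on the other side of the firewall).
        BrokeredMessage received = client.Receive(TimeSpan.FromSeconds(30));
        if (received != null)
        {
            Console.WriteLine(received.GetBody<string>());
            received.Complete();   // remove it from the queue once it has been processed
        }
    }
}
```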

A simple but yet powerful setup would be something like in the picture below.

Here we can send and receive files without any firewall problems. And what's even better is that BizTalk 2013 comes with an adapter for the Service Bus, where you only provide two things: the queue name and credentials. After that we can start receiving or sending messages! So you can easily replace old FTP or VPN file share solutions in BizTalk with this! And it's very easy to write a simple service to install on a server to start sending messages over the Service Bus. Wouldn't it be great to skip those VAN providers? I think so, and we can do it!

Is that all?

No, not at all. The Azure Service Bus package also gives us topics, subscriptions and Notification Hubs. I'm going to skip Notification Hubs, but you can read more about them here.

Topics and subscriptions

You can think of them as crossings (topics) and the exits from those crossings (subscriptions). They behave quite differently from a normal crossing, since a message sent to a topic can take either one exit or several, meaning that the message is copied to each exit with a matching subscription, as demonstrated in the picture below.

And what do we gain from this? Well, first of all we can connect a queue directly to the topic, and a subscription can be connected to a queue, which means that we can easily build advanced and powerful routing. This pattern is used in many cases, for example SOAP web services, where an XML message is sent to an HTTP address and routed to a function in our service based on the method tag in the message. In the same way, topics can be used to route messages to the correct queue so they are processed by the correct service or sent to a partner.
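As a rough C# sketch (the topic, subscription and property names are made up), the “crossing” and its “exits” could be set up and used like this: each subscription gets a filter, and a message whose properties match several filters is copied to each of them.

```csharp
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class TopicRoutingSketch
{
    static void Main()
    {
        var connectionString = "Endpoint=sb://mynamespace.servicebus.windows.net/;..."; // placeholder

        // Create the crossing (topic) and its exits (subscriptions) if they don't exist.
        var ns = NamespaceManager.CreateFromConnectionString(connectionString);
        if (!ns.TopicExists("orders"))
            ns.CreateTopic("orders");
        if (!ns.SubscriptionExists("orders", "sweden"))
            ns.CreateSubscription("orders", "sweden", new SqlFilter("Country = 'SE'"));
        if (!ns.SubscriptionExists("orders", "archive"))
            ns.CreateSubscription("orders", "archive", new TrueFilter()); // gets a copy of every message

        // Send a message with routing properties; it is copied to every matching subscription.
        var topicClient = TopicClient.CreateFromConnectionString(connectionString, "orders");
        var message = new BrokeredMessage("<Order>...</Order>");
        message.Properties["Country"] = "SE";
        topicClient.Send(message);

        // A receiver listening on one of the exits.
        var subscriptionClient =
            SubscriptionClient.CreateFromConnectionString(connectionString, "orders", "sweden");
        var received = subscriptionClient.Receive();
        if (received != null)
            received.Complete();
    }
}
```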

As the example shows, we can do some awesome routing and send traffic directly in the cloud to the correct destination, which means that you don't have to take it into your network through, for example, BizTalk and then let BizTalk route it for you. It's all done in the cloud, on the bus! How great is that!

As I mentioned earlier, these two scenarios don't take care of every firewall problem, but they can certainly help us, and when we plan and create new flows they are seriously something to consider! And from what I know, Microsoft is planning to enhance this area even more with BizTalk adapters, mapping functionality etc., to make it the awesome integration platform it has the capabilities to be!

As the opportunist and visionary that I am, I believe that we are going to see the market and the possibilities for integration grow, and that new demands for integration will come. The Service Bus in Azure will most certainly play a big part in this, and I just love the idea of working with this platform in the future!

I just love it!

Posted in: •Integration  | Tagged: •Cloud  •Cloud Integration  •Integration  •Service bus  •Windows Azure 


Your year 2013 developer resume!

Historically the written resume has been very important when recruiting new co-workers. The resume has been the main source for understanding what a person knows and what their experiences are. But as it's just a piece of paper and we're looking for a good craftsman, there's been a mismatch. The “do they really know the techniques they've listed on the resume” feeling has always been nagging in the back of our minds during hiring processes.

One way that people have dealt with this is to put candidates through different code exercises, such as the FizzBuzz test.

“Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”.”
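For reference, a plain C# take on it could be as simple as this:

```csharp
using System;

class FizzBuzz
{
    static void Main()
    {
        for (int i = 1; i <= 100; i++)
        {
            if (i % 15 == 0) Console.WriteLine("FizzBuzz");      // multiple of both three and five
            else if (i % 3 == 0) Console.WriteLine("Fizz");
            else if (i % 5 == 0) Console.WriteLine("Buzz");
            else Console.WriteLine(i);
        }
    }
}
```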

Tim Rayburn even wrote a BizTalk version of the FizzBuzz test http://timrayburn.net/blog/fizzbuzz-for-biztalk/ – can you solve it in 5 minutes? ;)

Even though the FizzBuzz test definitely has its place in an interview situation, it doesn't give a deeper understanding of the different skills a person might have, such as code structure, architecture, cloud solutions, API handling, source control and so on. To really understand how good or bad a person is we need to see more code, more actual work! That's where platforms such as CodePlex, GitHub, Stack Overflow etc. play a role. If a person I'm interviewing can show a project on GitHub that demonstrates decent code structure, good code quality and an understanding of version control, it beats almost any resume when recruiting for a technical position!

But hey, what about culture fit?

Culture fit is of course extremely important! It's actually so important that if I have to choose between a technically skilled person with the wrong culture (for example arrogance, anti-social tendencies, not being a team player etc.) and a not so technical person who has the will to learn and an otherwise excellent culture fit, I'd probably suggest we go with the latter. But of course we're always looking for those people that have a combination of both!

Call to action

So what should you as a developer type do to prepare for the future?

1. Start a blog. Every developer should have a blog. A blog is a good idea for a number of reasons. It makes you a better writer. It leaves a trail of breadcrumbs showing what you've worked with and how you solve problems. It hopefully shows that you're skilled within certain areas, and it actually works as your very own extended brain, as you can always go back and see how you solved things in the past.

2. Start an open source project. As a developer you'll almost always work on some code that can be shared. And sure, putting code out in public can be both an internal political struggle and require some extra work when it comes to cleaning things up so that anything that shouldn't be public isn't.

3. Show yourself. Try to do some public speaking, first internally at your company and then at your local user group and so on. It will not only make you better at public speaking, it will also show that you're skilled and, not least, that you have a passion for what you do!

Posted in: •Integration  •Recruitment  | Tagged: •culture fit  •FizzBuzz  •recruiting  •resume 


The Cloud series

I'm going to follow up on our Enterprise Architect Ulf Domanders' discussion (Sharing is Caring) that led on to the cloud, and since this is a really big topic there will be a series of posts about it. I'll start off with some background before moving on to more specific features.

So, all of you have probably heard about the cloud and what a powerful platform it is. But what is it, and how come it's going to revolutionize things? Change the way we develop systems? Change how we interact with them and what we expect from them? I'll do some short talking about it here.

I'm going to focus my cloud talk on Windows Azure. Azure has a lot of features and functions, like databases, virtual machines, web site hosting, cloud service hosting, buses and more. Azure is a Platform as a Service (PaaS) and, just like the more commonly seen Software as a Service (SaaS), the whole idea is to use what you want to use and to pay for what you use, no more, no less. It also has to be said that we don't need to invest anything to get started; right now (when this is written) you even get the first 90 days for free.

So if I sign up for this, what do I get? Access to an enormous computer park with almost unlimited resources! Think about that for a second: unlimited resources, that's sick! What “normal” IT department can give us that at a reasonable price? I would say none. You even get the wonderful possibility to just demand more power at any time (a new computer, more web site instances to handle that new peak on your website, database nodes, etc.); just ask for it and you will get it! No orders to create, no shipping times, no installing, no configuration to be done. It's all done and ready to go within minutes! And when that peak is over? Just release it and stop paying for it. We can now have a slim (low cost) setup that can easily and quickly be scaled to take a hundred or tens of thousands of times the normal load or more, for just a short while or for as long as we like, and we only pay for it while we use it.

But is the cloud for everything? No, it's not, and we even need to consider whether the system we plan to migrate to the cloud will benefit from the cloud features at all. Since the cloud is created for extreme situations that normal systems or solutions didn't have to consider, most developers need to learn how to create or update systems so that they can benefit and scale well from these features. I would say that there are still a lot of developers who don't use multi-threading, asynchronous calls or any other form of parallel or task-divided programming. That's probably because it can be quite complex and hard to understand, even though there are very good frameworks out there that simplify these tasks. In the same way, the cloud simplifies the extreme tasks of controlling hundreds of clustered computers, buses, applications going up and down, virtual machines, scaling up and down etc. It's still in the hands of us developers to build systems in a way that can and will benefit from all these fantastic features.

Just take the example that Ulf made: 20 600 000 hits found in 0.26 s!!! It is unimaginable for me, as a developer at a small company, to get the resources needed to perform that massive search across billions and billions of records. Or rather, it was, since this is the result of an extremely optimized and very well-made implementation of the MapReduce pattern (MapReduce actually grew out of the paper describing Google's search engine infrastructure) running on Google's own cloud platform. The cool thing is that we can actually access this kind of power today! We just need to start thinking CLOUD and start developing cloud native products! Just think about what we can achieve and how much easier hosting, scaling, testing and developing with high customer interaction will be.

Mobility will still be a keyword in the future, which will continue to push systems onto the web, and since most web systems come with web services, integration will become more intuitive and even more important. We need to do more with less! We need to be able to use our favorite web shop, ERP system, WMS system, invoice sender and so on, and to connect them in a few easy steps. When that is done the systems will talk and, with the right setup, do most of the work for us!

As the nerd I am, I love the cloud and all the fantastic features it comes with, the possibilities it gives us and the new and unknown paths we will cross. Since we are at the beginning of using and understanding the possibilities of such great power, anything can happen! It will most certainly revolutionize software development and all that goes with it, and the demand for and possibilities of system integration will grow and grow!

I just love it!

 

Posted in: •Integration  | Tagged: •Cloud  •Cloud Integration  •Future  •Innovation  •Integration  •Windows Azure 


Sharing is Caring

What is it that makes the world we live and work in different from the world as we knew it 10 years ago? And what will it look like if we look five years ahead in time?

Last week I discovered Chris Hadfield on Twitter (@Cmdr_Hadfield). If you haven't heard of him, I can tell you that he is a Canadian astronaut currently living in space aboard the ISS as a Flight Engineer on the current expedition.

Chris takes photos with his system camera from where he is in outer space and shares the pictures on Twitter. From his micro blog he also comments on and informs us non-astronauts about life as it is for a person living in space. You will find spectacular pictures from his camera on Twitter, and you can of course also ask him questions and follow his discussion threads with other earthlings. As a nerd I find this, of course, amazing, and now I just have to know what it is like in space and what the earth looks like from up there. It also made me think of things that happen closer to my daily life as an Enterprise Architect, particularly my involvement in systems integration at mid-size and larger companies, where I often help companies stay innovative by assisting them in the preparation work to provide the best experience for their customers and to enable informed decisions with the help of IT. In my simple earthling thoughts I looked both back in time and also made an attempt to look forward into the future, and this is what I came up with. So how come I started to think of my job instead of continuing with high-flying plans?

Well: 10 years ago I didn't know that I would write an activity log on the web (Facebook), I didn't know that my phone would have everything I need, from a flashlight to the internet and even a telephone, I didn't know that I would use Instagram to share my images; hell, I didn't even know that I had a need to share images… I certainly didn't know that I would be “familiar” with an astronaut and that I would be able to chat with him from my mobile. The only contact you had with astronauts at that time was through the Discovery Channel, or on the news when well-known reporters had a few minutes to interview the astronauts with well-prepared questions. The media companies also had it in their power to adjust the questions and answers to suit their own purposes, if they had a political or religious agenda. Excuse me for being a bit paranoid…

There are so many things I didn't know about at that time, but in one sense I made a wise choice, even though I wasn't fully aware of it: the decisions taken at the beginning of the 2000s to focus not only on technology, SOA and information management, but also on the governance of services, would pay off over and over again as new technical enablers arise. Having a stable ground to build on, to meet the changes in the way we communicate and do business today, makes it possible to keep up and change communication channels as the market changes. With well-defined SOA interfaces, and with processes to govern and develop the services so that they really mirror your organization's daily business, you will stand prepared to use any new opportunities that your customers will take for granted tomorrow. You can rely on this because information is relatively stable and not sensitive to technical changes.

Looking ahead 5-10 years from now, we will most probably have developed in a totally different direction from what we believe is possible today, both in the way we do business and share our knowledge with our customers and friends, and in what tools are in our hands to support us in decision making and forecasting. Evolution has brought us to where we are today over a few billion years, and you can see that the slope has increased over the last hundred years, but now things are about to change radically. The number one reason why we will see a dramatic increase in the speed of behavioral change, and in the way our social and business habits affect our lives, is the introduction of cloud computing.

With the introduction of the cloud, innovation will burst and we will most probably see technical changes happen at a pace never seen before. This is because the cloud gives every developer and innovator (you and I, and even our neighbor) a low-cost supercomputer to carry out their vision, at a cost that is affordable or at least worth taking an economic risk for. The cloud will for sure change the way we act and on what basis we take decisions in the future.

For companies that actively work on an IT management and integration strategy this is good news, since they can benefit from the new opportunities in a fast and cost-effective way. Companies that have taken shortcuts and only relied on technology will have it slightly harder, and cannot take advantage of this in the same way as competitors that had a plan for their business IT.

By the way: just when I finished this blog post I Googled the title, and the Google super cloud computer told me that my title was one of 20 600 000 pages containing the phrase “Sharing is caring”. Google also had the nerve to calculate the number of results in 0.26 seconds. Worse than Google is that my colleagues laughed and told me that I was probably the last person in the universe not to know that “Sharing is caring” is a commonly used expression (I will of course ask Chris Hadfield about this).

Never mind! Even though I love to prepare for new enablers in IT, I am still a stone-age man in his best years, and I can make decisions that are not recommended by a machine just because I feel like it; keeping the title of this blog post is my way of proving it. Hopefully that shows that there is room for us people in the future, despite all the IT innovators sitting in front of their computers developing the most stunning artificial intelligences, deployed on a supercomputer near you, capable of indexing and analyzing billions and billions of pages and throwing a result you did not want to know in your face, all in a fraction of a second.

Or maybe I am wrong; maybe there is no room for a grumpy man in his best years in the future.

Who knows? Only time will tell. But at least you can prepare for change, because it will come. And you can prepare by not only looking at technical enablers but also thinking about:

  • What do I need to do to be able to take advantage of these stunning innovations?
  • What makes these applications so fantastic?
  • Focus on the core business transactions that bring value to your organization, because these interfaces will most probably make a difference within a short while
  • Even information that you do not want to share today because it is secret will most probably need to be shared tomorrow to give customers what they expect

Make a plan for how to meet the future, and make sure that integration is a part of that plan.

Posted in: •Integration  | Tagged: •Chris Hadfield  •Integration  •space 


iBiz Solutions arranges After Work evening

On March 7, iBiz Solutions and CompareKvinna arrange an After Work evening where integration is on the schedule. Two of iBiz Solutions' stars, Marie Högkvist and Therese Axelsson, will talk about life as a systems integrator and the challenges and opportunities of integration.

“It feels really exciting and great to get the confidence to present iBiz Solutions and integration during this evening, and to simply describe the efficiency and effect integration has” says Therese Axelsson.

“I often meet people who do not know what system integration means. It's going to be great to talk about the possibilities around integration and why it is such an interesting topic” continues Marie Högkvist.

In addition to an exciting presentation by these two women, food and drinks will be served.

If you wish to participate, register before March 1, at this link.

Where
Karolinen
Våxnäsgatan 10
653 40 Karlstad

Time
17.15-17.45: Welcome and mingle
17.45-19.15:  Presentation
19.15: Food, drinks and mingle

Posted in: •About iBiz Solutions  •Marketing and Branding  | Tagged: •After Work  •CompareKvinna  •iBiz Solutions 


SalesInvoice routing in BizTalk

I've had the fortune to create two different invoice routing applications in BizTalk. Lessons learned: we made some improvements to the solution when it was time for #2 :)

Here I'm going to try to explain all the pitfalls and “good-to-knows”, which will hopefully save you a couple of hours of head scratching.

Some clarification before we start: I presume that you have already handled the invoice mapping from your sending invoice system, and that the invoice layout is already done, so that you are left with one or two files that you need to route to the appropriate customer. Often when talking about invoicing there are two types of files involved: one XML file with the raw data, and one PDF or image file that visualizes the data from the XML invoice.

The XML invoice is sent to brokers that can deliver the invoice electronically to customers, and the PDF/image file is used for email, or for printing and sending the more traditional way: by mail. In the example below I use an XML message of the E2B type.

Step 1: Pair the XML and PDF/image file with each other when BizTalk receives them. You can't know exactly when you will receive them both, or in which order. For this we have set up an orchestration with a scope containing a parallel actions shape that waits for both files to be received from two different receive ports: one for the XML and one for the PDF.

We have a timeout set on the scope that fires after a given time if we haven't received both files. In this way we can take proactive action before the customer calls and wonders where the invoice is; otherwise it could be stuck in BizTalk forever if you don't tell BizTalk what to do.

For this receive scenario we use a correlation set on the filename. Let's presume the filename is set in the following format: [SendingOrgNumber]_[ReceiverOrgNumber]_[InvoiceNumber].[FileType]. We then need to match on everything except [FileType]. For this we create a custom pipeline component, used in the receive pipelines, that strips the file extension and promotes the value as ReceivedFileName in the default context; a sketch of the component follows below.
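The core of such a component could look like the class below. This is only a sketch: the class name and GUID are placeholders, the pipeline component plumbing is kept to a minimum, and the property bag methods are left empty since the component has no design-time properties.

```csharp
using System;
using System.Collections;
using System.Runtime.InteropServices;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

[ComponentCategory(CategoryTypes.CATID_PipelineComponent)]
[ComponentCategory(CategoryTypes.CATID_Decoder)]
[Guid("11111111-2222-3333-4444-555555555555")]   // placeholder GUID
public class FileNamePromoter : IBaseComponent, IComponent, IComponentUI, IPersistPropertyBag
{
    private const string FilePropertiesNamespace =
        "http://schemas.microsoft.com/BizTalk/2003/file-properties";

    public string Name { get { return "FileNamePromoter"; } }
    public string Version { get { return "1.0"; } }
    public string Description { get { return "Strips the extension and promotes ReceivedFileName."; } }

    public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        // Read the original file name written by the FILE adapter, e.g. "123_456_789.xml".
        var receivedFileName = (string)pInMsg.Context.Read("ReceivedFileName", FilePropertiesNamespace);

        // Strip the path and extension so both the xml and the pdf end up as "123_456_789".
        var correlationName = System.IO.Path.GetFileNameWithoutExtension(receivedFileName);

        // Promote (not just write) the value so it can be used in the correlation set.
        pInMsg.Context.Promote("ReceivedFileName", FilePropertiesNamespace, correlationName);
        return pInMsg;
    }

    // No design-time properties, so the property bag methods do nothing.
    public void GetClassID(out Guid classID) { classID = new Guid("11111111-2222-3333-4444-555555555555"); }
    public void InitNew() { }
    public void Load(IPropertyBag propertyBag, int errorLog) { }
    public void Save(IPropertyBag propertyBag, bool clearDirty, bool saveAllProperties) { }

    public IntPtr Icon { get { return IntPtr.Zero; } }
    public IEnumerator Validate(object projectSystem) { return null; }
}
```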

In this way we get two files that are named exactly the same, which lets BizTalk correlate the two files into the same orchestration instance. Fortunately we receive the files on different ports, so even though we are inside the orchestration with two files named exactly the same, we know which is the XML and which is the PDF/image file.

 

Step 2: Set up routing information. Paper, email or electronic invoice? Here we have more than one way to get information on how to route the invoice. In the first solution I built, we had master data about the customer in a database and fetched routing information based on the organization number taken from the message itself. In the second one, the information was already in the XML file when we received it. Here it is up to you how to get this information.

The important thing is to promote this value so that we can filter on it on the send ports in the last step, e.g. 1 = paper invoice, 2 = email, 3 = eInvoice.

 

Step 3: Decide whether the message should be routed via email or not. If the routing type is email, there is a dynamic send port set up in the orchestration for this scenario.

The only thing we need to do is to construct the message, promote the receive type and reset the filename with the file extension, as shown in the picture above. The last step before sending the invoice through the dynamic email port is to set up the SMTP settings. (Here we read settings from an appSettings file, which makes it easier to change values without having to redeploy the solution.) It is important to set the outbound SMTP server and credentials inside the construct message shape, and also the email address, which is set in an expression shape. An example could look like the snippet below.
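Roughly, the shapes could contain the lines below (BizTalk expression shape syntax). The message, port and variable names and the appSettings keys are made up for the example, and the context properties come from the standard SMTP property schema; verify the SMTPAuthenticate value against your adapter configuration.

```
// Inside the construct message shape (message assignment): SMTP settings on the outgoing message.
msgInvoiceEmail(SMTP.SMTPHost) = System.Configuration.ConfigurationManager.AppSettings["SmtpHost"];
msgInvoiceEmail(SMTP.From) = System.Configuration.ConfigurationManager.AppSettings["SmtpFrom"];
msgInvoiceEmail(SMTP.SMTPAuthenticate) = 2;   // assumed: authenticate with the user name/password below
msgInvoiceEmail(SMTP.Username) = System.Configuration.ConfigurationManager.AppSettings["SmtpUser"];
msgInvoiceEmail(SMTP.Password) = System.Configuration.ConfigurationManager.AppSettings["SmtpPassword"];

// In the expression shape: point the dynamic send port at the customer's email address.
// customerEmailAddress is an orchestration variable holding the address from the routing data.
Port_SendInvoiceEmail(Microsoft.XLANGs.BaseTypes.Address) = "mailto:" + customerEmailAddress;
Port_SendInvoiceEmail(Microsoft.XLANGs.BaseTypes.TransportType) = "SMTP";
```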

 

Step 4: Send the invoice. One of the last steps is of course to send the invoice. If it was an invoice to be sent by email, this has already been done, as described in the previous step.

But if it wasn't, then we really need the routing information in the context of the message. This is where we send the file to the appropriate send port and out of BizTalk's control. We could set up send ports in the orchestration for each scenario, i.e. one send port for the eInvoice XML and one for the eInvoice PDF, then one more for the print house PDF, and maybe we want archiving… That way you would end up with an orchestration full of send ports, and every time you need to add a scenario you would have to redeploy the solution. Instead we set up one send port for the XML and one for the PDF file, both sent directly to the MessageBox.

From there we can set up send ports in the BizTalk Administration Console and filter on receive type and message type. Then we have a clean orchestration and all other configuration in the admin console, easily configurable and easy to extend with one more subscription.

One very important thing to keep in mind when sending the XML and PDF to the MessageBox: if you want to be able to filter on custom context properties on the PDF, you have to set up a correlation set for the PDF file. To make the custom context values show up as promoted on the message, you first have to promote BTS.MessageType on the PDF send. Create a new correlation set and, in the Correlation Properties window, scroll down to BTS and choose MessageType; also choose your custom context values from your own custom property schema. Set the initializing correlation set on the XML send to the correlation set you just created, and then set the following correlation set to the same one on the PDF send.

 

Step 5: Filtering on send ports. The last step is to set up the actual filters and send the invoice. Here it is up to you how to send the files. In my scenario the XML file has an XML namespace, while the PDF file, which is treated as a System.Xml.XmlDocument inside the orchestration, has no namespace.

So if I want to send only the XML file to the eInvoice broker, I can set up a filter on the send port that matches my promoted receive type property equal to 3 (which is eInvoice in my scenario) together with the E2B message type.

If you only want to send the PDF, all you have to do is set up a filter saying that BTS.MessageType is not equal to http://www.e2b.no/XMLSchema/Internal#Interchange. In this way you can set up any number of send ports subscribing to the same message without having to change or redeploy the BizTalk application.

Hope you found this useful and good luck setting up your own SalesInvoice routing application in the future! :)

Posted in: •Integration  | Tagged: •Architecture  •Integration  •Routing  •SalesInvoice 


iBiz Solutions invite to NoBUG event in Oslo

On January 15, 2013, iBiz Solutions will host the NoBUG event which takes place in Oslo. Two of iBiz Solutions' experienced integration architects, Richard Hallgren and Michael Olsson, will hold two presentations. Michael Olsson will present on BizTalk IaaS and PaaS: hybrid integration solutions using BizTalk locally and/or in the cloud. Richard Hallgren will then talk about efficient system documentation in an integration project.

When: January 15 2013 at 18:00

Where: Microsoft Norway, Lysaker Torg 45, 1366 Lysaker, Norway

More information and registration for the event can be found at: http://lnkd.in/4hzPcC

http://www.linkedin.com/groups/NoBUG-Norwegian-BizTalk-User-Group-3788771

Posted in: •About iBiz Solutions  •Integration  | Tagged: •NoBUG 


XSLT vs. Graphic map tool - What is the best transformation technology?

My name is Johan and I have been working with system integration using BizTalk for 1.5 years. I often come across situations where the customer wants changes to existing integration flows. The required changes often mean that logic needs to be added or modified. Many complex integration flows have been created using the graphic map tool and consist of thousands of links and several hundred functoids. This can look something like the picture below.

This is a small part from such a mapping:

When I am asked to make a change to an integration flow like the one shown in the picture above, I feel confused for a while before I understand how I can make the required change. It can be a time-consuming task to update a mapping like this, where multiple developers have made modifications over a long period of time and the integration flow has started to look like a bowl of spaghetti in the GUI, even though the required changes are small. I believe it is common behavior among developers to always choose the same technique (XSLT or the graphic mapping tool) for all their integrations, because they are certain it is the best way of doing it, or maybe just because they are lazy. To make complex integration flows easier to maintain, I recommend XSLT as the way of creating a BizTalk map. The graphic map tool is more suitable than XSLT for small integrations that don't have complex logic.

To summarize this blog post in one sentence: Choose the best fitting mapping technique for every situation in order to make future maintenance of your integration easier.

Posted in: •Integration  | Tagged: •BizTalk  •Graphic  •Maintenance  •Map  •XSLT