iBiz Solutions rated as a Super Company again in 2013

Bisnode (formerly PAR), in association with the business magazine Veckans Affarer, has selected the Super Companies for 2013. This year only 366 of all Swedish incorporated companies with a turnover of at least 10 million SEK (around 50,000 companies in Sweden) meet the razor-sharp requirements for becoming a Super Company.

Super Companies - Bisnode in association with Veckans Affarer

The selection of Super Companies is a cooperation between Bisnode and Veckans Affarer, where Bisnode's analysts have developed a model to identify the Swedish Super Companies. The selection and ranking is based on the companies' financial performance over the past four years. For each individual company, the model takes into account:

  • Growth
  • Earnings
  • Return
  • Efficiency
  • Capital Structure
  • Financing

 

"I am very happy and very proud that we received the Super Company award in 2013, just as we did in 2012", says Allan Hallen, CEO at iBiz Solutions.

This sends a clear and positive signal to our customers, our partners and the market in general that we are doing a lot of things right, and have done so for a long time. It is of course also an internal acknowledgement, and crystal-clear evidence that our focus, our charted path and our way of working are successful. iBiz Solutions has a clear, long-term goal of becoming No. 1 in the Nordic countries in the field of integration, and we are already one of the very best.

We cooperate closely with many large, well-known and exciting clients in the Nordic countries, which clearly shows that we are well qualified and at the forefront of the field. Last year several exciting new customers came on board, and existing customers gave us their continued confidence. Our heavy investment in partnerships with both Microsoft and Tibco remains important, and we are already one of only a handful of players to have received the highest partner status, the Gold level, with these strong and well-known companies. We will strengthen and intensify our cooperation with Microsoft further in 2014, with new and even more exciting engagements and deals.

There are of course many vital success factors. One of them is our highly competent, committed and talented staff, who help our customers achieve what many aspire to: maintainable integrations. In our experience, working in a company with ambitions and challenging, clear growth goals is much more fun than the alternatives, and it shows in the exciting applications we receive from top-of-the-line consultants with ambitions of their own. Today iBiz Solutions is a strong option when it comes to attracting the right consultants and applications in the field of integration, and we are always looking for more colleagues for our various offices. Both we and our customers draw great value from the concepts and tools we have developed and refined over time - we make a difference in our commitments and deliveries.

"I know from previous years that Veckans Affarer's evaluation criteria are very tough, and that the eye of the needle is extremely narrow, so it is especially exciting that iBiz Solutions is the only IT company in Karlstad that meets the requirements for Super Company 2013. That iBiz Solutions is also a Gasell company in 2013 is nice, but the Gasell requirements are not even close to those for Super Company.

Our ambition is to remain a Super Company in the years ahead, for the benefit and value of employees, customers and partners, now and in the future", concludes Allan Hallen, CEO at iBiz Solutions.

 

Read more about the Super Companies at: http://www.va.se/temasajter-event/foretagande/superforetagen/hela-listan-superforetagen-2013-564105

Posted in: •About iBiz Solutions  | Tagged: •Bisnode  •Gasell  •Super Company  •Superforetag 2013  •Veckans Affarer 


Problem building BizTalk solutions with standalone MSBUILD

We are in the process of setting up a build server for BizTalk 2010 for one of our customers. Naturally we wanted an installation that was as clean as possible, and we were very pleased to find out that we would only need the "Project Build Component" from the BizTalk installation, and not a full-blown BizTalk or even Visual Studio, to build the BizTalk projects. The description of the project build component clearly states that "Project Build Component enables building BizTalk solutions without Visual Studio", and MSDN confirms that no other component should be needed: http://msdn.microsoft.com/en-us/library/dd334499.aspx.

We installed the .NET SDK and the project build component and started testing.
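Building in this setup simply means pointing the standalone MSBuild at the solution file, something along these lines (the path and solution name are placeholders, not our actual project):

"%WINDIR%\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe" OurBizTalkSolution.sln /t:Build /p:Configuration=Release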

We started off by building some shared schema solutions and it worked just fine, but when we tried building a project with some more BizTalk artifacts - schemas, mappings and pipelines - boom!

The “MapperCompiler” task failed unexpectedly. System.IO.FileNotFoundException: Could not load file or assembly ‘Microsoft.VisualStudio.OLE.Interop, Version=7.1.40304.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a’ or one of its dependencies. The system cannot find the file specified. File name: ‘Microsoft.VisualStudio.OLE.Interop, Version=7.1.40304.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a’.

That is a bit odd since “Project Build Component enables building BizTalk solutions without Visual Studio” and yet it complains about a missing Visual Studio assembly?

Unwilling to give up and install Visual Studio, we started investigating what broke the build. If we removed the mappings and just built the schemas and pipelines, it worked. If we added one of the XML-schema-to-XML-schema mappings, it still worked. If we added one of the flat-file-schema-to-XML-schema mappings, boom! I am not sure why a flat file schema is different from an XML schema from the mapper's perspective, but apparently it is.

Eventually we decided to install Visual Studio and BizTalk Developer Tools on the build server and now we are able to build this project as well.

This is not that big an issue for us, since we didn't have to install and configure a full-blown BizTalk with a SQL Server, but the MSDN article is misleading, and it took us a while before we gave up and installed Visual Studio.

 

Posted in: •Integration  | Tagged: •BizTalk 2010  •Continuous Build  •MSBuild 


Bug in BizTalk EdiAssembler?

I got an email from a customer telling me that the UNB segment in our EDIFACT messages was invalid according to the EDIFACT specifications. That sounded rather strange to me, since the UNB segment is generated by BizTalk based on the party settings.

The UNB segment we sent looked like:

UNB+UNOC:3+XXXXXXXXXXXXX:14+YYYYYYYYYYYYY:14+130627:2103+16++++0++1

The highlighted value above (the '0' in the ninth data element of the UNB segment) is called "0031 Acknowledgement request" and is a flag managed on the "Acknowledgements" tab in the BizTalk party settings.

0 = off, 1 = on sounds fair, right? Well, not quite.

According to the EDIFACT specifications valid values of Acknowledgement request are 1, 2 and blank.

http://www.gefeg.com/jswg/cl/v4x/40202/cl6.htm

MSDN documentation states the following:

UNB9 (Acknowledgment request)

Enter a value for the Acknowledgment request. This value can be 0 - ACK not required, 1 - Acknowledgement requested, or 2 - Indication of receipt. Selecting 1 prompts generation of a CONTRL message as functional acknowledgment. Selecting 2 prompts generation of a CONTRL message as technical acknowledgment.

This field can only be a single-digit number. The default value for the field is 0 - ACK not required.

From http://msdn.microsoft.com/en-us/library/bb246092(v=bts.20).aspx

Clearly a mismatch between BizTalk and the EDIFACT standards.

I haven't had the chance to try this myself, but according to this forum post the '0' was introduced in BizTalk 2010:

http://social.technet.microsoft.com/Forums/en-US/92ce0b55-fc0c-4def-a55d-413b9717cc65/edifact-unb-segment-bts-2010

 

Why did Microsoft change this?

 

Posted in: •Integration  | Tagged: •BizTalk  •EDIFACT 


Know your Now – Real-Time Business Intelligence

The majority of today's traditional Business Intelligence (BI) solutions are based on an Extract Transform Load (ETL) approach. This means that data is extracted from a number of sources, transformed into the specific format needed and then loaded into the business intelligence tool for further analysis.

An ETL task can involve a number of individual steps, such as communicating with multiple sources, translating encoded values, calculating derived values, merging data from several sources, computing aggregations and so on. ETL processes also require all raw data to be kept in each source, so that each scheduled ETL run can perform the necessary transformations, for example calculating aggregations and consolidating data. The problem with this approach is the latency it adds before current data can be loaded, displayed and analyzed in the actual BI tools. It is a heavy and expensive process that limits the possibilities of showing data and analyses that are close to real-time.

In today's competitive environment, with high consumer expectations, decisions based on the most current data available will improve customer relationships, increase revenue and maximize operational efficiency. The speed of today's processing systems has moved classical data warehousing into the realm of real-time. The result is real-time business intelligence (RTBI).

To achieve RTBI one has to rethink the ETL process and find ways of tapping into the data streams, feeding business transactions as they occur to a real-time business intelligence system that maintains the current state of the enterprise.

Another important part of the RTBI puzzle is Complex Event Processing (CEP) tools. CEP tools are specialized in analyzing large data streams in real time. They make it possible to perform complex analysis while the data is being read and written to the real-time business intelligence system, rather than as a step in a slow and heavy ETL process.

Real-time business intelligence is also known as event-driven business intelligence. In order to react in real-time, a business intelligence system must react to events as they occur – not minutes or hours later. With real-time business intelligence, an enterprise establishes long-term strategies to optimize its operations while at the same time reacting with intelligence to events as they occur.

RTBI might sound like science fiction to many, but the technology needed to achieve it is available today.

Posted in: •Integration  | Tagged: •Architecture  •BI 


Exposing a REST GET endpoint using BizTalk Server 2013

In the newly released BizTalk Server 2013 we finally got an adapter for the WCF WebHttpBinding, enabling us to expose and consume REST endpoints. BizTalk has long had strong support for SOAP services (the SOAP adapter and later the WCF-* adapters). One major difference between SOAP and REST is that a SOAP message always contains a message body and a message header, making it easy to map the SOAP body to a typed message deployed in BizTalk. A REST message does not necessarily contain a body; in fact, a GET request consists only of a URI and HTTP headers.

So how do we translate this type of request to the strongly typed world of BizTalk?

Let's start by looking at an example where we want to be able to query a database for product prices via BizTalk.

 

If we were to expose a SOAP service to the client, the solution could look something like this (irrelevant implementation details excluded):

 

The client sends a SOAP message to BizTalk. The adapter extracts the message from the SOAP body. The message is passed on to the receive pipeline, where the message type is determined. Since the message type is known, we can perform transformations on it on the send port.

That is old news, bring on the new stuff.

When the client wants to GET something from the server in a RESTful way, it uses the HTTP GET verb on a resource.

GET http://localhost/ProductService/Service1.svc/Products/680 will instruct the server to get product 680. Note that there isn’t any message body, just a verb and a resource identified by a URI.

So how can we consume this in BizTalk?

The WCF-WebHttp adapter does some of the heavy lifting for us and extracts the parameters from the URI and writes them to the message context.

 

The XML in BtsHttpUrlMapping tells BizTalk what URIs and what verbs on that URI we want to listen on. In the URI for the first operation we have defined a placeholder for a variable inside curly braces. That is what enables us to ask for a specific product:

http://localhost/ProductService/Service1.svc/Products/680
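A BtsHttpUrlMapping for this example could look something like the sketch below (the operation name and the exact URL template are assumptions, chosen to match the URI above):

<BtsHttpUrlMapping>
  <Operation Name="GetProduct" Method="GET" Url="/Products/{ProductId}" />
</BtsHttpUrlMapping>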

If we click the Edit button in the Variable Mapping section we can map this placeholder variable to a property defined in a property schema.

So we need to define a property schema for our ProductId property and promote it from our request schema.

 

If we change our WCF-WSHttp port to use the WCF-WebHttp adapter with the above configuration and update our client to send REST requests, the client will receive an error and there will be a suspended message in BizTalk saying "No Disassemble stage components can recognize the data."

As expected the message body is empty and our ProductId is written to the context. The empty message body causes the XMLDisassembler to fail.

What we need is a custom disassembler pipeline component to create a strongly typed message from the context properties written by the adapter.

The interesting part here is of course the custom disassembler component. It has one parameter, DocumentTypeName. This property is where we define the schema of the message it should create.
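A rough sketch of what the core of such a component could look like is shown below. The property schema namespace, the request schema layout and the names are assumptions for illustration, and the usual pipeline component plumbing (IBaseComponent, IComponentUI, IPersistPropertyBag) is omitted for brevity:

using System.IO;
using System.Text;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public class ContextToBodyDisassembler : IDisassemblerComponent
{
    // Configured per pipeline instance, e.g. the fully qualified name of the deployed request schema.
    public string DocumentTypeName { get; set; }

    private IBaseMessage outputMessage;

    public void Disassemble(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        // Read the value the WCF-WebHttp adapter wrote to the context
        // (the property schema namespace below is an assumption).
        object productId = pInMsg.Context.Read("ProductId", "https://ProductService.PropertySchema");

        // Look up the configured schema and derive the message type ("namespace#root").
        IDocumentSpec docSpec = pContext.GetDocumentSpecByName(DocumentTypeName);
        string messageType = docSpec.DocType;
        string targetNamespace = messageType.Split('#')[0];
        string rootNode = messageType.Split('#')[1];

        // Build a body from the context value (the element layout is an assumption for this sketch).
        string body = string.Format(
            "<ns0:{0} xmlns:ns0=\"{1}\"><ProductId>{2}</ProductId></ns0:{0}>",
            rootNode, targetNamespace, productId);

        IBaseMessageFactory factory = pContext.GetMessageFactory();
        outputMessage = factory.CreateMessage();
        outputMessage.Context = pInMsg.Context;   // keep the ProductId and the rest of the context

        IBaseMessagePart bodyPart = factory.CreateMessagePart();
        bodyPart.Data = new MemoryStream(Encoding.UTF8.GetBytes(body));
        outputMessage.AddPart("Body", bodyPart, true);

        // Promote the message type so downstream routing and maps see a typed message.
        outputMessage.Context.Promote("MessageType",
            "http://schemas.microsoft.com/BizTalk/2003/system-properties", messageType);
    }

    public IBaseMessage GetNext(IPipelineContext pContext)
    {
        // Hand out the constructed message once, then report that there is nothing more.
        IBaseMessage msg = outputMessage;
        outputMessage = null;
        return msg;
    }
}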

 

If we run the previous request through this component, we now get a proper, typed message body.

Posted in: •Integration  | Tagged: •BizTalk 2013  •REST 


The Cloud Series – Part 3

This is my third post in this series, and this time let's talk about development and cloud native applications - to be more precise, web applications.

I had a very interesting chat with a friend/colleague of mine the other day about an application they had. There were some problems with their current setup, one big problem being scaling, so they decided to rebuild the solution and host it in the cloud. The interesting part is that the solution in my head differed a lot from what they created, so I would like to take the opportunity to compare them here.

There are two ways of running web applications in Azure. First we have the traditional way, where we let Azure host our own virtual machine (from now on referred to as a VM), which means that we can create a VM running any Windows or Linux version, configure it as we want, install whatever software we want, tweak the IIS/Apache servers and so on. Then we have the "cloud native" way, where we use web workers in Azure and just upload the code, and Azure fires it up. Did you know that you can deploy source code directly from Git? Just a simple remote push command and Azure will compile the code for us. How cool is that! When we use web workers we don't need to install or configure anything in the OS or IIS environment - it just works!

So what does this mean?

- A VM gives us the flexibility to tweak and configure it to match our needs 100%. This is the IaaS (Infrastructure as a Service) part of Azure, and it relieves us of all hardware, network and similar concerns, since Azure and Microsoft handle that for us.

- Web workers, the "cloud native" way, use PaaS (Platform as a Service), which is the next level of service. It relieves us of all of the above plus all OS handling: no installation, no configuration, no updates - nothing! The promise is "take care of your application, we do the rest", yet we are still able to configure IIS or Windows, install services and so on in the startup tasks. It lets us focus on what we developers do best, developing applications, and hopefully gives us more time to improve and maintain the applications instead of handling boring patching, OS updates and other maintenance!

When choosing what to run and how, there is a lot more to think about than the areas mentioned above, since we sometimes need to install third-party components in the OS, or need special configurations and setups in our IIS/Apache installation, and so on. Whenever we need to use or configure things outside of the application itself, a virtual machine might be the better choice.

So now that we have a little background, let's set up the context. The application is a plugin that can be used anywhere on a web page; the idea is to create dynamic video content that gives the user the possibility to start streaming a short video after clicking the "start" button. For fun, let's say that the application is used on a big Swedish newspaper site, www.aftonbladet.se, and that we got it plugged in on one of the top news stories. Studies of similar cases show that the load then looks something like the chart below. (The chart shows average values per hour over the day; the values are completely made up, but I want to show how big the difference is between the biggest peak and the smallest, and give us something to plan around.)

Now that we have a scenario, let's solve it in two different ways: one using virtual machines and one using only Azure cloud services. Basically, the solution would look like this.

To start off we need a web application that handles this:

  • provides the site with the content
  • gives access to the movie
  • presents the movie

In the background we would need something like this:

  • Place to store movies
  • Database to store information/logs/configurations etc.

Let's set this up with virtual machines: we use Windows Server and IIS on our web application machines. As a backend we set up virtual machines with clustered NoSQL databases. We use the Azure load balancer to balance the traffic between our web servers, and Azure Media Services to provide the movies. It could look something like this (the orange boxes are the parts we are responsible for):

With this setup we could easily add new virtual machines wherever we hit a threshold first, either at the database or at the web application. This gives us a good setup with a lot of scaling potential, flexibility and possibilities. But we need to keep the OS, IIS, the NoSQL server and so on updated, and upgrade to new versions with all the problems that come with that. Sure, some of these things happen automatically, but the responsibility is ours. We are also responsible for making sure that the machines are healthy and run without errors; Azure only guarantees that the virtual machine runs, not that the image and the software on it work correctly.

In this setup we would store our movies in Azure Media Services, and that way we can provide streaming with very little effort, since Azure does all the work! Logging each view and request could be heavy and should be done asynchronously; by using, for example, Redis we can even out the peaks and protect our databases from being overloaded. With Redis and a Windows service that regularly reads from the list/queue we create, we can write our logs at our own pace, and since it is not critical data this is not a problem. But we have another problem: what if the host that runs this virtual machine crashes and the cause is a corrupt hard drive? Then the local data is lost, and with it all the logs that were stored on that node. This is not ideal.

On the front end we install and set up the OS environment, install and configure IIS, and install our application. On the backend we install the OS, Redis and the NoSQL server, create the cluster and configure Azure Media Services. After this we can just publish new machines that start working right away.

So let's do this the "cloud native" way! Here we use web workers for the web application, Azure Table Storage as our NoSQL database, and a worker role for logging, with a Service Bus queue to even out the peaks. In this setup too we store our movies in Azure Media Services and can provide streaming to clients in the same way. The solution could look something like this (the orange boxes are the parts we are responsible for):

So what's going on here? Well, we start off in the same way as before with the Azure load balancer, but then we skip everything that comes with hardware, OS, IIS and so on and use web workers for the web applications. We use Azure Table Storage for data, and for logging we push entries onto a Service Bus queue to the worker role, which logs at its own pace, just as with the Redis setup above, but this time with a durable queue that guarantees delivery. So we develop the software, configure the Service Bus queue, Table Storage and Media Services, and then we are up and running!
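As a sketch of the receive side of that logging worker role, using the Microsoft.ServiceBus.Messaging client library of the time (the connection string and queue name are placeholders, and the Table Storage write is left out):

using System;
using Microsoft.ServiceBus.Messaging;

public class LogWorker
{
    public void Run(string connectionString)
    {
        // Queue name is an assumption for this sketch.
        QueueClient client = QueueClient.CreateFromConnectionString(connectionString, "viewlogs");

        while (true)
        {
            // Wait up to 30 seconds for the next log entry.
            BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(30));
            if (message == null) continue;

            try
            {
                string logEntry = message.GetBody<string>();
                // Write the entry to Table Storage at our own pace (omitted here).
                message.Complete();   // done - remove it from the queue
            }
            catch (Exception)
            {
                message.Abandon();    // put it back on the queue so it is not lost
            }
        }
    }
}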

I haven't covered scaling since it isn't the point of this post, but like everything else it is straightforward, and there are complete solutions ready to use that only need configuration; look at WASABi (the Autoscaling Application Block) for more information.

To wrap this up: take a good look at the features that exist in these cloud platforms, because they are really useful and will save you a lot of time. Then make sure you develop for the cloud. Since a lot of the applications we have built in the past ran on highly reliable hardware, it can be a challenge to start thinking "it will crash at some point" and to develop applications that are crash friendly. Sometimes we know the application is going down (updates, machine restarts and so on), but other times a hardware failure causes the crash and it happens in an instant. Work in smaller steps, use queues to shorten request calls, and make sure that data doesn't vanish.

With HTML5 and Azure Media Services we managed to create video distribution in just a few minutes, and only a tiny bit of HTML is needed.
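A minimal sketch of that markup, with the Azure Media Services locator URL as a placeholder:

<video controls width="640" height="360">
  <!-- The src is a placeholder for the locator URL handed out by Azure Media Services -->
  <source src="http://yourmediaservice.origin.mediaservices.windows.net/locator/movie.mp4" type="video/mp4" />
  Your browser does not support the video element.
</video>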

The source URL above is the actual path to our file, distributed by Azure Media Services, which means that we don't need to handle anything except providing the HTML and a link. Split this up and add some JavaScript that calls our WCF service with the logging info, and we have a really light and fast video service.

I think this is awesome, and it really speeds up the entire process of creating robust, dynamically scaling applications. Even though it looks like we are bound to communicate over HTTP with our cloud native application, that's not entirely true, because with worker roles we can communicate over any protocol we want. We just need to specify the endpoints in the Azure management console.

Hope this fires up some ideas, because we can achieve a lot more a lot faster with Azure.

Posted in: •Integration  | Tagged: •Architecture  •Azure  •Cloud  •Development  •HTML5 


BizTalk global pipeline tracking disabled unexpectedly

Many of our customers using Microsoft BizTalk Server also use Integration Manager to log messages. Integration Manager collects message bodies and message contexts from the BizTalk tracking database. At some point we noticed that messages were missing in the log tool, and in the tracking database as well. We could not understand why, but after some investigation we noticed that the global tracking of the Microsoft default pipelines was disabled, which meant that messages were not tracked.

Fine, we knew why the messages were not tracked, but why had the global tracking been disabled? After further investigation we found that the problem occurred after modification of existing assemblies.

Not all assemblies though, only the ones containing maps used by a port with a Microsoft default pipeline.

These are the pipelines where the tracking was disabled:

  • Microsoft.BizTalk.DefaultPipelines.PassThruReceive
  • Microsoft.BizTalk.DefaultPipelines.PassThruSend
  • Microsoft.BizTalk.DefaultPipelines.XMLReceive
  • Microsoft.BizTalk.DefaultPipelines.XMLSend

We wrote a PowerShell script to re-enable the tracking in an easy way. This is not watertight though: if host instances are still running, messages can go untracked between the modification of the assembly and the run of the script. We reported this to Microsoft, who were able to reproduce the fault, and hopefully we will soon have a fix in an upcoming BizTalk CU.

Posted in: •Integration  | Tagged: •BizTalk  •Pipeline Components  •Tracking 


The Cloud series – Part two

Last time I talked a bit about the cloud in general, and now it's time to talk about one of the features and possibilities it brings: the Azure Service Bus. I'm going to describe it while working through a common problem in the integration world.

We have problems in our daily business that I would like to address, and show how they may be solved with the help of Azure. I, for one, hate firewalls - not that they are bad, but it is so common that they are in the way, and the person who can fix them for you is always busy and "will do it later". How many times have you been stuck testing, deploying or upgrading just because a firewall was blocking your way? I find this particularly annoying since it usually means rescheduling whatever I was doing and delaying it by at least a few days. Some projects get delayed by weeks or months (depending on system releases, holidays and so on). So what can we do about it? The obvious part is to be prepared: talk to the people who handle the firewall in time and verify that they actually did open the port. This isn't always possible; often the rights are tied to a service account, or you depend on someone on the other side to verify that it's open, and so on. So it becomes part of the test session, which can be very annoying.

So is there a way the cloud could help us?

Well, not directly, and it doesn't solve all issues with file shares and the like, but it has given us something called the Azure Service Bus. This is a very advanced bus with a lot of techniques for efficient transport of messages and much more, but for now let's see it as a queue hosted in the cloud. The queue comes with durable storage, which basically means that the message is guaranteed to be stored safely and won't be lost until you collect it. One of my favorite things about the Service Bus is that you communicate over HTTP and HTTPS! So no problems with firewalls, since they almost always allow HTTP and HTTPS traffic, and it's easy to test: is it possible to browse the web on the computer/server? Awesome! Then it is possible to use the Service Bus!

A simple yet powerful setup would look something like the picture below.

Here we can send and receive files without any firewall problems. What's even better is that BizTalk 2013 comes with an adapter for the Service Bus (SB-Messaging), so you only provide two things in the adapter: the queue name and the credentials. After that we can start receiving and sending messages! This makes it easy to replace old FTP or VPN file-share solutions in BizTalk, and it's just as easy to write a simple service to install on a server and start sending messages over the Service Bus. Wouldn't it be great to skip those VAN providers? I think so, and we can do it!
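A sketch of such a simple sender service, using the Microsoft.ServiceBus.Messaging client library of that era (the connection string, queue name and payload are placeholders):

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

public class FileSender
{
    public static void Main()
    {
        // Placeholders - in a real setup these come from the Azure portal.
        string connectionString = "Endpoint=sb://yournamespace.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...";

        // Force the client to use HTTP-based connectivity so only outbound web traffic is needed.
        ServiceBusEnvironment.SystemConnectivity.Mode = ConnectivityMode.Http;

        QueueClient client = QueueClient.CreateFromConnectionString(connectionString, "files");

        using (var message = new BrokeredMessage("<Invoice>...</Invoice>"))
        {
            message.Properties["FileName"] = "invoice_001.xml"; // custom property, useful for routing or logging
            client.Send(message);
        }
    }
}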

Is that all?

No, not at all. The Azure Service Bus package also gives us topics, subscriptions and Notification Hubs. I'm going to skip Notification Hubs, but you can read more about them here.

Topics and subscriptions

You can think of them as a crossing (the topic) and the exits from that crossing (the subscriptions). They behave quite differently from a normal crossing, since a message sent to the topic can take either one exit or several, meaning that the message is copied to each exit with a matching subscription, as demonstrated in the picture below.

And what do we gain from this? Well, first of all we can connect a queue directly to the topic, and a subscription can be connected to a queue, which means that we can easily build advanced and powerful routing. This pattern is used in many places, for example in SOAP web services, where an XML message is sent to an HTTP address and routed to a function in the service based on the method tag in the message. In the same way, topics can be used to route messages to the correct queue, to be processed by the correct service or sent on to a partner.
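As a sketch of how such routing could be set up in code with the Microsoft.ServiceBus client library (the namespace, topic, subscription names and filter properties are all assumptions for illustration):

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

public class TopicRoutingSketch
{
    public static void Main()
    {
        string connectionString = "Endpoint=sb://yournamespace.servicebus.windows.net/;...";

        // Create the topic and two filtered subscriptions (the "exits" of the crossing).
        NamespaceManager ns = NamespaceManager.CreateFromConnectionString(connectionString);
        if (!ns.TopicExists("orders")) ns.CreateTopic("orders");
        if (!ns.SubscriptionExists("orders", "sweden"))
            ns.CreateSubscription("orders", "sweden", new SqlFilter("Country = 'SE'"));
        if (!ns.SubscriptionExists("orders", "partners"))
            ns.CreateSubscription("orders", "partners", new SqlFilter("SendToPartner = 1"));

        // Send a message to the topic; it is copied to every subscription whose filter matches.
        TopicClient topicClient = TopicClient.CreateFromConnectionString(connectionString, "orders");
        using (var message = new BrokeredMessage("<Order>...</Order>"))
        {
            message.Properties["Country"] = "SE";
            message.Properties["SendToPartner"] = 1;
            topicClient.Send(message);
        }

        // A receiver then listens on its own subscription only.
        SubscriptionClient swedenClient = SubscriptionClient.CreateFromConnectionString(connectionString, "orders", "sweden");
        BrokeredMessage received = swedenClient.Receive();   // handle and Complete() as usual
    }
}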

As the example shows, we can do some awesome routing and direct traffic in the cloud straight to the correct destination, which means that you don't have to take it into your own network through, for example, BizTalk and then let BizTalk route it for you - it's all done in the cloud, on the bus! How great is that!

As I mentioned earlier, these two scenarios don't take care of every firewall problem, but they can certainly help us, and when we plan and create new flows they are seriously worth considering! And from what I know, Microsoft is planning to enhance this area even further with BizTalk adapters, mapping functionality and more, to make it the awesome integration platform it has the capabilities to be!

Opportunist and visionary that I am, I believe that the market and the possibilities for integration will grow, and that new integration demands will come. The Azure Service Bus will most certainly play a big part in this, and I just love the idea of working with this platform in the future!

I just love it!

Posted in: •Integration  | Tagged: •Cloud  •Cloud Integration  •Integration  •Service bus  •Windows Azure 


Your year 2013 developer resume!

Historically, the written resume has been very important when recruiting new co-workers. The resume has been the main source for understanding what a person knows and what their experience is. But since it is just a piece of paper, and we are looking for a good craftsman, there has been a mismatch. The "do they really know the techniques listed on the resume?" feeling has always nagged in the back of our minds during hiring processes.

One way people have dealt with this is to put candidates through different coding exercises, such as the FizzBuzz test.

“Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”.”

Tim Rayburn even wrote a BizTalk version of the FizzBuzz test, http://timrayburn.net/blog/fizzbuzz-for-biztalk/ - can you solve it in 5 minutes? ;)
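For reference, the classic version takes only a few lines in C# (a minimal sketch):

using System;

class FizzBuzz
{
    static void Main()
    {
        for (int i = 1; i <= 100; i++)
        {
            if (i % 15 == 0) Console.WriteLine("FizzBuzz");      // multiples of both three and five
            else if (i % 3 == 0) Console.WriteLine("Fizz");
            else if (i % 5 == 0) Console.WriteLine("Buzz");
            else Console.WriteLine(i);
        }
    }
}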

Even though the FizzBuzz test definitely has its place in an interview situation, it doesn't give a deeper understanding of the different skills a person might have, such as code structure, architecture, cloud solutions, API handling, source control and so on. To really understand how good or bad a person is, we need to see more code, more actual work! That's where platforms such as CodePlex, GitHub and Stack Overflow play a role. If a person I'm interviewing can show a project on GitHub with decent code structure and good code quality, and that demonstrates they understand version control, it beats almost any resume when recruiting for a technical position!

But hey, what about culture fit?

Culture fit is of course extremely important! It's actually so important that if I had to choose between a technically skilled person with the wrong culture (for example arrogance, anti-social tendencies, not being a team player), and a less technical person with the will to learn and an otherwise excellent culture fit, I'd probably suggest we go with the latter. But of course we're always looking for people who have a combination of both!

Call to action

So what should you, as a developer, do to prepare for the future?

1. Start a blog. Every developer should have a blog. A blog is a good idea for a number of reasons: it makes you a better writer, it leaves a trail of breadcrumbs showing what you have worked with and how you solve problems, it hopefully shows that you are skilled within certain areas, and it works as your very own extended brain, since you can always go back and see how you solved things in the past.

2. Start an open source project. As a developer you'll almost always work on some code that can be shared. And sure, putting code out in public can be both an internal political struggle and extra work when it comes to cleaning things up so that anything that shouldn't be public isn't.

3. Show yourself. Try to do some public speaking, first internally at your company and then at your local user group and so on. It will not only make you better at public speaking, it will also show that you're skilled and, not least, that you have a passion for what you do!

Posted in: •Integration  •Recruitment  | Tagged: •culture fit  •FizzBuzz  •recruiting  •resume 


The Cloud series

I'm going to do a follow-up on our Enterprise Architect Ulf Domanders' discussion (Sharing is Caring) that led us to the cloud, and since this is a really big topic there will be a series of posts about it. I'll start off with some background before moving on to more specific features.

All of you have probably heard about the cloud and what a powerful platform it is. So what is it, and why is it going to revolutionize things? Change the way we develop systems? Change how we interact with them and what we expect from them? I'll talk briefly about it here.

I'm going to focus my cloud talk on Windows Azure. Azure has a lot of features and functions: databases, virtual machines, web hosting, cloud service hosting, buses and more. Azure is a Platform as a Service (PaaS), and just like the more commonly seen Software as a Service (SaaS), the whole idea is to use what you want and pay for what you use, no more, no less. It also has to be said that we don't need to invest anything to get started; right now (at the time of writing) you even get the first 90 days for free.

So if I sign up for this, what do I get? Access to an enormous computer park with almost unlimited resources! Think about that for a second: unlimited resources - that's sick! What "normal" IT department can give us that at a reasonable price? I would say none. You even get the wonderful possibility of demanding more power at any time (a new computer, more web applications to handle that new peak on your website, more database nodes, etc.); just ask for it and you will get it! No orders to place, no shipping times, no installing, no configuration to be done. It's all done and ready to go within minutes! And when that peak is over? Just release it and stop paying for it. We can now have a slim (low-cost) setup that can easily and quickly be scaled to take a hundred or tens of thousands of times the normal load, or more, for just a short while or for as long as we like, and we only pay for it while we use it.

But is the cloud for everything? No, it's not, and we even need to consider whether the system we plan to migrate to the cloud will benefit from the cloud features at all. Since the cloud is built for extreme situations that normal systems or solutions never had to consider, most developers need to learn how to build or update systems so that they can benefit and scale well from these features. I would say that there are still a lot of developers who don't use multithreading, asynchronous calls or any other form of parallel or task-based programming, probably because it can be quite complex and hard to understand, even though there are very good frameworks out there that simplify these tasks. In the same way, the cloud simplifies the extreme tasks of controlling hundreds of clustered computers, buses, applications going up and down, virtual machines, scaling up and down and so on. It is still up to us developers to build systems in a way that lets them truly benefit from all these fantastic features.

Just take the example Ulf gave: 2,600,000 hits found in 0.26 s! For me, as a developer at a small company, it used to be unimaginable to get the resources needed to perform such a massive search across billions upon billions of records. That result comes from an extremely optimized and very good implementation of the MapReduce pattern (MapReduce originates from a paper published by Google describing how they process the massive data sets behind their search engine) running on Google's own cloud platform. The cool thing is that we can actually access this kind of power today! We just need to start thinking CLOUD and start developing cloud-native products. Just think about what we can achieve, and how much easier hosting, scaling, testing and developing with high customer interaction will be.

Mobility will remain a key word in the future, which will continue to push systems onto the web, and since most web systems come with web services, integration will become more intuitive and even more important. We need to do more with less! We need to be able to use our favorite web shop, ERP system, WMS system, invoice sender and so on, and connect them in a few easy steps. Once that is done the systems will talk to each other and, with the right setup, do most of the work for you!

Nerd that I am, I love the cloud and all the fantastic features it comes with: the possibilities it gives us and the new, unknown paths we will cross. Since we are only at the beginning of using and understanding the possibilities of such great power, anything can happen! It will most certainly revolutionize software development and everything that goes with it, and the demand for and possibilities of system integration will grow and grow!

I just love it!

 

Posted in: •Integration  | Tagged: •Cloud  •Cloud Integration  •Future  •Innovation  •Integration  •Windows Azure