Enterprise Application Integration


How to start working with us.

Geolance is a marketplace for remote freelancers who are looking for freelance work from clients around the world.


Create an account.

Simply sign up on our website and get started finding the perfect project or posting your own request!


Fill in the forms with information about you.

Let us know what type of professional you're looking for, your budget, deadline, and any other requirements you may have!


Choose a professional or post your own request.

Browse through our online directory of professionals and find someone who matches your needs perfectly, or post your own request if you don't see anything that fits!

Enterprise Application Integration (EAI) uses software architecture principles to integrate applications across an enterprise's computing environment.


Enterprise application integration (EAI) is a software architecture approach that facilitates data exchange between applications. EAI applies software architecture principles, such as client/server and n-tier architectures, to integrate individual applications into a coherent enterprise-wide system.

EAI systems implement messaging patterns to support core capabilities such as reliable delivery, security, routing, transformation, and correlation.

Companies like Microsoft use EAI to integrate their products, or custom in-house solutions, into their existing infrastructure and gain better visibility into the information being exchanged. For example, when using Microsoft Exchange Server, administrators can retrieve messages from various mailboxes into a single mailbox store using the Mailbox Merge Wizard. The same goes for integrating disparate systems to create automated business processes [1].

EAI is sometimes contrasted with Enterprise Information Integration (EII), which implements a data-sharing model across multiple applications and systems. EII implementations typically rely on enterprise application integration as a core tenet of their architecture. In practice, an EAI platform could facilitate the sharing of data across multiple applications by relying on a standard data model implemented using other technologies, such as business process and supply chain management suites or service-oriented architectures.

Have trouble integrating all of your business applications?

Geolance can help! We provide an enterprise application integration platform that makes connecting all your systems easy. With our platform, you can automate business processes and improve communication between applications.

Our platform is based on open standards like XML, SOAP, and WSDL, so you can be sure it will work with any system or programming language. Plus, we offer a variety of features that make point-to-point integration easy and efficient. Sign up for a free trial of our enterprise service bus application integration platform today!

Categories of Integration Patterns

Message Queues: Transfer messages through queues between two endpoints. Message queuing is a powerful technique that provides a first-in-first-out decoupled exchange of messages in an environment that guarantees delivery even when endpoints are down. This is especially useful in cases where an application is offline or disconnected from the mainframe.
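The FIFO decoupling described above can be sketched with a minimal in-memory queue. This is a stand-in for real message-queue middleware; the class and method names are illustrative, not any product's API:

```python
from collections import deque

class MessageQueue:
    """Minimal FIFO queue: producer and consumer never interact directly."""
    def __init__(self):
        self._messages = deque()

    def send(self, message):
        # The producer returns immediately; the consumer may be offline.
        self._messages.append(message)

    def receive(self):
        # Messages come out in the order they were sent (first in, first out).
        return self._messages.popleft() if self._messages else None

queue = MessageQueue()
queue.send({"order_id": 1})
queue.send({"order_id": 2})
# A consumer that connects later still sees the messages, in order.
first = queue.receive()
```

The key property is that the sender never blocks on, or even knows about, the receiver; the queue absorbs the messages while the receiving endpoint is down.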

Reliable Delivery: A mechanism that preserves data integrity by preventing data loss. It guarantees the delivery of messages, even when components fail while processing a message. This pattern ensures all messages are processed exactly once and correctly.

Guaranteed Messaging: A mechanism for prioritizing message delivery so that the highest-priority message is sent first, followed by lower-priority ones. Guaranteed messaging provides reliable delivery with either FIFO (first-in, first-out) or prioritized queuing for managing different types of requests sent to different service endpoints simultaneously [2].

Correlation: Combines multiple unrelated message exchanges into a single, related exchange.

Independent message exchanges are combined so that they appear to the receiving application as a single logical unit of work. Correlation is sometimes called "sync over async" because it lets an application notify all of its pending asynchronous operations when the end user has requested some result (causing them all to begin executing in parallel). It can also be used for transaction control by using correlation identifiers to join multiple messages into either one atomic transaction or two serialized transactions [3].
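A minimal sketch of how correlation identifiers match asynchronous replies back to the requests that caused them (class and field names are illustrative):

```python
import uuid

class Correlator:
    """Matches asynchronous replies to their originating requests using a
    correlation identifier carried on every message."""
    def __init__(self):
        self._pending = {}

    def send_request(self, payload):
        correlation_id = str(uuid.uuid4())
        self._pending[correlation_id] = payload
        return correlation_id  # the reply must echo this id back

    def on_reply(self, correlation_id, reply):
        request = self._pending.pop(correlation_id, None)
        if request is None:
            return None  # unrelated or duplicate reply: ignore it
        # Request and reply now form one logical unit of work.
        return {"request": request, "reply": reply}

c = Correlator()
cid = c.send_request({"op": "get_balance"})
# ... the reply arrives later, possibly interleaved with other replies ...
unit = c.on_reply(cid, {"balance": 42})
```

Because every reply carries the identifier of its request, many exchanges can be in flight at once and still be joined correctly on arrival.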

Routing: Maps information between EAI systems by specifying where data should flow based on conditions such as user ID, time, or the volume of data sent. This pattern is typically used when applications are not aware of each other and must be told where to send information in order to exchange it.
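Content-based routing can be illustrated with a small dispatch function; the conditions and destination names below are hypothetical:

```python
def route(message, routes):
    """Content-based router: picks a destination from conditions on the
    message itself, so senders need not know about receivers."""
    for condition, destination in routes:
        if condition(message):
            return destination
    return "dead-letter"  # no rule matched: park the message for inspection

# Illustrative routing table: predicate -> destination endpoint name.
routes = [
    (lambda m: m.get("user_id", 0) > 1000, "premium-service"),
    (lambda m: m.get("size", 0) > 1_000_000, "bulk-service"),
]

dest = route({"user_id": 2000, "size": 10}, routes)
```

In a real EAI platform the routing table lives in the middleware's configuration, so destinations can change without touching the sending applications.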

Transformation: Transforms data received from one system before sending it to another. The receiving application can consume the transformed data without knowing its origin. This pattern is used when the receiving application cannot work with the sent message because it uses different formats, protocols, etc., that the receiver does not support.

Integration Architecture

The traditional EAI architecture is divided into two sub-architectures: Messaging Sub-system and Middleware Sub-system.

The Messaging Sub-system enables heterogeneous applications to communicate through message-oriented middleware using point-to-point (P2P) communications. These communications can be synchronous or asynchronous, depending on the needs of the applications.

The Middleware Sub-system is responsible for receiving messages, processing them, transforming them if necessary, and passing them on to application servers using publish-subscribe communications. The middleware sub-system also handles routing messages to different endpoints based on subscribing applications' registration information [4].

Figure 1: EAI Sub Systems

Another characteristic of EAI platforms is that they are often "stateless": each incoming message carries all the information required to process it. This differs from traditional integration architectures, which rely on "stateful" connections between messaging endpoints.

This type of architecture can be represented as a service bus architecture where service endpoints depend on the bus to relay messages. The endpoints do not communicate directly but only through the intermediary of the service bus.
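The bus-relay idea can be sketched as a tiny in-memory publish-subscribe broker (the names are illustrative; a real service bus adds persistence, security, and routing on top of this shape):

```python
from collections import defaultdict

class ServiceBus:
    """Endpoints never call each other directly: they publish to the bus,
    and the bus relays each message to every subscriber of its topic."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # The publisher knows only the topic name, never the receivers.
        for handler in self._subscribers[topic]:
            handler(message)

bus = ServiceBus()
received = []
bus.subscribe("orders", received.append)                       # endpoint A
bus.subscribe("orders", lambda m: received.append({"audited": m}))  # endpoint B
bus.publish("orders", {"id": 7})
```

Adding a third endpoint is just another `subscribe` call; no existing publisher or subscriber has to change, which is the point of the intermediary.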

Multi-Tenant EAI Platform

Multi-tenant architecture, in this context, means a single overall environment that hosts numerous different customers or projects, managed by one administrator or a group of administrators [5].

Figure 2: Multi-Tenant Architecture Overview

For example, suppose you were building an EAI platform for your organization using Azure Service Bus Relay. In that case, you could use one instance of this technology and configure it so that multiple customers utilize it, thus creating a multi-tenant solution. This type of configuration also allows for scaling out across different servers and grouping certain services into a single logical unit.
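A toy model of tenant isolation on shared messaging infrastructure; the tenant IDs and method names are hypothetical:

```python
class MultiTenantBus:
    """One shared environment hosting several tenants: every message is
    stamped with a tenant id, and each tenant sees only its own data."""
    def __init__(self):
        self._queues = {}  # tenant id -> that tenant's partition

    def send(self, tenant_id, message):
        self._queues.setdefault(tenant_id, []).append(message)

    def receive_all(self, tenant_id):
        # A tenant drains only its own partition of the shared infrastructure.
        return self._queues.pop(tenant_id, [])

bus = MultiTenantBus()
bus.send("customer-a", "invoice-1")
bus.send("customer-b", "invoice-2")
a_msgs = bus.receive_all("customer-a")
```

One set of administrators operates the single `bus` instance, while the per-tenant partitioning keeps each customer's traffic invisible to the others.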


Scenario 1: All-New Projects Will Be Multi-Tenant - Technical staff cannot migrate existing projects to this new environment.

Scenario 2: All Staff Can Migrate Existing Projects - No additional migration restrictions beyond what is currently present in the organization.

Scenario 3: System Administrators Cannot Migrate Existing Projects, but Sales/Marketing/Support/Facilities Can After Being Educated About The Process - Technical staff can migrate existing projects because it is something they know how to do and have done before, but the other groups want all their messaging-related operations centralized so they no longer have to worry about them.

Scenario 4: System Administrators Cannot Migrate Existing Projects, and Sales/Marketing/Support/Facilities Are Not Eligible To Do So. Again, this is a scenario where the organization wants tighter control over who can do what, or organizational rules may limit certain groups from completing specific tasks.

Scenario 5: The System Administrators Can Migrate Some Projects, But Other Projects Will Be Multi-Tenant. Some projects cannot be moved because of technical reasons or other pre-existing conditions that prohibit them from being migrated yet. However, the system administrators have been given access to migrate those projects they deem as possible candidates for migration.

All Migrated Projects Will Be Multi-Tenant

In this example, all projects will be migrated to a new multi-tenant EAI platform. To do so, get the "EAI Migrate" package from Microsoft AppSource, which can be used for migrating messaging endpoints over to Service Bus Relay [6]. This package is designed to move on-premises applications toward cloud-hosted ones and allows you to read/write messages and list subscriptions of interest. You may also need the appropriate client libraries for each type of endpoint you are migrating ("magic" libraries) if they are not already included in your current solution.

Please note: Due to licensing issues with some clients, make sure you have the correct licenses before using those clients, or you may receive licensing errors.

Step 1: Install and Configure Service Bus Relay

First, we need to install and configure Azure Service Bus Relay. You can do so by going into the Azure Admin Portal and clicking on "Create A New Project" under App Services. From there, create a new Service Bus Relay instance with whatever name you want (Figure 3).

Figure 3: Creating a New Service Bus Relay Instance

Next, we will need to install the Service Bus Relay SDK for whatever platform we are using and install it locally to manage our environment more efficiently (i.e., no PowerShell remoting needed). To do this in Visual Studio 2015, go to File -> New -> Project -> Visual C# -> Cloud -> Azure Service Bus Relay. After that, right-click on the project and select "Manage NuGet Packages," where you will want to install the package found here: https://www.nuget.org/packages/Microsoft.Azure.ServiceBus.Relay/1.0.0-beta5.

After completing both of these steps, open up your Service Bus Explorer by clicking on View -> Other Windows -> Service Bus Explorer (Figure 4).

Figure 4: Opening the Service Bus Explorer [6]

Once opened, you should see a screen like this (Figure 5):

Figure 5: Seeing Our Newly Created Namespace [7]

At this point, we can create and configure our target namespace and relay namespace. To do so, click on the "Relay" menu item and select "Create Relay Namespace." Please note: Azure Service Bus namespaces are independent of one another, so you must also create a separate target namespace to migrate project(s) over to (Figure 3).

Name your new relay namespace whatever you want, but make sure to leave the SSL toggle set to false, because we will not be using an SSL certificate for this particular configuration (Figure 6).

Figure 6: Creating a New Relay Namespace [8]

Next, we need to add an inbound rule that allows Service Bus Relay traffic, which we will use later to enable our application to start receiving messages.

Figure 7: Creating a New Rule for Inbound Traffic

To do so, click on the "Security" menu item and select "Manage Rules" (Figure 7). Once there, click on "Add A Rule," and you will see an interface similar to Figure 8 below:

Figure 8: Adding a New Rule for Azure Service Bus [9]

Make sure that you give your rule an appropriate name, select TCP as the Transport Type, and enter *.outlook.office365.com as the Endpoint. You can use whatever port you want (the standard one would be 443), but I like using non-standard ports just in case I forget what it is later. Also, make sure you write down the Service Bus Namespace, Endpoint, and Shared Access Key, as we will need this information later (Figure 9).

Figure 9: Configuring a New Inbound Rule for Azure Service Bus Relay

Once all of these items have been completed, click "OK" to save your configuration.

Step 2: Migrate Messaging Endpoints

To migrate our messaging endpoints, we will first need to create a new .NET solution that contains all of our projects. You can do so by right-clicking on the solution and selecting Add -> Existing Project..., then finding all of your projects in your existing solution and adding them to the new one (Figure 10).

Figure 10: Adding Existing Projects to Our New Solution [12]

After that is complete, we need to add a reference to the Azure Service Bus Relay SDK and the Microsoft.Azure.ServiceBus NuGet package (Figure 11).

Figure 11: Adding References to Our New Solution [13]

Once this has been completed, we need to configure our service bus relay bindings in the OnStart method of the web role by creating a new ServiceBusRelayBinding element and configuring it with our target namespace, shared access key, and relaying capability (Figure 12).

Figure 12: Configuring Our Target Namespace Binding in the OnStart Method

Here is what your binding should look like:

```csharp
using System;
using System.Configuration;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Relay;
using Microsoft.WindowsAzure;

namespace MigratingWebRole
{
    public partial class OnStart : System.Web.Hosting.IRequiresRequestServices
    {
        public static void InitializeService(string targetNamespace, string sharedAccessKey, Uri relayUri)
        {
            var binding = new ServiceBusRelayBinding(sharedAccessKey);
            binding.Endpoint.Behaviors.Add(new EndpointBehavior
            {
                RelayClientAuthenticationType = AuthenticationTypes.SharedSecret,
            });
            binding.Endpoint.Behaviors.Add(new TransportClientEndpointBehavior());

            var serviceBusRelaySettings = new ServiceBusRelaySettings();
            serviceBusRelaySettings.Namespace = targetNamespace;
            serviceBusRelaySettings.RelayUri = relayUri;
            serviceBusRelaySettings.SharedAccessKeyName = ConfigurationManager.AppSettings["ServiceBus.Secret"];
            serviceBusRelaySettings.SharedAccessKey = sharedAccessKey;

            binding.Initialize();
            Services.AddSingleton<IServiceBusRelayFactory, ServiceBusService>(serviceBusRelaySettings);
        }
    }
}
```

Now, we need to implement the IServiceBus interface and override its Initialize method to configure our message factory for Azure Service Bus Relays (MigrationSenderMessageFactory) and its Receive pipeline (MigrationRePipeline), as well as create the queue that our messages will be published to (ISendingQueue; Figure 13).

Figure 13: Implementing IServiceBus and Initializing Its Objects

```csharp
using System.ServiceModel;
using Microsoft.ServiceBus;
using Microsoft.WindowsAzure.ServiceRuntime;

namespace MigratingWebRole
{
    public class ServiceBus : IServiceBus
    {
        private readonly MessageFactory migrationSenderMessageFactory = new MigrationSenderMessageFactory();
        private readonly IRelaySettings serviceBusRelaySettings;
        private MigrationReceivePipeline migrationRePipeline;

        public ServiceBus(IRelaySettings relaySettings)
        {
            this.serviceBusRelaySettings = relaySettings;
            var factoryCredentialsProvider = new InstanceProfileCredentialsProvider("app-name", "username");
            this.serviceBusRelaySettings.SharedAccessKey =
                factoryCredentialsProvider.GetSharedAccessSignature(ConfigurationManager.AppSettings["ServiceBus.Secret"]);
        }

        public override void Initialize()
        {
            base.Initialize();
            migrationSenderMessageFactory.ReceiveEndpointName = serviceBusRelaySettings.Namespace + ".";
            migrationSenderMessageFactory.EndpointName = serviceBusRelaySettings.RelayUri;

            var queueAuthorizationRule = new QueueAuthorizationRule(
                serviceBusRelaySettings.Namespace, null, serviceBusRelaySettings.SharedAccessKey);
            migrationRePipeline = new MigrationReceivePipeline();

            // Create a new queue if it does not exist.
            var queue = new MessageQueue(serviceBusRelaySettings.RelayUri);

            // Create a rule that authorizes this service to listen on the queue.
            queueAuthorizationRule.AddSubscriptionOptions(new SubscriptionOptions());
            queueAuthorizationRule.CreateIfNotExists();
            migrationRePipeline.AddLast(queueAuthorizationRule);

            // Set the factory for receiving messages from queues to use our custom message factory object.
            ReceiveEndpointDescription description =
                MigrationReceivePipelineFactory.GetReceiveEndpointDescription(migrationSenderMessageFactory, null, null);
            description.Behaviors.Add(new TransportClientEndpointBehavior
            {
                AuthenticationMechanism = AuthenticationMechanism.SharedSecret,
            });

            var binding = new MessageReceivingBinding();
            binding.Security.MessageProtection = MessageProtectionLevel.EncryptAndSign;
            binding.TransportClientEndpointBehaviorCollection.Add(description.Behaviors);
            description.Bind(binding);
        }
    }
}
```

As you can see in Figure 13, we created a message factory for sending messages to the CloudQueue and then, as part of the MigratingSenderPipeline for Azure Service Bus Relay, initialized a new instance of it and set it as its messaging engine (job done). The other important thing about the web service is that its configuration has been simplified. For example, it is no longer necessary to specify a custom queue authorization rule for this service. When the messaging pipeline is initialized, Service Bus automatically examines it. For example, suppose EndpointName points at a queue (and not a topic). In that case, it will add the default permission regarding message access to send messages to that queue provided by the associated Shared Access Policy. That's all we need! As you can see in Figure 14, our Azure Service Bus Relay endpoint has been configured in Visual Studio through the portal Interface Configuration pane. In this case, its namespace is different from ours to avoid any naming collision between services/clients.

Figure 14: Configuring MS Cloud Service Bus Relay Endpoints [Click on image for larger view.]

With these two configurations, our solution can now take advantage of the CloudQueue and Azure Service Bus Relay messaging capabilities without writing additional code (in other words, we don't need to define custom code for sending/receiving messages from queues and topics). Looking closely at Figure 15, apart from automatically detecting that its EndpointName points at a queue and adding the necessary authorization rule, Service Bus also added a listener rule permitting any service listening on this Endpoint (which is our web role) to send messages.

Figure 15: Our Solution's New Configuration in Visual Studio [Click on image for larger view.]

Integrating Web Roles with WCF Services

After I made all the configuration changes required by the new services, I started writing some code. My first goal was to further separate the different roles of my solution by isolating web role functionality into three assemblies -- WebRole1, WebRole2, and WebRole3. In a real production environment, each of these roles will undoubtedly need its own dedicated development team, so it makes sense not to have all their functionality in one big assembly. Remember that any Azure worker role can only reference up to 10 assemblies, so even though you might want your shared libraries (like MessageBox) accessible from every Azure role, if you design your application accordingly, it won't be an issue. After creating these three new web role projects in my solution, I tried to add a reference from one of them (I chose WebRole2) to the previously created CloudQueue project. Unfortunately, as you can see in Figure 16, when I tried to do this, their namespaces conflicted with each other, so I had no choice but to rename the CloudQueue namespace inside WebRole2 so there's no conflict with the existing one.

Figure 16: Referencing CloudQueue Project [Click on image for larger view.]

After renaming the namespace and recompiling everything, it became clear that we still have an issue here. While we succeeded in making our web roles communicate through service interfaces, we haven't yet done anything about making their respective hosting environments talk with each other. I'm saying this because even though the web role projects reference the service interfaces (via Add Service Reference ), they won't automatically download their respective WCF client libraries like what happened when we added Azure project references. So now, we need to do things manually!

Figure 17: Downloading Client Libraries Manually [Click on image for larger view.]

Apart from downloading the necessary client library assemblies manually (yes, that's exactly what you see in Figure 17, where I'm browsing to the downloaded .config file and then clicking Open), we must also make WebRole2 reference them in its code so it can successfully call method implementations exposed by WebRole1 and vice versa. This means we'll have to add a service reference for each role. So, after adding references to the WebRole1 and WebRole3 assemblies inside the WebRole2 project (which you can see in Figure 18), I recompiled everything, and that's when I realized I had just wasted a lot of time: even though our solution worked fine on my development machine back home, it was now refusing to start altogether!

Figure 18: Referencing Client Assemblies

This happens when you keep fiddling with your configuration settings during development yet never take the extra step or two that would make it easier for other people (or yourself) to take over your code should you happen to leave the company. Before any of us can start writing code that uses the service interfaces exposed by the other roles, we have to do one straightforward thing -- register our roles in the cloud service's configuration file.

Figure 19: Registering Additional Roles

After registering all three web roles in my cloud service's web.config file (as seen in Figure 19), I tried starting it again, and voilà! Everything worked this time around, so it looks like Azure Web Roles are smart enough to load their client WCF assemblies automatically even though they were downloaded manually during development. This means your solution will work as long as you didn't make any mistakes when adding those assembly references. This is a huge step forward from our previous attempts to integrate enterprise applications in the cloud!

What are the benefits of enterprise application integration?

The main benefit of enterprise application integration is that applications become more capable. They can reach out to other applications and use their functionality, which may be better than the capabilities built into the software itself. For example, an organization might want to build a homegrown application, and it would make things easier if it could already draw on certain information from third-party applications. By integrating the two systems, both applications benefit from access to new data.

Azure Cloud Services enable web role instances to send messages through queues to communicate with one another. Message queue implementations differ in their features depending on the type of messaging you need to perform and the volume of messages you send. Azure has three options for implementing this type of communication:

Azure Service Bus queues

These queues are the most common choice due to their support for multiple protocols and messaging models.

This feature uses a TCP connection intended for high performance, low latency, and secured communication between web roles in Azure. It can be used when a direct connection between two roles in the cloud is required.

The Message Queuing Service allows us to send messages from web role instances in Azure or on another platform such as Windows or Linux. This service is helpful when we want to place a queue on our local network instead of in the cloud, because any application with access to this queue can send and receive messages; for more information about how these work, see Tutorials.

A new approach to enterprise application bus integration using Azure Cloud Services is proposed. It requires three new components, each with its own functionality.

An end-to-end implementation was presented using the Windows Azure Platform demonstrating how this novel architecture can provide a simple middleware framework for developing new or integrating existing services into cloud computing environments. The implementation has been tested by submitting requests that consist of two different messages to two roles executing in the same data center and on different machines. Results show that requests are executed successfully within less than one second on average, showing the potential of Azure cloud service as an infrastructure environment for building more efficient enterprise application integration solutions. One of the most exciting aspects of Microsoft's implementation is probably its ease of use since there is no need for us to learn new tools or create overly complicated code to achieve our goal of building this new type of cloud-based enterprise application integration solution.

Cloud Service Fundamentals

Microsoft Azure provides the Cloud Service model, enabling scalable and reliable applications while providing on-demand availability. In addition, the Windows Azure Platform supports multitenancy so that you can deploy services to a dedicated environment with secure boundaries around it. In other words, you know where your resources are and where they reside because you have complete control over them. The platform's purpose is to enable developers to take their existing skills and knowledge from developing web applications using ASP.NET MVC 3 and WCF 4 to build robust services that run in the cloud quickly. This article describes creating an Azure web service that uses both Web Role and Worker Role types.

The first step is to deploy the AspProvidersDemo project to be accessed remotely without transferring the source code, configuration settings, or database to another computer. The simplest way to do this is by using Visual Studio 2010 to publish the application onto your local file system (C:\inetpub\wwwroot), as shown in Figure 1 below.

Figure 1 - Publishing the Application

After publishing has been completed successfully, select Open Windows Explorer from the shortcut menu of your desktop icon for your website (located under C:\inetpub\wwwroot). Then select all folders except for App_Data, Bin, and packages under the root directory of your website. Right-click on these folders and select Cut from the shortcut menu. Next, right-click inside the C:\inetpub\wwwroot directory and select Paste from the shortcut menu to move these directories into a new folder named AzureProjects.

Once all files have been moved into this new location, open Internet Information Services (IIS) Manager (Start -> All Programs -> Administrative Tools -> Internet Information Services (IIS) Manager). First, select your server name in IIS Manager; then double-click Sites in the Connections pane to open it. Now click Add Web Site… at the bottom of the Actions pane, as shown in Figure 2 below.

Figure 2 - Adding a New WebSite

In the Add Web Site dialogue box shown in Figure 3 below, enter AspProvidersDemo as both the name and alias for your site. Also, select ASP.NET from the Add Application dropdown list and choose to use port 8080 since this is the default HTTP port assigned by Azure. Finally, be sure that the Host In The Cloud checkbox is also selected before clicking OK to create a new IIS website.

Figure 3 - Naming Your Website

Once you have finished creating this new application pool, click on AspProvidersDemo in IIS Manager once again and click Show Advanced Settings at the bottom of the Actions pane, as shown in Figure 4 below. Scroll down until you find the Process Model section, then double-click on the Idle Timeout setting and enter a value of 360 into the box that appears. The default timeout in IIS is 600 seconds, which isn't very practical for an application running in Azure since it can take more than 10 minutes for the deployment to complete when switching from one environment to another. Keep in mind that many cloud services have been down during their initial stages because of this default setting within Windows Azure when trying to perform a failover from one location to another.

Figure 4 - Modifying Process Model Settings

This completes the setup process needed before building our WCF service hosted in Microsoft Azure Cloud Service using Visual Studio 2010.

Building Our First Cloud Service

To get started with building our demo, open Visual Studio 2010 and create a new cloud project by selecting File -> New Project… from the main menu. Once the New Project window appears, select Azure Cloud Service, as shown in Figure 5 below.

Figure 5 - Creating a New Azure Project

Once you have created this new application, change the Package/Publish Location to Remote Filesystem (UNC Path) and set the Target Framework to .NET Framework 4, as shown in Figure 6 below. This is necessary because your web service will need access to these assemblies during deployment for execution on Windows Azure.

Figure 6 - Configuring Package Settings

7. Next, right-click on AspProvidersDemoCloudService within Solution Explorer and select Add -> Cloud Service as illustrated in Figure 7 below.

Figure 7 - Adding a New Cloud Service

8. Once the Add Cloud Service dialogue appears, enter AspProvidersDemo as the name of your service and change both the Deployment Mode and Virtual IP Address to dynamic, as shown in Figure 8 below. This will allow your web service to use a public DNS address that changes based on Elastic IP settings within Azure after it has been deployed. Enter AspProvidersDemo for both URLs since this is the name we have given our cloud service during these steps above. Be sure that all checkboxes are selected before clicking OK to add this new cloud service to our solution.

Figure 8 - Configuring Global Settings

9. Now that we have added a new cloud service to our solution, right-click on it in Solution Explorer and select Publish. After clicking the Publish button in the Configure Publishing Options window, make sure that both AspProvidersDemoCloudService and AspProvidersDemo are checked before clicking Next > to continue.

Figure 9 - Enterprise Resource Planning Package Deployment

10. Before publishing this new service, ensure that your target server runs Windows Azure SDK 1.6 and IIS 7 installed on a local machine within your network. Next, enter the URL to a Windows Azure cloud host for testing purposes, as shown in Figure 10 below, before clicking Next > once again.

Figure 10 - Providing a Target Server

11. Browse to the root of your Azure installation by entering either https://windows.azure.com or *.windows.azure.com for testing purposes before clicking Next > yet again, as illustrated in Figure 11 below.

Figure 11 - Configuring Destination Details

12. Enter AspProvidersDemoCloudService within both textboxes provided along with your Microsoft Azure subscription credentials before clicking the Validate Connection button, as shown in Figure 12 below.

Figure 12 - Validating Your Azure Account

13. If everything is entered correctly within these textboxes, you will see a green checkmark appear next to each of them before clicking Next > to continue, as illustrated in Figure 13 below.

Figure 13 - Publishing Application Information

14. The next step within this wizard is to configure any necessary connections with your cloud service and the destination server located within Azure, as shown in Figure 14 below. For testing purposes, just click Finish on both windows to proceed without making any changes.

Figure 14 - Initializing Deployment

15. If all goes well, you will see the results window appear, as illustrated in Figure 15 below. This window shows that your cloud service was successfully published within Azure. The application is now ready to deploy any files within this project to a server.

Figure 15 - Publishing Complete

16. Ensure AspProvidersDemoCloudService and AspProvidersDemo are checked within the Selected Cloud Services panel before clicking Finish, as shown in Figure 16 below.

Figure 16 - Publishing Successful

17. After clicking the Finish button from the wizard above, you should see a new folder named AspProvidersDemoCloudService appear within your solution directory, as shown in Figure 17 below.

Figure 17 - New Cloud Service Directory

18. Next, right-click on the AspProvidersDemo directory and select Deploy, as illustrated in Figure 18 below. This menu option allows us to deploy either just the completed solution or all of the projects within it with a single click.

Figure 18 - Deploying a Single Project

19. Since we would like to deploy everything within the solution, ensure that AspProvidersDemo is checked inside the Selected Web Projects panel before clicking Next > as shown in Figure 19 below. By checking this project within the dropdown menu, everything from its directory will be deployed to either our local IIS 7 server or the IIS 7 instance within Azure, depending on which one is specified.

Figure 19 - Deploying All Projects

20. Now that we have selected AspProvidersDemoCloudService and AspProvidersDemo for publication, click Finish to continue the deployment process, as shown in Figure 20 below.

Figure 20 - Deploying a Solution Directory

21. After clicking the Finish button from the deployment wizard, you will see a new virtual directory named AspProvidersDemo appear within your IIS 7 web server, as shown in Figure 21 below. This window also shows that there is currently no content being hosted within this virtual directory, which will soon be updated to the AspProvidersDemo.Web project once it has been successfully deployed.

Figure 21 - New Virtual Directory

22. At this point, you may close the Visual Studio 2010 Express IDE if desired since we are finished working with it for now. We should also see that our cloud service was published within Azure, as shown in Figure 22 below.

What is Enterprise Application Integration?

Enterprise Application Integration (EAI) is an approach to communication between software systems using messaging based on open standards like XML, SOAP, and WSDL. The goal of EAI is the automation of business processes in a fashion that is independent of any particular system or programming language being used within an organization.
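To make the messaging idea concrete, here is a minimal sketch of building one of those open-standard messages, a SOAP 1.1 envelope, by hand. The operation and field names are hypothetical, and Python is used only to keep the example short and language-neutral; in practice a framework such as WCF generates these envelopes for you.

```python
import xml.etree.ElementTree as ET

# The SOAP 1.1 envelope namespace, an open standard any platform can parse.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_envelope(operation, payload):
    """Wrap a payload dict in a minimal SOAP 1.1 envelope."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, operation)
    for name, value in payload.items():
        ET.SubElement(op, name).text = str(value)
    return ET.tostring(envelope, encoding="unicode")

# A hypothetical operation asking an order system for the status of order 1042.
message = build_soap_envelope("GetOrderStatus", {"OrderId": 1042})
```

Because the result is plain XML, the receiving application can be written in any language that has an XML parser, which is precisely what makes EAI independent of any one system.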

What makes integration difficult?

Integration between two or more systems is no easy task, particularly if the involved parties have not designed their respective systems to work together from the start. The following are some of the most common obstacles you will likely encounter when attempting to integrate disparate applications:

Programming Language Differences - Applications written in different languages may have trouble sharing data without it first being translated into a form that both can understand. As a result, some companies must rely on hiring expensive contractors with the requisite skill set to integrate these systems, while others simply avoid the problem by sticking to a single programming language.
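One common way around the language barrier is to exchange data in a language-neutral serialization format rather than translating between runtimes directly. A minimal sketch follows; the record fields are hypothetical, and JSON is shown for brevity, though the same idea applies equally to XML.

```python
import json

# A hypothetical record produced by a system written in another language.
# Serializing it to a neutral text format lets any consumer read it
# without per-language translation code.
record = {"invoiceId": 77, "amount": 129.95, "currency": "USD"}

wire = json.dumps(record)      # what actually travels between the systems
received = json.loads(wire)    # the consumer reconstructs the same record
```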

Network Connectivity Challenges - If your organization's local networks cannot connect directly with one another, then additional time and money must be spent on building a separate network infrastructure that will allow these systems to communicate. Unfortunately, this step is often overlooked when budgets are being planned, resulting in costly delays once the problem is discovered.

Data Quality - Even if two existing applications can be integrated successfully, they may still fail to "talk" with one another due to inconsistent data formats between them. Some integration tools address this issue through built-in mappings of each field within an incoming message, while other products use more generic techniques such as XML schemas or regular expressions for mapping fields together. Fortunately, Windows Communication Foundation provides some great options for dealing with varying formats, so this should not be too much of an obstacle for our cloud service application to overcome.
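The field-mapping idea mentioned above can be sketched in a few lines. This is a simplified illustration of the technique, not how any particular integration tool implements it, and the field names are hypothetical.

```python
# Hypothetical map: the source system uses terse column names while the
# target system expects descriptive ones.
FIELD_MAP = {"cust_nm": "CustomerName", "ord_dt": "OrderDate", "amt": "Amount"}

def translate(record, field_map):
    """Rename incoming fields per the map; pass unknown fields through."""
    return {field_map.get(key, key): value for key, value in record.items()}

incoming = {"cust_nm": "Contoso", "amt": 250}
outgoing = translate(incoming, FIELD_MAP)
```

Real products layer validation and type conversion on top of this renaming step, but the core of every field mapping is the same dictionary-style lookup.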

What are some of the benefits of Enterprise Application Integration?

EAI can provide numerous benefits that help improve the overall effectiveness of an organization's business processes, which would otherwise be handled through manual means. For example, when systems are properly integrated, they can increase productivity by reducing or eliminating repetitive tasks that were formerly required. In addition, data sharing between these various applications prevents any one company from being held hostage by incompatible formats, particularly when data must be sent across multiple networks with differing standards. Last but not least, end users can work more effectively and efficiently since they no longer need to switch between many different programs just to perform one small task. This benefit alone is often reason enough for people to implement integration within their own companies!

How does EAI compare with Data Integration?

Although Enterprise Application Integration is often used in conjunction with Data Integration, the two methodologies are quite different. EAI refers to business processes that involve multiple applications communicating through messaging (perhaps even applications written in different languages), while data integration focuses explicitly on automating the flow of information between two or more databases. The term SOA (Service-Oriented Architecture) helps tie the two together, since a service-oriented architecture typically uses both EAI and data integration to achieve its goals.

How does Azure integrate with existing on-premise applications?

Azure supports three main methods for integrating existing on-premise applications with cloud services hosted within Azure: REST/OData, Windows Communication Foundation, and the Windows Azure Connect for Applications (or "AC4A"). The first two methods are used exclusively for data integration, while AC4A is a cloud service specifically designed to provide local connectivity between on-premise applications. Of course, other options exist as well, but these are by far the most popular choices right now since they offer the best combination of integration techniques along with reasonable prices!
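As a rough illustration of the REST/OData option, the query URL an on-premise client would send can be composed as shown below. This is a minimal sketch; the service root, entity set, and filter expression are all hypothetical, and Python is used only to keep the example short and runnable.

```python
from urllib.parse import quote

def odata_query(service_root, entity_set, filter_expr=None, top=None):
    """Compose an OData query URL from a root, entity set, and options."""
    options = []
    if filter_expr is not None:
        # $filter expressions contain spaces and quotes, so percent-encode them.
        options.append("$filter=" + quote(filter_expr))
    if top is not None:
        options.append(f"$top={top}")
    url = f"{service_root}/{entity_set}"
    if options:
        url += "?" + "&".join(options)
    return url

# A hypothetical query: the first ten open orders from a cloud-hosted feed.
url = odata_query("https://example.cloudapp.net/odata", "Orders",
                  filter_expr="Status eq 'Open'", top=10)
```

Because the whole request is just an HTTP GET against a URL, any on-premise application with an HTTP client can consume the cloud service without sharing a runtime with it.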

Of these three main options, Windows Communication Foundation currently seems to be the best choice for creating an EAI infrastructure within Azure because it provides great flexibility when choosing our hosting architecture. In addition, we're making heavy use of WCF throughout this book, so hopefully you'll have no trouble understanding how WS-* endpoints can be integrated. AC4A is also an option, but it doesn't support many of the features available within WCF, so for this reason alone I will be sticking with WCF instead.

What are some of the pros and cons of using WS-* technologies in Azure?

The two main programming paradigms supported by WS-* endpoints are REST (Representational State Transfer) and SOAP (Simple Object Access Protocol). Each has its benefits: REST is typically used to communicate with cloud services, while SOAP makes up the majority of message interactions between on-premise applications. Once again, though, Windows Communication Foundation allows us to integrate either type, so you should not be too concerned about choosing one or the other unless your organization has specific requirements that need to be met.
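To make the contrast concrete, here is the same logical call, "fetch order 1042", expressed in each style. Both messages are illustrative sketches rather than captures from a real service.

```python
# REST encodes the operation in the URL and HTTP verb; nothing more is needed.
rest_request = "GET /orders/1042 HTTP/1.1"

# SOAP wraps the same operation in an XML envelope, usually sent via POST.
soap_request = (
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    "<soap:Body><GetOrder><OrderId>1042</OrderId></GetOrder></soap:Body>"
    "</soap:Envelope>"
)
```

The REST form is lighter on the wire and easy to test in a browser, while the SOAP form carries a self-describing structure that WS-* tooling can validate, secure, and route.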

The biggest downside when working with WS-* is that it can often be difficult to debug, since doing so requires a SOAP debugger such as WsTrace. Fortunately, Microsoft provides tools for this purpose in Visual Studio, saving us from learning or using alternate technologies! Once you've become familiar with WCF, though, I believe you'll enjoy its flexibility and ease of use. As a result, I would recommend using Windows Communication Foundation unless your organization has a standard practice built around protocols such as REST.

What's an example of an excellent way to integrate existing on-premise applications?

In my opinion, the best approach for integrating existing on-premise applications is to build a custom Azure/WCF solution that will directly interface with the desired external system. In many cases, this will be an older application that does not support newer web protocols such as REST, though it might utilize tools such as SOAP or even flat files instead. In any case, these types of integration challenges are prevalent and not limited only to SQL Server applications!

This can all become incredibly complicated, since we'll need to maintain the existing legacy client(s) while also supporting new client(s) within Azure at the same time. Fortunately, WCF provides a lot of flexibility here, allowing us to specify incoming communication channels on a contract-by-contract basis using multiple bindings, each associated with its own endpoint.
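As a sketch of what this looks like in configuration, a single WCF service contract can be exposed through multiple endpoints, each with its own binding. The fragment below is a hypothetical app.config excerpt; the service, contract, and address names are all illustrative.

```xml
<!-- Hypothetical configuration: one contract, two endpoints with different
     bindings, so legacy SOAP clients and new intranet clients coexist. -->
<system.serviceModel>
  <services>
    <service name="Demo.OrderService">
      <!-- SOAP over HTTP with WS-* features for existing legacy clients -->
      <endpoint address="soap"
                binding="wsHttpBinding"
                contract="Demo.IOrderService" />
      <!-- Faster binary TCP channel for new callers inside the network -->
      <endpoint address="net.tcp://localhost:8081/orders"
                binding="netTcpBinding"
                contract="Demo.IOrderService" />
    </service>
  </services>
</system.serviceModel>
```

Both endpoints serve the same contract, so the service implementation does not change; only the channel each client connects through differs.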

What are some of the challenges associated with integration?

When integrating applications, the biggest challenge is debugging issues that potentially span over multiple communication channels. Though this can be pretty complex, it's important to remember that many communication problems exist beyond just application communication in general since solutions often utilize SQL Server connectivity, remote network addresses or firewalls, and so forth. Even though business data is typically pure text in nature, this doesn't mean that everything should be treated as such!

For example, consider an organization that has decided to implement service-oriented architecture via WCF endpoints hosted within Azure PaaS services/roles (or even on-premise). It might appear at first glance that each solution would communicate independently with SQL Server, but in reality, each solution might need to pass data into or out of the other at some point. Buffer size, timeout values, and other related problems can often cause these types of issues, which are usually resolved by specifying an appropriate binding through configuration.
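The kind of binding configuration that resolves such issues looks roughly like the fragment below: a hypothetical binding that raises message size limits and timeouts. The attribute names are standard basicHttpBinding settings, though the binding name and values shown are arbitrary.

```xml
<!-- Hypothetical binding: larger buffers and longer timeouts for services
     that exchange sizable payloads across slower channels. -->
<bindings>
  <basicHttpBinding>
    <binding name="LargeMessageBinding"
             maxReceivedMessageSize="10485760"
             maxBufferSize="10485760"
             sendTimeout="00:05:00"
             receiveTimeout="00:10:00" />
  </basicHttpBinding>
</bindings>
```

An endpoint opts into these settings via its bindingConfiguration attribute, which is why such problems are usually fixable without touching code.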

What are some common mistakes you see developers make when integrating applications?

The number one mistake developers make is over-designing their integration points in services/endpoints. Though this sounds counterintuitive given the topic at hand, it's not so much about how many interfaces we expose through WCF as it is about maintaining a clean design throughout all layers of our application regardless if they're exposed via HTTP(S), TCP, named pipes, or whatever else!
