Microsoft Integration Weekly Update: Oct 16, 2017


Do you find it difficult to keep up to date with all the frequent updates and announcements on the Microsoft Integration platform?

The Integration weekly update can be your solution. It’s a weekly update on topics related to Integration – enterprise integration, robust & scalable messaging capabilities, and Citizen Integration capabilities empowered by the Microsoft platform to deliver value to the business.

If you want to receive these updates weekly, then don’t forget to Subscribe!

On-Premise Integration:

Cloud and Hybrid Integration:

Feedback

Hope this is helpful. Please feel free to let me know your feedback on the Integration weekly series.


Security & Data Announcements at Ignite 2017


Introducing Virtual Network Service Endpoints (Preview)

With the introduction of Virtual Network Service Endpoints (Preview) you can now protect your Azure resources by restricting access to them to traffic coming from a specific VNET or subnet.

Currently, this is only supported for Azure Storage & Azure SQL Database/Warehouse but the end goal is to provide this for all services.

By using VNET Service Endpoints you can fully isolate your resources, because you can remove all access from the public internet and thereby limit the risk of exposure.

Isolated access has been a long-awaited feature, certainly for Azure Storage & Azure SQL Database, and I am excited and very happy that it’s finally here!
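
To give an idea of what enabling this looks like, here is a minimal sketch of a subnet definition in an ARM template with service endpoints turned on for Storage and SQL (the subnet name and address prefix are placeholders, not values from the announcement):

  "subnets": [
    {
      "name": "backend-subnet",
      "properties": {
        "addressPrefix": "10.0.1.0/24",
        "serviceEndpoints": [
          { "service": "Microsoft.Storage" },
          { "service": "Microsoft.Sql" }
        ]
      }
    }
  ]

Once the subnet advertises the endpoint, the firewall on the storage account or SQL server can be restricted to accept traffic only from that subnet.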

Additional resources:

Introducing Azure Data Factory 2.0 (Preview)

This must be my favorite announcement – Azure Data Factory 2.0 (Preview), the next generation of data integration.

While Azure Data Factory 1.0 was limited to a data-slicing model, version 2.0 now supports different types of triggers, such as webhooks.
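
As a rough illustration of the new trigger model, a schedule trigger definition in Data Factory 2.0 looks something like the sketch below (the trigger name, recurrence values and pipeline name are invented; treat this as a sketch of the shape rather than a definitive sample):

  {
    "name": "HourlyTrigger",
    "properties": {
      "type": "ScheduleTrigger",
      "typeProperties": {
        "recurrence": {
          "frequency": "Hour",
          "interval": 1,
          "startTime": "2017-10-16T00:00:00Z"
        }
      },
      "pipelines": [
        {
          "pipelineReference": {
            "type": "PipelineReference",
            "referenceName": "CopyPipeline"
          }
        }
      ]
    }
  }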

With Azure Data Factory 2.0 comes the new Integration Runtime that provides you with the infrastructure to orchestrate data movement, activity dispatching & SSIS package execution, both in Azure & on-premises.

But that’s not all, there is more – HTTP activity support, integration with Azure Monitor, integration with Azure Key Vault, and much more! We’ll dive deeper into this announcement in a later article.

Additional resources:

Azure DDOS Protection Service (Preview)

Distributed Denial-of-Service attacks can be brutal and are unfortunately very easy to launch. Nowadays you can find DDoS as a managed offering on the internet, or even do it yourself, just like Troy Hunt explains.

That’s why Microsoft is announcing the Azure DDOS Protection Service (Preview), which allows you to protect your Virtual Networks and secure your Azure resources even more.

Microsoft Azure already brings you DDOS protection out of the box; the difference is that the Azure DDOS Protection Service takes this a step further and gives you more features & control.

Here is a nice comparison:

Azure DDOS Protection Service is a turn-key solution which makes it easy to use and is integrated into the Azure Portal. It gives you dedicated monitoring and allows you to define policies on your VNETs. By using machine learning it tries to create a baseline of your traffic pattern and identifies malicious traffic.

Last but not least, it also integrates with Azure Application Gateway allowing you to do L3 to L7 protection.

Additional resources:

Taking Azure Security Center to the next level

Another example of the security investment by Microsoft is their set of recent announcements for Azure Security Center. You can use it not only for cloud workloads but also for on-premises workloads.

Define corporate security standards with Azure Policy (Limited Preview)

Azure Policy allows you to define corporate standards and enforce them on your Azure resources to make sure that the resources are compliant with your standards. Policies also come with some default rules, such as requiring at least SQL Server version 12.0, and can be scoped to either a management group or resource group level.
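
For illustration, a policy rule that enforces the SQL Server 12.0 requirement mentioned above could look roughly like this (a sketch of the policy rule JSON, not necessarily the exact built-in definition):

  {
    "if": {
      "allOf": [
        { "field": "type", "equals": "Microsoft.Sql/servers" },
        { "not": { "field": "Microsoft.Sql/servers/version", "equals": "12.0" } }
      ]
    },
    "then": { "effect": "deny" }
  }

Assigning a definition like this at the chosen scope then blocks the creation of non-compliant SQL servers.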

By using initiative definitions, you can group one or multiple policy definitions as a set of requirements. An example could be an initiative that consolidates all SQL-database-related definitions.

To summarize, Azure Policy allows you to define security standards across multiple subscriptions and/or resource groups making it easier to manage your complete infrastructure.

It is currently in limited preview, but you can sign up for the preview in the Azure portal.

Introduction of Security Playbooks

With the addition of Security Playbooks you can now easily integrate certain playbooks in reaction to specific Security Center alerts.

It allows you to create & link an Azure Logic App that orchestrates the handling of the alert, tailored to your security needs.

Investigation Dashboard

Azure Security Center now provides a new visual, interactive investigation experience to analyze alerts and determine their root cause.

It visualizes all relevant information linked to a specific security incident, in this case an RDP brute force attack.

It makes it a lot easier to get the big picture of the potential cause, but also of the impact of the incident. By selecting certain nodes in the investigation graph, it provides you with more information about that specific segment. This enables you to drill deeper and get a better understanding of what is going on.

However, these are only a subset of the announcements; you can find all of them in this blog post.

Additional resources:

Introducing SQL Vulnerability Assessment (VA)

SQL Vulnerability Assessment (VA) is a new service that comes with Azure SQL Database and with SQL Server on-premises via SQL Server Management Studio (SSMS).

It allows you to discover, track and remediate potential database vulnerabilities. You can see it as a lite version of Azure Security Center, focused on SQL databases, that lists all potential vulnerabilities after running a scan.

This is another example of Microsoft making security more approachable, even if you are not a security expert. After running a scan you will probably see some quick wins making your database more secure step by step.

Additional resources:

Summary

Microsoft made some great announcements at Ignite and this is only the beginning – there were a lot more of them, and I recommend reading more about them on the Azure blog or watching the Ignite sessions on-demand.

Personally, I recommend Mark Russinovich’s interesting talk called “Inside Microsoft Azure datacenter hardware and software architecture”, which walks you through how Azure datacenters work, their recent investments & achievements and what their future plans are.

Lately, the IT side of Azure has been coming closer to the developer side, where services such as Azure Networking are becoming easier to integrate with PaaS services such as Azure Storage & SQL DB. It looks like this is only the beginning and we can expect more of these kinds of integrations, making it easier for both IT & Devs to build more secure solutions.

Last but not least, don’t forget that the Azure Roadmap gives a clear overview of what service is at what stage. Here you can see all services that are in preview for example.

Thanks for reading,

Tom Kerkhove.

Our experience in solving “webHttp” issue


The secret to effective product support is to take each complaint seriously. Even if it’s simply a misinterpretation or a mistake on the part of the customer, every complaint deserves wholehearted attention and special handling. In our support, we often receive tickets which require neither a functional investigation nor point to a problem with the configuration.

Recently, we received an interesting ticket that took considerable time to resolve, even though it looked like a simple issue to deal with. In this blog, we want to share the troubleshooting steps we performed and how we solved the problem. It should help many of our customers identify the problem first-hand in the future without spending much time.

After the successful installation of BizTalk360 in the customer environment, during the first launch, the below error message was shown in the browser.

“ERROR after first time install : The extension name ‘webHttp’ is not registered in the collection at system.serviceModel/extensions/behaviorExtensions”

solving “webHttp” issue in BizTalk360

Identifying the problem:

Firstly, BizTalk360 doesn’t require any change in the configuration file. But looking at the error, it is clearly a configuration issue, and it happened while the configuration file was being processed to serve the request. As our product is used in critical business environments, we are generally cautious in suggesting changes to the configuration file in the customer environment and, as mentioned previously, such changes are not required.

We started from the basic troubleshooting steps one by one.

Choosing the Solution:

Ensure “HTTP Activation” is enabled under WCF:

If “HTTP Activation” is not enabled under the WCF services, services cannot communicate over the HTTP network protocol. Hence, we checked whether this was enabled in the “Windows Roles and Features” wizard.

This option was enabled and other required roles and features were enabled as per the prerequisite document.

webHTTP issue in BizTalk360

ASP.Net re-registration/ Reinstallation:

As all the configuration that BizTalk360 requires was correctly enabled, we narrowed down further and suggested ensuring that .NET 3.5 is installed properly on the machine, since this error can occur when .NET 3.5 is not installed correctly. To verify this, we recommended repairing the .NET 3.5 configuration elements with the Workflow Services Registration tool (WFServicesReg.exe) using the commands below.

  1. Go to C:\WINDOWS\Microsoft.NET\Framework\v3.5
  2. Run WFServicesReg.exe /c

After repairing, BizTalk360 still didn’t load and the same exception appeared.

Reregistering the ASP.Net and Service models:

The ASP.NET IIS Registration tool (aspnet_regiis.exe) can be used to install and uninstall the linked version of ASP.NET. It installs ASP.NET, updates the script maps of all existing ASP.NET applications, and updates both classic-mode and integrated-mode handlers in the IIS metabase. Hence, we suggested the command below to re-register ASP.NET.

"%WINDIR%\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis" -i -enable

The ServiceModelReg.exe tool provides the ability to manage the registration of WCF and WF components on a single machine. Since we were experiencing a problem with service activation, we suggested registering the components using the ServiceModelReg.exe tool by executing the command below.

"%WINDIR%\Microsoft.NET\Framework\v4.0.30319\ServiceModelReg.exe" -iru

Even after performing the above steps, the problem still appeared in the browser while loading BizTalk360.

Application pool Configuration verification:

It is important to set the property “Enable 32-bit Applications” to “True” when the machine is running a 64-bit operating system and a 32-bit application needs to run under IIS. This is because, by default, IIS launches applications in a 64-bit worker process when running on 64-bit Windows.

This option was set to “True” as well.

webHTTP issue in BizTalk360

Active Execution of the chosen Solution:

While resolving a customer issue, we shouldn’t be discouraged by failed attempts; we need to concentrate on the steps that will lead us to a resolution. Having tried all the obvious solutions, we turned our focus to the configuration files.

BizTalk360 Web.config File Investigation:

As per the default settings, all the necessary service model extensions were added as expected and had not been manipulated, as shown in the screenshot below.

webHTTP issue in BizTalk360

ASP.NET 4.0 machine.config File Investigation:

We were confident that the machine.config file is usually not altered unless there is a specific business policy requiring it. However, we thought of taking a look at the machine.config file for .NET v4.0 in the location “C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config”.

This final investigation did the trick and solved the problem.

How?

The “webHttp” service model extension had been altered from its default value in the machine.config file, as highlighted in the screenshot below. After removing the additional characters, BizTalk360 loaded properly 😊!!!

webHTTP issue in BizTalk360
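
For reference, the untouched entry in the .NET 4.0 machine.config should look like the sketch below (version and public key token as they appear on a default .NET 4.0 installation). If the name or the type string has been edited, WCF can no longer resolve the webHttp behavior extension and every service that references it fails to activate with the error shown above:

  <behaviorExtensions>
    ...
    <add name="webHttp" type="System.ServiceModel.Configuration.WebHttpElement, System.ServiceModel.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
    ...
  </behaviorExtensions>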

PS: We recommend keeping the default machine.config file as-is. If there is any need to change the machine.config file due to a specific internal policy or rule, we recommend installing BizTalk360 on a separate server with default ASP.NET settings, so that BizTalk360 will load without any issues.

If you have any questions, feel free to contact us at support@biztalk360.com. We are happy to assist you.

Author: Mekala Ramesh

Test Lead at BizTalk360 – Software Testing Engineer with diverse exposure to various features and application testing, and a comprehensive understanding of all aspects of the SDLC. Strong knowledge of establishing the testing process from scratch. Loves to test software products to deliver them with good quality. Strongly believes that “Testing goes beyond just executing the test protocol”.

Codit Connect 2017 – Recap


Introduction

CONNECT 2017 focused on Digital Transformation with international speakers from Microsoft, the business and the community. The full-day event was organized in Utrecht and Ghent and inspired participants to strengthen their integration strategy and prepare them for the next steps towards a fully connected company.

This blog post captures the key take-aways and some of the lessons learned during both days.

[NL] Opening keynote – Ernst-Jan Stigter, Microsoft Netherlands

Ernst-Jan started off with the fact that we can all agree the cloud is here to stay, and that the next step is to accelerate by applying Digital Transformation. Microsoft’s vision on Digital Transformation focuses on bringing people, data and processes together to create value for your customers and keep your competitive advantage. In his keynote, Ernst-Jan explains the challenges and opportunities this Digital Transformation offers.

Microsoft’s Digital Transformation framework focuses on 4 pillars: Empower employees, Engage customers, Optimize operations and Transform products where the latter one is an outcome of the first 3 pillars. Digital Transformation is enabled by the modern workplace, business applications, applications & infrastructure, data and AI.

Ernst-Jan continues to lay out Microsoft’s strategy towards IoT. By collecting, ingesting and analyzing data, and acting upon it, customers can be smarter than ever before and build solutions that were unthinkable in the past. He shares some IoT use-case examples from Dutch Railways, the City of Breda, Rijkswaterstaat and Q-Park to illustrate this.

[BE] Opening keynote – Michael Beal, Microsoft BeLux

Michael started by explaining what digital transformation means and Microsoft’s vision on that subject. Microsoft is focusing on empowering people and building trust in technology.

Michael continued his talk with the vision of Microsoft on the Intelligent cloud combined with an intelligent edge. To wrap up, Michael talked about how Microsoft thinks about IoT and how Microsoft is focusing on simplifying IoT.

Democratizing IoT by allowing everyone to access the benefits of IoT and providing the foundation of Digital Transformation is one of the core missions of Microsoft in the near future.

A great inspiring talk to start the day with in Belgium.

[NL/BE] Hybrid Integration and the power of Azure Services – Jon Fancey, Microsoft Corp

Jon Fancey is a Principal Product Manager at Microsoft and is responsible for the BizTalk Server, Logic Apps and Azure API Management products.

He shares his vision on integration and the fact that there is continuous pressure between the forces and trends in the market. He explains that companies need to manage change effectively to be able to adapt in a quickly changing environment.

Azure enables organizations to innovate their businesses. To deal with digital disruption (rapidly evolving technology), Digital Transformation is required. Jon goes through the evolution of inter-organizational communication technologies from EDI, RPC, SOAP and REST to Swagger/Open API.
Logic Apps now has 160+ connectors available for all types of needs: B2B, PaaS support, SaaS, etc. This number is continually growing, and if needed you can build your own connector and use it in your Logic Apps.

Today, Azure Integration Services consist of BizTalk Server, Logic Apps, API Management, ServiceBus and Azure Functions. Each of these components can be leveraged in several scenarios and, when combined, can fulfill unlimited opportunities. Jon talks about serverless integration. Key advantages are reduced DevOps effort, reduced time-to-market and per action billing.

[NL] Mature IoT Solutions with Azure – Sam Vanhoutte, Codit

In this session Sam Vanhoutte, CTO of Codit, explained to us how businesses can leverage IoT solutions to establish innovation and agility.

He first showed us some showcases from enterprises that are using IoT today to create innovative solutions with relatively small effort, all while keeping the total cost of ownership low. He showed us how a large transport company combined Sigfox (an IoT connection service), geofencing and the Nebulus IoT gateway to track “black circuit” movements of containers. Sam also showed us how a large manufacturer of food processing machines uses IoT to connect existing machines and gather data for remote monitoring and predictive maintenance, even though these machines communicate with legacy protocols.

Next, Sam reflected on the pitfalls of IoT projects and how to address them. He stressed the importance of executive buy-in; solutions will rarely make it to production if this is lacking. Sam also advised using the existing installed base of enterprises in order to decrease the time to market and add value fast. This can be achieved by adding an IoT gateway. Also, you need to think about how to process all the data these devices are generating and add some filtering and aggregation before storage costs become too high. Sam then stressed the importance of security and patching the devices.

One last thing to keep in mind is to spend your money and time wisely in an IoT project. Use the bits and pieces from the cloud platform that are already there and focus on value generators. In the last part of the presentation, Sam showed us how the Nebulus gateway takes care of the heavy lifting of connecting devices and how it can jumpstart a company’s journey into its first IoT project.

[BE] Cloud Integration: What’s in it for you? – Toon Vanhoutte & Massimo Crippa, Codit

During this session Toon Vanhoutte (Lead Architect) and Massimo Crippa (API Management Domain Lead) gave us more information about different integration scenarios.

Massimo started by showing us the different scenarios as they were yesterday, as they are today and as they will become tomorrow. In the past everything was on-premises. Nowadays we have a hybrid landscape, which brings the huge advantage of connectivity, for example the ease of use of Logic Apps. There is also the integrated Azure environment, the velocity (e.g. the continuous releases for Logic Apps) and the network (VNET integration).

Toon introduced Cloud Integration, which has the following advantages: serverless technology, a migration path, consumption-based pricing and the use of ALM with continuous integration & delivery. The shift towards the cloud can start with IaaS (Infrastructure as a Service); the main advantages of IaaS are availability, security and lower costs. But why should we choose Hybrid Integration? Flexibility and agility towards your customers, and it is future-proof. Serverless integration reduces the total cost of ownership: you have less DevOps effort and you can instantly scale your setup, delivering huge business value.

Massimo told us that security is very important, through governance, firewalls, identity and access rules. Another topic is monitoring; the photo below shows all of the different types of monitoring.

The Codit approach in moving forward is a mix between on-premises (BizTalk – Sentinet – SQL Server) and an Azure infrastructure.

[NL] The Microsoft Integration Platform – Steef-Jan Wiggers, Codit

The presentation of Steef-Jan started with an overview of the application landscape of yesterday’s, today’s and tomorrow’s organizations. Previously, all applications, which were mostly server products, were running in on-premises data centers. Today, the majority of enterprises have a hybrid application landscape: the core applications are still running on-premises, but they are already using some SaaS applications in the cloud. Tomorrow, cloud-based applications will take over our businesses.

The integration landscape is currently undergoing a switch from on-premises to hybrid to the cloud. On-premises integration is based on BizTalk Server and Sentinet for API Management. BizTalk is used for running mission-critical production workloads and Sentinet for virtualizing APIs with minimal latency. Both have been made cloud-ready: adapters for Logic Apps (on-premises gateway) and Service Bus (Queues, Topics and Relay) have been added to BizTalk, while Sentinet gained integration with Azure Service Bus and more focus on REST, OAuth and OpenID. In Hybrid Integration, Logic Apps as well as API Management are used for connecting to the cloud.
You have the advantage of continuous releases, moving faster and adapting faster to change. For networking you can use VNETs and Relays. Cloud Integration has the advantage of serverless integration (no server installation & patching, inherent high availability, …).
The pricing is consumption based: pay per executed action.

Different paths are available to switch from on-premises to the cloud: “it should be a natural evolution and not a revolution”.
One way is IaaS integration, for obtaining better availability for your server infrastructure. IaaS improves security and lowers costs. Hybrid integration gives you flexibility in your application landscape: it is agile towards the business and you can release faster. A hybrid setup ensures you are set for the future. Serverless integration reduces the effort you put into operations tremendously: no more server patching, backups… The costs are lower and you have the advantage of being able to scale much faster as well.

The Codit Approach

If you look at the hybrid integration platform you can distinguish several blocks. On-premises has the known integration technologies. In Azure you find the standard compute and storage options. Connectivity enables smooth integration between on-premises and the cloud. Messaging solutions like Service Bus and Event Grid allow decoupling of applications. For integration, Logic Apps are used to orchestrate all integrations, which can be extended via Azure Functions and API Apps. Integration with Azure API Management ensures governance and security, using Azure AD and Azure Key Vault. Administration and operations are done by using VSTS Release Management to roll out the solutions throughout the DTAP street in a consistent manner. A role-based monitoring experience is offered by App Insights for developers, OMS for operations and Power BI reports for business users.

Codit wants you to be fully connected: Integration is the backbone of your Digital Transformation. Now more than ever.

[NL] How the Azure ecosystem is instrumental to your IoT solution – Glenn Colpaert, Codit

IoT is here to stay, so we’d better get ready for it. In the future everything will be connected, even cows. Glenn kicked off his session by giving a good overview of all the main IoT pillars, ranging from data storage & analytics to edge computing, connectivity and device management. Of course, those are not the only things to take into account. Security is often forgotten, or “applied on top” later on, but security should be designed in from the ground up. Microsoft’s goal is to simplify IoT from several perspectives: Security, Device Management, Insights and Edge. Microsoft Azure provides a whole ecosystem of services that can assist you with this:

  • Azure IoT Hub that provides a gateway between the edge and the cloud with Service Assisted Communications built-in by default
  • Perform near-real-time stream processing with Azure Stream Analytics
  • Write custom business logic with Service Fabric or Azure Functions
  • Enable business connectivity with Azure Logic Apps for building a hybrid story
  • Azure Time Series Insights enabling real-time streaming insights
  • Setup DevOps pipelines with Visual Studio Team Services

However, when you want to get your feet wet, Azure IoT Central & Solutions are a very easy way to start. Start small and play around before spending a big budget on custom development. Using a Raspberry Pi simulator, Glenn showed how easy it is to send telemetry to Azure IoT Hub and how you can visualize all the telemetry with Azure Time Series Insights, without writing a single line of code. The key take-aways from this session are:

  • Data – value is created by making sense of your data
  • Insights – connect insights back to the business
  • Security – start thinking about security from day zero
  • Edge – IoT Edge is there for low-latency scenarios
  • Evolve – learn by experience with new deployments

If you are interested in learning more about data storage & analytics, we highly recommend reading Zoiner Tejada’s Mastering Azure Analytics.

[NL/BE] Event-Driven Serverless Architecture – the next big thing in the cloud – Clemens Vasters, Microsoft Corp

Clemens starts the session by explaining the “Serverless” concept, which frees you entirely from any infrastructure pain points. You don’t have to worry about patching, scaling and all the other infrastructure tasks that you normally have in a hosted environment; it lets you focus solely on your apps & data. Very nice! Clemens teaches us that there are different PaaS options for hosting your services, each having its own use cases and advantages.

Managed Cluster

Applications are being deployed on a cluster that handles the placement, replication, ownership consensus and management of stateful resources. This option is used to host complex, stateful, highly reliable and always-on services.

Managed Middleware

Applications are deployed on sets of independent “stateless” middleware servers, like web servers or pure compute hosts. These applications may be “always-on” or “start on demand” and typically maintain shared cached state and resources.

Managed Functions

Function implementations can be triggered by a configured condition (event-driven) and are short-lived. There is a high level of abstraction from the infrastructure where your function implementations are running. Next to that, there are different deployment models you can use to host your services. The classic “monolith” approach divides the functional tiers over designated role servers (like a web server, database server, …). The disadvantage of this model is that you need to scale your application by cloning the service on multiple servers or containers. The more modern approach is the “microservice” approach, where you separate functionality into smaller services and host them as a cluster. Each service can be scaled out independently by creating instances across servers or containers. It’s an autonomous unit that manages a certain part of a system and can be built and deployed independently.

[BE] Maturing IoT Solutions with Microsoft Azure – Sam Vanhoutte & Glenn Colpaert, Codit

Sam and Glenn kicked off their session talking about the IoT End-to-End Value chain. A typical IoT solution chain is comprised of the following layers:

  • Devices are the basis for the IoT solution because they connect to the cloud backend.
  • The Edge brings the intelligence layer closer to the devices to reduce the latency.
  • Gateways are used to help devices to connect with the cloud.
  • The Ingestion layer is the entry into the IoT backend and is typically the part that must be able to scale out to handle a lot of parallel incoming data streams from the (thousands of) devices.
  • The Automation layer is where business rules, alerting and anomaly detection typically take place.
  • The Data layer is where analytics and machine learning typically take place and where all the stored data gets turned into insights and information.
  • The Report and Act layer is all about turning insights into action: business events get integrated with the backend systems, or insights get exposed in reports, apps or open data.

At Codit, we have built a solution, the Nebulus IoT Gateway, that helps companies jump-start the IoT connectivity phase and generate value as quickly as possible. The gateway is a software-based IoT solution that instantly connects your devices to (y)our cloud and provides all the required functionality to cope with connectivity issues, cloud-based configuration management and security challenges.

As integration experts, we at Codit can help you simplify this IoT Journey. Our IoT consultants can guide you through the full IoT Service offering and evolve your PoC to a real production scenario.

The session ended with the following conclusion:

[NL/BE] Closing keynote – Richard Seroter, Pivotal

The theory of constraints tells you the way to improve performance is to find and handle bottlenecks. This also applies to Integration and the software delivery of the solution. It does not matter how fast your development team is working if it takes forever to deploy the solution. Without making changes, your cloud-native efforts go to waste.

Richard went on comparing traditional integration with cloud-native integration, showing the move is also a change in mindset.

A cloud-native solution is composable: it is built by chaining together independent blocks, allowing targeted updates without the need for downtime. This is part of the always-on nature of the integration: a cloud-native solution assumes failure and is built for it. Another aspect is that the solution is built for scale; it scales with demand, and the different components do this separately. Making the solution usable for ‘citizen integrators’ by developing for self-service will reduce the need for big teams of integration specialists. The integration project should be done with modern resources and connectors in mind, allowing for more endpoints and data streams. The software lifecycle will be automated; the integration can no longer be managed and monitored by people. Your software is managed by your software.

Thank you for reading our blog post; feel free to comment or give us feedback in person. You can find the presentations of both days via the following links:

This blogpost was prepared by:

Glenn Colpaert – Nils Gruson – René Bik – Jacqueline Portier – Filiep Maes – Tom Kerkhove – Dennis Defrancq – Christophe De Vriese – Korneel Vanhie – Falco Lannoo

Azure Functions Proxies – Part 4 – A very lightweight API Management


Common Functionalities

Transformation

Azure Function Proxies have limited transformation capabilities on three levels: rewriting of the URI, modification of the HTTP headers and changing the HTTP body. The options for transformation are very basic and focused on just creating a unified API. Azure API Management, on the other hand, has an impressive range of transformation capabilities.

These are the main transformation policies:

Next to these policies, you have the opportunity to write policy expressions that inject .NET C# code into your processing pipeline, to make it even more intelligent.
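
For example, a simple transformation policy that uses a policy expression to stamp the caller’s IP address into a header could look something like this (the header name is chosen purely for illustration):

  <set-header name="x-client-ip" exists-action="override">
    <value>@(context.Request.IpAddress)</value>
  </set-header>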

Security

Azure Function Proxies supports any kind of backend security that can be accomplished through static keys / tokens in the URL or HTTP headers. Frontend-facing, Azure Function Proxies offers out-of-the-box authentication enforcement by several providers: Azure Active Directory, Facebook, Google, Twitter & Microsoft. Azure API Management has many options to secure the frontend and backend API, going from IP restrictions to inbound throttling, from client certificates to full OAuth2 support.

These are the main access restriction policies (an illustrative example follows the list):

  • Check HTTP header – Enforces existence and/or value of a HTTP Header.
  • Limit call rate by subscription – Prevents API usage spikes by limiting call rate, on a per subscription basis.
  • Limit call rate by key – Prevents API usage spikes by limiting call rate, on a per key basis.
  • Restrict caller IPs – Filters (allows/denies) calls from specific IP addresses and/or address ranges.
  • Set usage quota by subscription – Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis.
  • Set usage quota by key – Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per key basis.
  • Validate JWT – Enforces existence and validity of a JWT extracted from either a specified HTTP Header or a specified query parameter.
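
To make this more concrete, here is a small illustrative example combining two of the policies above in an inbound policy section (the limits, header name and value are arbitrary):

  <inbound>
    <check-header name="x-api-version" failed-check-httpcode="400" failed-check-error-message="Missing or invalid version header" ignore-case="true">
      <value>2017-10-01</value>
    </check-header>
    <rate-limit-by-key calls="10" renewal-period="60" counter-key="@(context.Request.IpAddress)" />
  </inbound>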

These are the main authentication policies:

Hybrid Connectivity

Azure Function Proxies can leverage the App Service networking capabilities, if they are deployed within an App Service Plan. This gives three powerful hybrid network integration options: hybrid connections, VNET integration or App Service Environment. Azure API Management, premium tier, allows your API proxy to be part of a Virtual Network. This provides access to all resources within the VNET, which can be extended to on-premises through a Site-to-Site VPN or ExpressRoute. On this level, both services offer quite similar functionality.

Scope

The scope of Azure Function Proxies is really at the application level. It creates one single uniform API that typically consists of multiple heterogeneous backend operations. Azure API Management has more of an organizational reach and typically governs (large parts of) the APIs available within an organization. The diagram below illustrates how they can be combined. The much broader scope of API Management also results in a much richer feature set: e.g. the publisher portal to manage APIs, the developer portal with samples for quick starts, advanced security options, the enormous range of runtime policies, a great versioning experience, etc.

Use cases

These are some use cases where Azure Function Proxies has already proven very beneficial (a small proxies.json sketch follows the list):

  • Create a single API that consists of multiple Azure Functions and / or Logic Apps
  • Create a pass-through proxy to access on-premises API’s, without any coding
  • Generate a nicer URL for AS2 endpoints that are hosted in Azure Logic Apps
  • Generate a simple URL for Logic Apps endpoints, that works better for QR codes
  • Add explicit versioning in the URL of Azure Functions and / or Logic Apps
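
As an illustration of the first use case, a proxies.json file that exposes two operations under a single, versioned URL space could look roughly like this (routes and backend URLs are invented for the example):

  {
    "$schema": "http://json.schemastore.org/proxies",
    "proxies": {
      "orders": {
        "matchCondition": { "methods": [ "GET" ], "route": "/api/v1/orders/{id}" },
        "backendUri": "https://myfunctionapp.azurewebsites.net/api/GetOrder?id={id}"
      },
      "invoices": {
        "matchCondition": { "methods": [ "GET" ], "route": "/api/v1/invoices/{id}" },
        "backendUri": "https://mybackend.example.com/invoices/{id}"
      }
    }
  }

Both routes are served from the same host name, giving consumers one uniform API regardless of where each operation actually runs.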

Conclusion

Azure Function Proxies really adds value in the modern world of APIs, which often consist of multiple heterogeneous (micro-)service operations. It offers very basic runtime API management capabilities that reside at the application level.

Cheers,
Toon

BizTalk Cumulative Update installation error: Cannot proceed with installation Biztalk server pack is installed


“Cannot proceed with installation. Biztalk server pack is installed.” – it is not the first time that I have encountered this problem. The first time was while I was trying to install Cumulative Update 2 for BizTalk Server 2016 (the error that you can see in the picture below) and, more recently, while I was trying to install Cumulative Update 3:

Microsoft BizTalk Server 2016 CU2 [KB 4021095]

Cannot proceed with installation. Biztalk server pack is installed. Please install Cumulative Update for BizTalk Server Feature Pack.

Cannot proceed with installation Biztalk server pack is installed

Cause

There is a new “kid on the block” that was introduced by the BizTalk Server product group in BizTalk Server 2016, called the “BizTalk Server 2016 Feature Pack”.

Microsoft will use the feature pack approach as a way to provide new, non-breaking functionality to the product at a faster pace, without the need for you to wait two years for the next major release of the product to get new features.

However, until now – Cumulative Update 3 for BizTalk Server 2016 – the Cumulative Updates have not been aligned with the feature pack. And when I say “Cumulative Updates are not aligned with the feature pack”, I mean that when Microsoft releases a new BizTalk Server Cumulative Update:

  • it will not be compatible with Feature Pack 1;
  • you can only install it on environments without BizTalk Server Feature Pack 1 installed, otherwise you will receive the error described above;
  • you need to wait a few more days/weeks for Microsoft to release a new update of BizTalk Server Feature Pack 1 with the Cumulative Update included.

At least based on the history until now:

I do not know if this behavior will change in future Cumulative Updates; I hope it does and that all new CUs will be compatible with FP1.

Important Note: currently, if you want to install FP1 you will be forced to install Cumulative Update 3 for BizTalk Server 2016, because it is now part of the Feature Pack… so you cannot install FP1 without it.

Solution

Of course, currently this is a non-issue because Microsoft has already released an updated version of Feature Pack 1 with the latest CU. But if this behavior continues to occur in future CU versions, you have two options:

  • Give up the Feature Pack by uninstalling it. Then you will be able to update your BizTalk Server environment with the latest fixes.
  • Be patient, and wait a few days/weeks until Microsoft releases an updated version of the Feature Pack compatible with the latest CU.

Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc. He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community.

BizTalk Server Configuration error: The backup file could not be created. (SSO)


Some clients have restrictive rules regarding what content can be installed on the C: (default) hard drive. For some of them, the C: drive is just for the operating system and components related to the operating system; everything else should be installed or kept on different drives.

The goal of this post is not to say whether that is the best approach or not; it is simply to document this error, its cause and what to do about it. Personally, I usually install BizTalk Server on the C: (default) drive, which is why I had never encountered this error/warning before.

So, during one of my recent installations, where I needed to install and keep all BizTalk Server installation and configuration components on a non-default drive (not C:), I encountered the following error message while trying to configure the backup file location of the SSO Master Secret Key:

TITLE: Microsoft BizTalk Server Configuration Wizard
——————————
The backup file could not be created. (SSO)

For help, click: http://go.microsoft.com/fwlink/events.asp?ProdName=Microsoft+BizTalk+Server+2016&ProdVer=3.12.774.0&EvtSrc=SSO&EvtID
——————————
ADDITIONAL INFORMATION:
(0x80070003) The system cannot find the path specified.
(Win32)

For help, click: http://go.microsoft.com/fwlink/events.asp?ProdName=Microsoft+BizTalk+Server+2016&ProdVer=3.12.774.0&EvtSrc=Win32&EvtID
——————————
BUTTONS:
OK
——————————

BizTalk Server Configuration Wizard: The backup file could not be created. (SSO)

Cause

I don’t know if this can be considered an error or actually a bug in the BizTalk Server Configuration Wizard because, for me, the wizard should be responsible for creating the specified path.

By default, the SSO Master Secret Key backup file location is set to C:\Program Files\Common Files\Enterprise Single Sign-On with the following name structure “SSO****.bak”:

  • where **** is a randomly generated name by the BizTalk Server Configuration Wizard.

The BizTalk Server Configuration Wizard opens an annoying Folder Browser dialog window that forces you to iterate through a directory tree (you cannot manually type the path), instead of using a more elegant and practical dialog to achieve the same result, like a SaveFileDialog component.

So, if you want to set the path according to your requirements – for me it was just changing the drive letter from C: to E: – you need to be sure to first manually create the full folder path on the desired destination drive, otherwise you will get this problem.

Solution

Make sure the folder path exists on the desired drive; if not, create it before you specify the path in the BizTalk Server Configuration Wizard.

If you have already specified the path in the BizTalk Server Configuration Wizard, that is, before you created the folder:

  • Create folder path on the desired hard drive
  • And refresh the Enterprise SSO Secret backup page

and the “The backup file could not be created. (SSO)” error message will be gone.
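
For example, if you want the backup on the E: drive in a folder mirroring the default location, you could create it up front from a command prompt before running the wizard (the path below is just an illustration – use whatever location your policy dictates):

  mkdir "E:\Program Files\Common Files\Enterprise Single Sign-On"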

Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc. He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community.

DTA Purge and Archive (BizTalkDTADb) job: Procedure or function dtasp_PurgeTrackingDatabase has too many arguments specified. [SQLSTATE 42000] (Error 8144). The step failed.


I normally advise my clients and partners to religiously follow the order and steps of my BizTalk Server Installation and Configuration guide – advice that I also take into consideration myself. Not because following the exact order of the steps is mandatory (some of them are, others aren’t), but because if you follow that “recipe” you will end up with a successful BizTalk Server installation, better optimized than a default installation, without encountering errors.

Despite all of my warnings, I am also human and make the “mistake” of not following the order of some steps to try to speed up the BizTalk Server configuration/optimization process. A few weeks ago, while installing multiple machines in several environments at the same time, we encountered a “new” problem after we configured the DTA Purge and Archive (BizTalkDTADb) job. When we tried to execute the DTA Purge and Archive (BizTalkDTADb) job, it failed with the following error message:

Executed as user: <servicename>. Procedure or function dtasp_PurgeTrackingDatabase has too many arguments specified. [SQLSTATE 42000] (Error 8144). The step failed.

DTA Purge and Archive (BizTalkDTADb) job: dtasp_PurgeTrackingDatabase has too many arguments specified

Cause

The error message is very clear – “… dtasp_PurgeTrackingDatabase has too many arguments specified …” – but my first instinct was to say “Impossible!”. I have been using the same job configuration for the last 10 years in all my environments successfully, so why is this failing?

Then I realized that I had recently updated my guide, improving the DTA Purge and Archive (BizTalkDTADb) job so that it can automatically delete orphaned BizTalk DTA service instances.

This is a new extra feature that the BizTalk product group released with:

The “traditional” contract of the stored procedure has the following parameters that you must configure:

  • @nHours tinyint: Any completed instance older than (live hours) + (live days) will be deleted along with all associated data.
  • @nDays tinyint: Any completed instance older than (live hours) + (live days) will be deleted along with all associated data. The default interval is 1 day.
  • @nHardDays tinyint: All data older than this day will be deleted, even if the data is incomplete. The time interval specified for HardDeleteDays should be greater than the live window of data. The live window of data is the interval of time for which you want to maintain tracking data in the BizTalk Tracking (BizTalkDTADb) database. Anything older than this interval is eligible to be archived at the next archive and then purged.
  • @dtLastBackup datetime: Set this to GetUTCDate() to purge data from the BizTalk Tracking (BizTalkDTADb) database. When set to NULL, data is not purged from the database.

Normally the step will be configured like this:

declare @dtLastBackup datetime
set @dtLastBackup = GetUTCDate()
exec dtasp_PurgeTrackingDatabase 1, 0, 1, @dtLastBackup

But with the CU’s described above there is an additional parameter that you can use:

  • @fHardDeleteRunningInstances int = 0: if this flag is set to 1 we will delete all the running service instances older than hard delete days. By default, this new parameter is set to 0

So now the configuration that I normally use is like this:

declare @dtLastBackup datetime
set @dtLastBackup = GetUTCDate()
exec dtasp_PurgeTrackingDatabase 1, 0, 1, @dtLastBackup, 1

Note: After you install this update, you must manually update the DTA Purge and Archive job definition to pass the additional parameter @fHardDeleteRunningInstances if you want to clean up running service instances that are older than @nHardDeleteDays. By default, this new parameter is set to 0. This continues the current behavior. If you require the new behavior, set this parameter to 1.

My problem was that I was already using this new configuration without having installed any CU on our brand-new BizTalk Server 2016 installation.

Solution

Of course, you should first review the job configuration to check whether you have properly set all the stored procedure parameters in the script.

But in our case, installing the most recent cumulative update solved the issue.

Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc. He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community.

Microsoft Integration Weekly Update: Oct 9, 2017


Do you find it difficult to keep up to date with all the frequent updates and announcements on the Microsoft Integration platform?

The Integration weekly update can be your solution. It’s a weekly update on topics related to Integration – enterprise integration, robust & scalable messaging capabilities, and Citizen Integration capabilities empowered by the Microsoft platform to deliver value to the business.

If you want to receive these updates weekly, then don’t forget to Subscribe!

On-Premise Integration:

Cloud and Hybrid Integration:

Feedback

Hope this is helpful. Please feel free to let me know your feedback on the Integration weekly series.
