What is coming in v9.0

BizTalk360 turns complex administrative tasks into modern, easier ones. We always aim to constantly improve our product, based on our customers' feedback and their business needs. We recently released v8.9.5. The features for the upcoming version 9.0 are, as usual, chosen from customer feedback, based on their impact and the number of requests.

You asked, and we are implementing the features below for v9.0.

Alarm Auditing

When we work on cutting-edge technology, security is one of the important factors we always need to consider. Keeping that in mind, we earlier implemented Governance and Auditing, which audits BizTalk-level activities performed from BizTalk360. Think of actions on BizTalk Applications, Service Instances, Host Instances, BizTalk and SQL Servers, ESB Messages and Business Rules.

Many customers told us it would be great if BizTalk360 activities, such as alarm and user access policy operations, were also audited. So, for this release we have implemented auditing of alarm activities. Alarm operations such as creating a new alarm, deleting an alarm, changing the alarm status (Enable/Disable) and editing alarm details will be audited with the existing and new values, along with the user details. With this, the administrator gets a clear picture of all alarm activities.

Also, we have taken up Secure SQL Query auditing. If any query is created or modified, this will be audited. We also implemented auditing of the 'Execute' operation. With this, the administrator can easily look up who executed which query, and which parameter values were passed for the query execution.

SMTP Notification Channel

In BizTalk360 version 8.0, we introduced Notification Channels. With these, it's easy to send alerts to external systems like your ticketing system or internal databases, to call REST endpoints, or to execute PowerShell scripts.

To add more value to this, we are bringing the SMTP Notification Channel, which provides the ability to create email distribution lists by grouping email ids based on business needs. The monitoring service will send a notification to a group of people, simply by configuring the recipient addresses (To, CC) in the channel. Once the SMTP Notification Channel is configured, it can be used in all alarms just by enabling the channel. This prevents users from typing email ids over and over again.

Unmapped Artifacts List

When artifacts are mapped for monitoring, BizTalk360 takes care of them and notifies users when any violation occurs. But what happens if you forgot to map some artifacts for monitoring? You will not get any notification for those, right?! Of course, we don't want you to miss artifacts for monitoring.

This problem will be solved in v9.0: you will get a summarized list in the Monitoring Dashboard. This list contains the status of the artifacts which have been mapped for monitoring, as well as the list of unmapped artifacts. For instance, if any new artifacts have been added to your BizTalk environment, we will bring that to your notice, so you can easily map these artifacts for monitoring. Also, we will send the list of unmapped artifacts to the system administrator at the configured time interval.

Switch User Roles


As of now in BizTalk360, once a user profile is created, the user role (Super User, Normal User) cannot be modified at any point. To change the user role, the profile needs to be deleted and recreated, which is a time-consuming process. This will be solved by simply editing the user role, so Super Users can quickly be converted to Normal Users, and vice versa, in a single step.

Copy to Clipboard

Your business data is highly valuable; the information it contains drives decision-making and problem-solving. From v9.0 on, we provide an option to copy your information in a single click from the BizTalk360 UI to the Windows clipboard.

Apart from these new features, we are working on improvements in the following sections.


Monitoring Dashboard

The BizTalk360 'Monitoring Dashboard' is the one-stop point for support people to view the health status of a BizTalk environment. We have planned some changes to the Monitoring Dashboard UI, which will help you to see the summarized dashboard in a much more enriched view.

AutoCorrect reset

In BizTalk360 we have the Auto Healing functionality: if your artifacts go down, the system will try to auto heal the violation. The system retries the auto healing process a configured number of times. Once the retry limit is reached, the auto healing process stops.

In v8.9.5 we introduced the auto reset option for the auto healing process: after a configured interval time, the retry count is reset to 0 and the auto healing process starts again. We kept the default "interval time to reset the retry count" at 0, which means that the retry limit will never reset unless the user manually changes the interval time. This is quite a tedious process when you have a huge number of artifacts mapped for auto healing.

This problem will be solved by providing an option to define the default interval time globally. Once you update the interval time, it will be used for all artifacts.

Conclusion

Considering the feedback from our customers, BizTalk360 will continue to provide more useful features. Now we would like to hear from you, so please take some time to fill in this questionnaire, to help us prioritize the upcoming feature tasks. Stay tuned for v9.0!

Why not give BizTalk360 a try! It takes about 10 minutes to install on your BizTalk environments and you can witness and check the security and productivity of your own BizTalk environments. Get started with the free 30 days trial.

Microsoft Integration Weekly Update: March 18, 2019

Do you find it difficult to keep up to date with all the frequent updates and announcements in the Microsoft Integration platform?

The Integration weekly update can be your solution. It's a weekly update on topics related to Integration – enterprise integration, robust & scalable messaging capabilities and Citizen Integration capabilities, empowered by the Microsoft platform to deliver value to the business.

If you want to receive these updates weekly, then don’t forget to Subscribe!


Microsoft Announcements and Updates

Community Blog Posts

Videos

Podcasts

How to get started with iPaaS design & development in Azure?

Feedback

Hope this is helpful. Please feel free to reach out to me with your feedback and questions.

BizTalk Server WCF-* Adapter: Please verify that the receive location exists, and that the isolated adapter runs under an account that has access to the BizTalk databases.

I think I have said in the past that you should not try to configure anything in BizTalk Server when you are tired. My advice: go to sleep for an hour and come back… unless the client is eager and demands or requests things to be done… and then the error happens. This was one of those errors of which I usually say that the problem was between the chair and the keyboard. This week I returned to work after my little honeymoon leave, of which I spent part working and another part with my little kid being a bit sick (I need to compensate my wife with a proper vacation for being so understanding). So, as you can imagine, I returned a little tired, and the first day was one of those days on which several clients required my presence for several small things at the same time.

One of those things was configuring, correctly and according to security best practices, the IIS application pools that were being used to run web sites with some orchestrations exposed as web services, which initially were running under the BizTalk Server Administration account.

Once I finished configuring the application pools, I started receiving the following error:

The Messaging Engine failed to register the adapter for "WCF-WebHttp" for the receive location "/ModifyOperationStatus/ModifyOperationStatus.svc". Please verify that the receive location exists, and that the isolated adapter runs under an account that has access to the BizTalk databases.


Cause

Usually, this can happen for two reasons:

  • There isn’t a receive location created and enabled listening to this web service;
  • Or this is a permission issue! And is typically related by the fact that the account or service account specified on the application pool that the web service is running is not… a member of the BizTalk Isolated Host Users group.

In my case, because the names of the service accounts were very similar, I improperly configured the application pool to run with the service account that was a member of the BizTalk Host Users group (btsapphostsrv), instead of the service account that is a member of the BizTalk Isolated Host Users group (btsiapphostsrv).

Solution

To solve this issue, you first should check and double-check if the IIS Application Pool Identities are correctly configured.

If yes, guarantee that the user or service account is part of the BizTalk Isolated Host Users group. If not, do one of the following (a PowerShell sketch follows the list):

  • Make sure you add that user or service account to the BizTalk Isolated Host Users group.
  • Or change the IIS Application Pool Identity to an account that is already a member of the BizTalk Isolated Host Users group.
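
If the group is local to the BizTalk Server machine, the first option can even be scripted. A minimal sketch, assuming PowerShell 5.1 or later and an elevated prompt (the account name is a placeholder):

# Add the application pool's service account to the local BizTalk Isolated Host Users group.
# DOMAIN\btsiapphostsrv is a placeholder for your isolated host service account.
Add-LocalGroupMember -Group "BizTalk Isolated Host Users" -Member "DOMAIN\btsiapphostsrv"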

And then, make sure that there is a receive location configured to listen to this web service, and that it is enabled.


In my case, changing the identity to the BizTalk Isolated Host instance account, which is, of course, a member of the BizTalk Isolated Host Users group, solved my issue.


BizTalk360 Technical Support – A look back in 2018

As the BizTalk360 technical support team, we receive support tickets from our customers through various channels. 2018 was, of course, a great year for the support team. We were not only involved in resolving support tickets, but also in Customer Relationship calls, Best Practice Installation and Configuration sessions, and even some demos for customers.

This helps us to understand the customers' expectations better and to improve the product to cater to the needs of the customer. As the saying goes, 'Every ending has a new beginning'; the end of 2018 brought us new lessons and experiences with various customers and new processes. Every support case has a lesson in it, be it the customer scenario, their infrastructure settings, or our way of troubleshooting the case and our responses. We make sure that we keep improving our support, thereby making our customers happy and working more efficiently.

We are happy to share the statistics of the support cases we handled in 2018. They are the result of our continuous hard work and dedication, which has led to some really positive numbers in BizTalk360 Customer Support. These statistics are taken from data provided by our customer support platform, Freshdesk.

  • 14,626 customer queries addressed in 2018
  • Tickets ranging across technical support, licensing and sales enquiries
  • The busiest month for the team was October 2018, in which we received about 2,132 support tickets

  • We managed to respond to 97% of the tickets and resolve 87% of the tickets within the SLA
  • We resolved 76% of the tickets with just one response to the customer
  • We received support tickets via
    • Email
    • Support Portal
    • Feedback Portal

We make sure customer satisfaction is achieved, and this can clearly be seen in the SLA numbers. Below are the ratings and appreciation given by our customers.

Knowledge Sharing

Of course, in the software industry it is all about teamwork and knowledge sharing. Teamwork is involved in solving each and every support case: one member might analyse the case, another may test it if required. In 2018, we started 'Support Deliberation' meetings for knowledge sharing. As an engineering team, not all members are involved in product support. Hence, it is important that the support team members share their knowledge of the handled support cases with the team. This gives the team insight into the various customer scenarios that need to be known, so that we can check for them in our development and testing phases.

Introduction to DevOps practice

This is one of the major changes that happened during 2018. We involved ourselves in the development activities too; the next step in improving the product to cater to the needs of the customers. The complete process is explained here. From task planning to post-release validation, we follow all the different steps in the process.

Best Practice Installation and Configuration sessions

This is a new initiative taken by the Client Relationship team to help our customers with the installation of BizTalk360 and to guide them through the basic configuration steps that are required for BizTalk360 to start monitoring their BizTalk environment.

This is a two-hour session during which BizTalk360 is installed, important configurations are done and an overview of some of the important features of BizTalk360 is given. We also explain some of the best practices that need to be followed for the alarm configuration, the Advanced Event Viewer setup and data purging, which may, in turn, affect the performance of the application. The knowledge our customers obtain during this session will help them to get the most out of the product, so that they can monitor their BizTalk environment efficiently.

In 2019

We have shifted from Agile to Kanban to record all our activities. We continuously strive to improve ourselves to provide better support, thereby trying to resolve customers' queries on time.

The most awaited event in the Microsoft Integration space, Integrate 2019, is on the way. We will hold this event in two locations this year. The dates and venues are finalized, and the early bird offer ends on March 31st, 2019. You can check the details here.

Conclusion

>>Which feature would you like to see coming in BizTalk360 in upcoming releases?<<

We would like to ask you, our customers, to please take the time to fill in this questionnaire. This helps us to prioritize the upcoming feature tasks and lets us know what your main pain points are. In case of any queries, you can always write to support@biztalk360.com, so that we can immediately answer your queries and resolve the issues. Happy monitoring with BizTalk360!


Interesting support cases 2018 – Part 2

In my previous blog post, I highlighted 5 interesting cases which we received and solved in the past year. In this blog, I would like to add 5 more interesting support cases.

Let’s get into the cases.

Case 6: Data Monitor Dashboard slow to respond

In 2017, we started a new initiative called the 'Customer Relationship Team'. This team gets in touch with our customers regularly, with a frequency of 3-4 months. The team makes sure to find out how the customers are using the BizTalk360 product, whether they are facing any problems, and if they have any queries.

If so, we clarify the customer's queries during the call. If we can't clarify a problem within the short time of the call, we create a support case for the query and make sure we solve the case.

In one such call, a customer raised a concern about the Data Monitoring dashboard being slow to respond and taking more and more time to load.

Troubleshooting

During the investigation of the slowness, we came to know that the customer had configured 147 data monitoring alarms. Of those, 110 data monitoring alarms were scheduled on a 15-minute cycle, which produces a huge number of results.

In more detail, 110 data monitoring results are produced every 15 minutes, i.e. 4 times per hour:

110 × 4 × 8 (business hours) = 3,520 results

110 × 4 × 24 (whole day) = 10,560 results

Loading over 10k results in a single load will certainly take time.

Solution

Most customers won't use that many schedules for data monitoring alarms. Still, to handle such a huge load, we have improved the performance of the Data Monitoring dashboard by adding a filter option to select specific alarms and the corresponding status. The improved Data Monitoring dashboard is available from version 8.7 onwards.

Case 7: System resources configuration

A customer faced the exception 'the network path was not found' while trying to enable SQL Server system resources monitoring.

Troubleshooting

We requested the customer to check the following things:

  1. Is the BizTalk360 service account a local admin on the machine where SQL Server is hosted?
  2. Is the Remote Registry service started?
  3. Are the firewall ports opened for SQL Server?
  4. From the BizTalk360 server, can you connect to that SQL Server through SQL Server Management Studio?
  5. Can you connect to the remote computer (the SQL Server configured for monitoring) from the BizTalk360 machine where the monitoring service is running?

All other checks passed, but in Perfmon, while connecting to the SQL Server from the machine on which BizTalk360 is installed, they faced the same exception.

Solution

To reach the SQL Server on another machine, port 1433 needs to be open. To monitor the system resources of the SQL machine, an additional port needs to be open: port 135, which is used for RPC and WMI. We have described the dependent ports that need to be enabled in our existing blog; a sketch of how to open them is shown below.
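
For instance, assuming Windows Firewall is used on the SQL Server machine, inbound rules for both ports could be created with PowerShell like this (the rule names are arbitrary; run from an elevated prompt):

# Allow SQL Server connections (TCP 1433) and RPC/WMI (TCP 135)
New-NetFirewallRule -DisplayName "SQL Server (TCP 1433)" -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Allow
New-NetFirewallRule -DisplayName "RPC/WMI (TCP 135)" -Direction Inbound -Protocol TCP -LocalPort 135 -Action Allow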

Even after adding the ports, the problem persisted. At last, we found that the firewall rules were not activated/enabled; after activating the rules, we were able to solve the case. This is one of those cases in which we all missed checking the basic step that a rule should be activated, because no one had access to view the rules other than the customer's admin.

Case 8: SFTP Monitoring – PublicKeyAuthentication

A customer was trying to configure monitoring for an SFTP location and was facing issues. It worked fine when the authentication used a simple username and password. However, once they configured PublicKeyAuthentication, they faced issues during the configuration.

Troubleshooting

We started with the basic troubleshooting steps, like authentication and access permissions, and we understood that it had all the rights to access the SFTP site. During the investigation, we found that, within a folder, the inner folders were being picked up as well, instead of just the files.

Solution

To find the exact root cause of the issue, we developed a console application (with logging enabled) and provided it to the customer. It gave a clear picture of the problem: as mentioned earlier, the inner folders were also being processed when PublicKeyAuthentication was used. This has now been fixed.

Case 9: Message Count mismatches

A customer noticed a mismatch between the Receive port and Send port message counts in the Analytics Messaging Patterns.

Troubleshooting

The customer had a very simple scenario (see below), where a file is picked up and placed in a different location, but the Send port count showed twice the Receive port count. He got a similar doubling of the Receive and Send port counts for other message flows as well.

Example:

Send Port – 12 messages

Receive Port – 6 messages

During the investigation, we found that whenever BizTalk retries to submit the suspended messages, the counts get doubled.

Solution

As of now, we show the message transfer count rather than the message count. We do this because it helps to determine the messaging performance of BizTalk artifacts in an environment. We are going to take this up as a feature enhancement in the future.

Case 10: Not possible to expand columns in query outcome

Normally, you can expand the column size of the query outcome in a grid. However, customers were facing a problem where they were unable to expand the columns in the MessageBox Queries grid.

Troubleshooting

During the investigation, it turned out that the customers were facing this problem in Chrome, but not in Internet Explorer and Firefox. They faced the same issue while opening the browser in an incognito window as well. This was really strange to us, because while using the same version, we were not able to reproduce the problem.

We investigated at the code level and everything seemed fine at our end. So, we decided to set up a meeting. During the meeting, we were able to see the problem at the customer's end; we had no clue at that time, requested a few days' time, and closed the meeting.

We analyzed the case, but it was hard to reproduce at our end: it worked for most of our team members and only a few were facing the issue. A team member who faced the issue and one for whom it worked fine teamed up and compared each component from scratch to find the difference, and so we found the cause.

Solution

If the Chrome page is zoomed in or out, the column resize doesn't work; this is what happened at the customer's end as well.

It turned out that this was a problem with the Kendo Grid control; the issue was introduced by Kendo in the latest version. We worked together with Kendo and solved the case.

Satisfaction does it!

As support engineers, we receive different cases on a daily basis. Every support case is unique, because a problem will be faced by different customers in different environment architectures. But some support cases are interesting because of the root cause of the problem and the way of troubleshooting the case. I'm happy that I have worked on such challenging and interesting cases.


Microsoft Integration Weekly Update: March 11, 2019

Do you find it difficult to keep up to date with all the frequent updates and announcements in the Microsoft Integration platform?

The Integration weekly update can be your solution. It's a weekly update on topics related to Integration – enterprise integration, robust & scalable messaging capabilities and Citizen Integration capabilities, empowered by the Microsoft platform to deliver value to the business.

If you want to receive these updates weekly, then don’t forget to Subscribe!


Microsoft Announcements and Updates

Community Blog Posts

Videos

Podcasts

How to get started with iPaaS design & development in Azure?

Feedback

Hope this is helpful. Please feel free to reach out to me with your feedback and questions.

Achieving a Consolidated Monitoring View Using Custom Widgets

BizTalk360 Monitoring

BizTalk360 comes with out-of-the-box capabilities to monitor BizTalk Artifacts, the BizTalk Environment, Queues (IBM, Azure Service Bus and MSMQ), File Locations (File, FTP and SFTP) and so on. The Monitoring Dashboard represents the monitoring status in a nice graphical chart view. Different sets of users, like BizTalk Administrators, Operators, technical members and business users, use the monitoring features for their day-to-day activities. Users create multiple alarms to suffice their needs to monitor the health of their BizTalk environment.

While users are monitoring multiple alarms, they must keep an eye on multiple dashboards, which is a cumbersome task if a user takes care of more than two alarms. Based on this requirement, a few customers have requested a consolidated view of alarms. In this article, we go through the process of creating a consolidated view of multiple alarms, using Custom Widgets.

Scenarios

BizTalk360 users configure their alarms based on their business vertical or in alignment with their processes. There are different patterns in alarm configurations. Some of the commonly used patterns are:

  • Integration-based alarm configuration

Configuring BizTalk Artifacts, Web Endpoints, Queues, File Location etc. of integration in an alarm.

  • Role-based alarm configuration

BizTalk Administrators will look after infrastructure settings like Disks, System Resources and Host Instances, and configure these in an alarm. BizTalk Operators can configure BizTalk Artifacts and other entities in a single alarm.

  • Entity-based alarm configuration

For instance, users can configure all their Queues (IBM/Azure Service Bus) in an alarm. Similarly, they can configure Web Endpoints, Infrastructure (Disks, System Resources) in separate alarms.

With the above patterns, Role-based and Entity-based alarms can provide a single view of multiple alarms, in which you can monitor all the entities. When you follow the Integration-based alarm pattern, however, entities like Application Artifacts, Azure Service Bus Queues and BizTalk Server health activities are in different alarms.

Many organizations follow the Integration pattern by grouping all the related artefacts in an alarm. In this scenario, users (BizTalk Operators/BizTalk Technicians) must monitor multiple alarms. For instance, BizTalk Operators looking after BizTalk Artifacts, or developers responsible for monitoring Azure Service Bus Queues, will have to monitor multiple alarms. Let's see how to overcome this challenge by using custom widgets to group multiple alarms.

Consolidated Dashboard

The Monitoring Dashboard is one of the most used features in BizTalk360 to monitor BizTalk Environments and Artifacts. Using Custom Widgets to group multiple alarms into a single view, based on the business or a team member's role, provides an enriched monitoring capability.

Custom Widgets are a powerful feature in BizTalk360, as such widgets provide the opportunity to meet specific business scenarios. Many customers have adopted Custom Widgets to provide solutions to their end users.

Following are a few illustrated scenarios of grouping alarms when they are configured based on integrations:

  1. Azure Service Bus Queues
  2. BizTalk Application Artifacts
  3. BizTalk Environment Health

In this article, we take one of these scenarios to show how users can group alarms using Custom Widgets.

Azure Service Bus Queues

The custom widget uses the BizTalk360 Secure SQL Query API to fetch the monitoring status of all Azure Service Bus Queues from the BizTalk360 database. A user can select a filter with the different alarms they want to group.

By default, the user sees only the unhealthy queues in an environment. If they set the Enable Healthy Queue option, they will be able to view all queues in the tree nodes. The Azure Service Bus Queues custom widget fetches the information from the Secure SQL Query at a refresh interval which is configured in the script of the custom widget.

Follow the below steps to build the custom widget's script.

1. Secure SQL Query

Create the Secure SQL Query to fetch the monitoring status of the Azure Service Bus Queues in the environment; the query used here is shown below. To know more about how you can execute Secure SQL Queries using custom widgets, refer to this article.
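
For reference, this is the query the widget executes; it is the same statement that appears, as a single line, in the script under step 2:

SELECT AA.Id, AA.[Name], AME.MonitorStatus, AME.ExecutionResult
FROM [dbo].[b360_alert_MonitorExecution] AME
INNER JOIN [dbo].[b360_alert_Alarm] AA
    ON AME.AlarmId = AA.Id AND AME.MonitorGroupType = 'Queues'
WHERE AME.LastExecutionTime = (
        SELECT MAX(LastExecutionTime)
        FROM [dbo].[b360_alert_MonitorExecution]
        WHERE MonitorGroupType = 'Queues'
          AND AA.IsAlertDisabled = 'false'
          AND AlarmId = AA.Id
          AND LastExecutionTime >= DATEADD(MINUTE, -60, GETUTCDATE()))
  AND AME.EnvironmentId = '`productionenvid_189308`'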

2. Custom Widget Script

Create the initial variables, placeholders and SQL query to call the BizTalk360 API method in the custom widget's script:

// BEGIN User variables
azureQueueRefresh = 20;
username = "`username_812414`";
password = "`password_701422`";
environmentId = "`productionenvid_189308`";
queryId = "8eea2771-10a7-44c6-8709-a597687434cf";
queryName = "Azure ServiceBus Queues";
sqlInstance = "kovai-bts";
database = "BizTalk360";
// The SQL query which needs to be executed from the custom widget
sqlQueryForAzureGraph = "Select AA.Id,AA.[Name],AME.MonitorStatus,AME.ExecutionResult from [dbo].[b360_alert_MonitorExecution] AME Inner Join [dbo].[b360_alert_Alarm] AA ON AME.AlarmId = AA.Id and AME.MonitorGroupType = 'Queues' WHERE AME.LastExecutionTime = (SELECT MAX(LastExecutionTime) from [dbo].[b360_alert_MonitorExecution] WHERE MonitorGroupType = 'Queues' and AA.IsAlertDisabled='false' and AlarmId = AA.Id and LastExecutionTime >= DATEADD(MINUTE,-60,GETUTCDATE())) AND AME.EnvironmentId ='`productionenvid_189308`'";
// Name of the BizTalk360 server (needed to do an API call to execute the SQL query)
bt360server = "biztalk360.westus.cloudapp.azure.com";
// Mention the created alarm details and the respective partner name to display in the graph
AlarmDetails = [
    { AlarmName: "Threshold-1", PartnerName: "Air India" },
    { AlarmName: "Threshold-2", PartnerName: "British Airways" },
    { AlarmName: "Threshold-3", PartnerName: "DHL" },
    { AlarmName: "BizTalk360 Default Alarm1", PartnerName: "Jet Airways" }
];
// END User variables

3. GoJS Framework

BizTalk360 uses the GoJS framework's organization chart to represent the monitoring hierarchy in the UI. It displays an organization chart structure with the Azure Service Bus Queues node as the root node, the integration friendly name as a second-level node and the queue threshold violation details as child nodes. You can control the expand/collapse capability of the nodes through the custom widget code.
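
To give a rough idea of what this looks like in code, here is a minimal GoJS sketch (the div id and the queue details are made up; the real template is part of the widget source on GitHub):

var $ = go.GraphObject.make;
var diagram = $(go.Diagram, "queueWidgetDiv",
    { layout: $(go.TreeLayout, { angle: 90 }) }); // top-down organization chart

diagram.nodeTemplate =
    $(go.Node, "Auto",
        $(go.Shape, "RoundedRectangle", { fill: "white" }),
        $(go.TextBlock, { margin: 6 }, new go.Binding("text", "name")));

// Root node -> integration friendly name -> threshold violation detail
diagram.model = new go.TreeModel([
    { key: 1, name: "Azure Service Bus Queues" },
    { key: 2, parent: 1, name: "British Airways" },
    { key: 3, parent: 2, name: "orders-queue: Active Message Count > 100" }
]);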

4. Filter Options

Users are able to filter the data based on the selected alarms, or use the option to show healthy queues. By default, users see only the unhealthy Azure Service Bus Queues. The other scenarios are implemented through custom widgets in a similar fashion. To get the full source code of the custom widgets, you can download it from our GitHub project.

Summary

We hope this article is useful for grouping multiple alarms into a single view per environment. This will bring you more control over the custom development you may want to achieve.

Get started with the free 30 days trial. For any queries/feedback, please write to us at support@biztalk360.com.


Faster Service Fabric Deployments with PowerShell

Presentation

A few weeks ago I had the great privilege of presenting a 60-minute breakout session at Microsoft Ignite | The Tour in Sydney. It was thrilling to have over 200 people registered to see my topic "Seamless Deployments with Azure Service Fabric", especially in the massive Convention Centre.

In the session I demonstrated the self-healing capabilities of Service Fabric by introducing a bug in the code and then attempting a rolling upgrade. It was impressive to see how Service Fabric detected the bug after the first node was upgraded and then immediately started rolling it back.

As you can imagine, it took a fair amount of practice to get the demo smooth and functioning within the tight time limits of the average audience attention span. (In fact, I had to learn how to tweak both the cluster and the application health check settings to shorten the interval – perhaps the subject of another blog post!) Naturally this also entailed frequently “resetting” the environment so that I could start over when things didn’t go quite as planned, or if I wanted to reset the version number. If you’ve ever worked with Service Fabric before you would know that deployments from Visual Studio (or Azure DevOps) can take a while; and undeploying an application from Service Fabric manually in the portal is painful!

For example, if I want to undeploy an application from a Service Fabric cluster in the web-based Service Fabric Explorer, I have to do the following in this order:

  1. Remove the service
  2. Remove the application
  3. Unregister the application type
  4. Remove the application package

What becomes really annoying is that each step elicits a confirmation prompt where you need to type the name of the artefact you want to remove! That gets old pretty fast.

Thankfully, there is an alternative. Service Fabric offers a number of different ways to deploy applications, including Visual Studio, Azure CLI, and PowerShell.  Underneath the covers I expect these all make use of the REST API. But in my case I found the simplest and most efficient choice was PowerShell. Using the documented commands, it is easy to create a script that will deploy or undeploy your application package in seconds. And I mean seconds! It was astounding to see how quickly the undeploy script could tear down the application!

The script I created is available in my demo code on GitHub. I’ll walk through some of it here.

Pre-requisites

First, it is necessary to have the Azure PowerShell installed. This is normally included when you install the Service Fabric SDK, but you must enable execution of the scripts first.

Second, in order for the Deploy_SFApplication.ps1 script to work, you must have already packaged the application. You do this by right-clicking the Service Fabric project in Visual Studio (not the solution file!) and selecting “Package”. The path to this package is a mandatory parameter for this script. The Undeploy_SFApplication.ps1 script does not require this.

Parameters

To make the scripts reusable with the minimum amount of changes, I've parameterized all of the potentially variable settings:

  • path – The path to your packaged application. (This parameter is not required for the Undeploy_SFApplication.ps1 script.) Example: C:\Repos\Demos\Voting_v3\Voting\pkg\Debug
  • imageStorePath – Where you want the package stored when uploaded in Service Fabric. Typically this can be the application name, perhaps with a version. Example: Voting
  • appTypeName – Usually the app name with "Type" appended. Example: VotingType
  • appName – Must be prepended with "fabric:/". Example: fabric:/Voting
  • appVersion – IMPORTANT! You cannot deploy the same version that already exists; it will fail. Example: 1.0.0
  • ServerCommonName – If using your local development cluster, just "localhost". Otherwise, if in Azure, "CLUSTER_NAME.REGION.cloudapp.azure.com". Example: myCluster.australiaeast.cloudapp.azure.com
  • clusterAddress – Append the port number (usually 19000) to the $ServerCommonName variable: $(ServerCommonName):19000, which resolves to myCluster.australiaeast.cloudapp.azure.com:19000
  • thumb – The thumbprint of the certificate used for a secured cluster (not generally required for a local cluster). NOTE: the script currently sets the location of the certificate to the current user's personal store; however, this could easily be parameterized. See https://docs.microsoft.com/en-us/dotnet/framework/wcf/feature-details/how-to-retrieve-the-thumbprint-of-a-certificate

Script Execution Steps

First thing we do is import the appropriate module:

Import-Module "$ENV:ProgramFiles\Microsoft SDKs\Service Fabric\Tools\PSModule\ServiceFabricSDK\ServiceFabricSDK.psm1"

Then it’s simply a matter of following using the documented commands, substituting the variables as appropriate in order to:

  • Connect to the cluster
  • Upload the package to the package store
  • Register the application type
  • Create the application instance

My Deploy_SFApplication.ps1 script also prints out the application instance details as well as the associated service instance details:

The Undeploy_SFApplication.ps1 script does much the same, except in reverse of course (a condensed sketch follows the list):

  • Connect to the cluster
  • Remove the application instance
  • Unregister the application type
  • Remove the application package
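
Condensed, the undeploy script boils down to something like this. This is only a sketch using the documented Service Fabric cmdlets, with the parameter values from the table above hard-coded; the full, parameterized script is on GitHub:

# Connect to the cluster (add the certificate parameters for a secured cluster)
Connect-ServiceFabricCluster -ConnectionEndpoint "localhost:19000"

# Tear everything down, in reverse order of deployment
Remove-ServiceFabricApplication -ApplicationName "fabric:/Voting" -Force
Unregister-ServiceFabricApplicationType -ApplicationTypeName "VotingType" -ApplicationTypeVersion "1.0.0" -Force
Remove-ServiceFabricApplicationPackage -ApplicationPackagePathInImageStore "Voting" -ImageStoreConnectionString "fabric:ImageStore"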

The use of the -Force flag means that when you run this script you will NOT be prompted for confirmation like this:


Whilst the deployment script takes about 20 seconds for this Voting application, the undeploy script takes less than five seconds!

As mentioned previously, the scripts are freely downloadable along with the rest of the demo code on GitHub. I’m no PowerShell guru, so I’m sure there’s plenty of room for improvement. Send me a pull request if you have any suggestions! And feel free to get in touch if you have any questions.

This post was originally published on Deloitte’s Platform Engineering blog.

Determine Ready to Run / Active Service Instance Details with Custom Widgets

When messages flow into BizTalk Server, they may get persisted in BizTalk Server's MessageBox database. For a healthy BizTalk environment, it's important to keep an eye on the number of service instances in the environment. For example, having a large number of suspended service instances will bloat your MessageBox database and adversely affect the overall performance of your environment.

Administrators should always keep an eye on the service instance counts, via the group hub page of the BizTalk Administration Console. The person who is monitoring this needs to be a BizTalk expert and understand the importance of each state. The group hub page only displays the instance counts; it won't tell you whether these are still at a healthy level. With BizTalk360, on the other hand, you can set Warning and Error threshold levels (instance counts) for each state, at the application level. Once the number of instances rises above the threshold, the system will send a notification alert.

Also, the administrator can set up an alarm like "If there are >20 Suspended Service Instances between 09:00 AM and 05:00 PM, resume all the instances". He can simply log in to the BizTalk360 Data Monitoring dashboard to see the status of the MessageBox data for the day. He can also set up email notifications for the alarm. By doing so, the administrator eliminates the need to frequently log in to the BizTalk360 application to check the status of the service instances.

Custom widget to list all service instance details which are active for a long period

Custom Widgets are one of the most interesting and powerful features available in BizTalk360. With a custom widget, users can easily integrate third-party portals like Power BI, Salesforce or internal portals. You can also easily display Secure SQL Query results, monitor BizTalk artefact statuses, etc.

For instance, if a host instance is too busy to process all its associated service instances, those instances will be in the "Ready to Run" state until the host instance has resources available. When this situation persists for a longer timeframe, the service instances will accumulate, thereby bloating the MessageBox.

With the below script, you can quickly create a custom widget to look up the number of service instances which have been in the active state for a particular period of time. For example, the administrator can easily check the details of service instances which have been in the Ready to Run or Active state for more than 15 minutes.

Creating such a widget consists of the following steps:

  1. Create a Secure SQL Query
  2. Bind the SQL Query result to the custom widget

Both steps are described below.

1) Create Secure SQL Query

The below query retrieves the Service Instances which are in the Active state for more than 30 minutes.

DECLARE @dt DATETIME = ( DATEADD(MINUTES,-30,GETUTCDATE()))

exec ops_OperateOnInstances @snOperation=0 ,@fMultiMessagebox=0 ,@uidInstanceID='00000000-0000-0000-0000-000000000000', @nvcApplication=N'', @snApplicationOperator=0, @nvcHost=N'' ,@snHostOperator=0,@nServiceClass=111,@snServiceClassOperator=0,@uidServiceType='00000000-0000-0000-0000-000000000000', @snServiceTypeOperator=0, @nStatus=2, @snStatusOperator=1 ,@nPendingOperation=1 ,@snPendingOperationOperator=0,@dtPendingOperationTimeFrom='1753-01-01 00:00:00', @dtPendingOperationTimeUntil='9999-12-31 23:59:59.997', @dtStartFrom='1753-01-01 00:00:00', @dtStartUntil=@dt ,@nvcErrorCode=N'',  @snErrorCodeOperator=0 ,@nvcErrorDescription=N'' ,@snErrorDescriptionOperator=0,@nvcURI=N'',@snURIOperator=0,@dtStartSuspend='1753-01-01 00:00:00', @dtEndSuspend='9999-12-31 23:59:59.997', @nvcAdapter=N'', @snAdapterOperator=0, @nGroupingCriteria=0, @nGroupingMinCount=0,@nMaxMatches=10,@uidAccessorID='*******',@nIsMasterMsgBox=0;

2) Bind the SQL Query result to a custom widget

You can create the custom widget and use the below code. Don't forget to include your environment details, like the credentials of the BizTalk360 service account, etc.

<div id="WidgetScroll" style="top:30px;" data-bind="addScrollBar: WidgetScroll, scrollCallback: 'false'">
<table class="table table-lists">
<thead>
<tr>
<th style="width:20%">Application Name</th>
<th style="width:20%">Instance Id</th>
<th style="width:20%">Service ID</th>
<th style="width:20%">Created Date</th>
<th style="width:20%">State</th>
</tr>
</thead>
<tbody>
<!-- ko if: (ServiceInstanceDetails()) -->
<!-- ko foreach: ServiceInstanceDetails() -->
<tr>
<td data-bind="text: nvcName"></td>
<td data-bind="text: uidInstanceID"></td>
<td data-bind="text: uidServiceID"></td>
<td data-bind="text: dtCreated"></td>
<td data-bind="text: nState"></td>
</tr>
<!-- /ko -->
<!-- /ko -->
</tbody>
</table>
</div>
<script>
// BEGIN User variables
username = ""; // BizTalk360 service account
password = ""; // Password of BizTalk360 service account
environmentId = ""; // BizTalk360 Environment ID (take from SSMS or API Documentation)
queryId = ""; // Id of the Secure SQL Query (take from SSMS)
queryName = ""; // Name of the Secure SQL Query as it is stored under Operations/Secure SQL Query
sqlInstance = ""; // SQL Instance against which the SQL Query must be executed
database = ""; // Database against which the SQL Query must be executed
sqlQuery = ""; // The Secure SQL Query created in step 1
bt360server = ""; // Name of the Server where biztalk360 is hosted
// END User variables

url = 'http://' + bt360server + '/BizTalk360/Services.REST/BizTalkGroupService.svc/ExecuteCustomSQLQuery';
ServiceInstanceDetails = ko.observable();

x2js = new X2JS({ attributePrefix: '', arrayAccessForm: "property", arrayAccessFormPaths: ["root.records.record"] });

ServiceInstanceDetailsList = function () {
var _this = this;
_this.getServiceInstanceDetails(function (data) {

var results = x2js.xml_str2json(data.queryResult);
if (Array.isArray(results.root.records.record)){
ko.utils.arrayForEach(results.root.records.record,function(item){

switch (item.nState){
case "1":
item.nState="Ready to Run";
break;
case "2":
item.nState="Active";
break;
}
});
_this.ServiceInstanceDetails(results.root.records.record);
}
else {
_this.ServiceInstanceDetails([results.root.records.record]);
}
});
};
getServiceInstanceDetails = function (callback) {
var _this = this;
$.ajax({
dataType: "json",
url: _this.url,
type: "POST",
contentType: "application/json",
username: _this.username,
password: _this.password,
data:
'{"context":{"environmentSettings":{"id":"' +
_this.environmentId +
'","licenseEdition":0},"callerReference":"REST-SAMPLE"},"query":{"id":"' +
_this.queryId +
'","name":"' +
_this.queryName +
'","sqlInstance":"' +
_this.sqlInstance +
'","database":"' +
_this.database +
'","sqlQuery":"' +
_this.sqlQuery +
'","isGlobal":false}}',
cache: false,
success: function (data) {
callback(data);
},
error: function (xhr, ajaxOptions, thrownError) {
alert(xhr.status);
alert(xhr.responseText);
},
});
};
ServiceInstanceDetailsList();
</script>

After you have created the custom widget and properly provided your environment details, it will look similar to the picture below.

We have written multiple articles about the capabilities of custom widgets, both on our blog and in the Documentation portal.

Conclusion

Keeping your environment healthy is extremely beneficial, and with this custom widget you can easily get clear insight into long-running service instance details in a single view. If you have a particular scenario in which custom widgets could be useful, but you don't know how to set this up, feel free to contact us at support@biztalk360.com.

Want to yank configuration values from your .NET Core apps? Here's how to store and access them in Azure and AWS.

Creating new .NET apps, or modernizing existing ones? If you’re following the 12-factor criteria, you’re probably keeping your configuration out of the code. That means not stashing feature flags in your web.config file, or hard-coding connection strings inside your classes. So where’s this stuff supposed to go? Environment variables are okay, but not a great choice; no version control or access restrictions. What about an off-box configuration service? Now we’re talking. Fortunately AWS, and now Microsoft Azure, offer one that’s friendly to .NET devs. I’ll show you how to create and access configurations in each cloud, and as a bonus, throw out a third option.

.NET Core has a very nice configuration system that makes it easy to read configuration data from a variety of pluggable sources. That means that for the three demos below, I’ve got virtually identical code even though the back-end configuration stores are wildly different.

AWS

Setting it up

AWS offers a parameter store as part of the AWS Systems Manager service. This service is designed to surface information and automate tasks across your cloud infrastructure. While the parameter store is useful to support infrastructure automation, it’s also a handy little place to cram configuration values. And from what I can tell, it’s free to use.

To start, I went to the AWS Console, found the Systems Manager service, and chose Parameter Store from the left menu. From here, I could see, edit or delete existing parameters, and create new ones.

Each parameter gets a name and value. For the name, I used a “/” to define a hierarchy. The parameter type can be a string, list of strings, or encrypted string.

The UI was smart enough that when I went to go add a second parameter (/seroterdemo/properties/awsvalue2), it detected my existing hierarchy.
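
That left me with a pair of hierarchical parameters; the second one as named above, and the first one presumably matching the property name in the Settings class below:

/seroterdemo/properties/awsvalue
/seroterdemo/properties/awsvalue2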

Ok, that’s it. Now I was ready to use it my .NET Core web app.

Using from code

Before starting, I installed the AWS CLI. I tried to figure out where to pass credentials into the AWS SDK, and stumbled upon some local introspection that the SDK does. Among other options, it looks for files in a local directory, and those files get created for you when you install the AWS CLI. Just a heads up!

I created a new .NET Core MVC project, and added the Amazon.Extensions.Configuration.SystemsManager package. Then I created a simple “Settings” class that holds the configuration values we’ll get back from AWS.

public class Settings
{
    public string awsvalue { get; set; }
    public string awsvalue2 { get; set; }
}

In the appsettings.json file, I told my app which AWS region to use.

{
  "Logging": {
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "AllowedHosts": "*",
  "AWS": {
    "Profile": "default",
    "Region": "us-west-2"
  }
}

In the Program.cs file, I updated the web host to pull configurations from Systems Manager. Here, I’m pulling settings that start with /seroterdemo.

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration(builder =>
            {
                builder.AddSystemsManager("/seroterdemo");
            })
            .UseStartup<Startup>();
}

Finally, I wanted to make my configuration properties available to my app code. So in the Startup.cs file, I grabbed the configuration properties I wanted, inflated the Settings object, and made it available to the runtime container.

public void ConfigureServices(IServiceCollection services)
{
    services.Configure<Settings>(Configuration.GetSection("properties"));

    services.Configure<CookiePolicyOptions>(options =>
    {
        options.CheckConsentNeeded = context => true;
        options.MinimumSameSitePolicy = SameSiteMode.None;
    });
}

Last step? Accessing the configuration properties! In my controller, I defined a private variable that would hold a local reference to the configuration values, pulled them in through the constructor, and then grabbed out the values in the Index() operation.

private readonly Settings _settings;

public HomeController(IOptions<Settings> settings)
{
    _settings = settings.Value;
}

public IActionResult Index()
{
    ViewData["configval"] = _settings.awsvalue;
    ViewData["configval2"] = _settings.awsvalue2;

    return View();
}

After updating my View to show the two properties, I started up my app. As expected, the two configuration values showed up.
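
The view markup is trivial; something along these lines in Views/Home/Index.cshtml does the trick:

<!-- Render the two values pulled from the AWS parameter store -->
<p>@ViewData["configval"]</p>
<p>@ViewData["configval2"]</p>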

What I like

You gotta like that price! AWS Systems Manager is available at no cost, and there appears to be no cost to the parameter store. Wicked.

Also, it’s cool that you have an easily-visible change history. You can see below that the audit trail shows what changed for each version, and who changed it.

The AWS team built this extension for .NET Core, and they added capabilities for reloading parameters automatically. Nice touch!

Microsoft Azure

Setting it up

Microsoft just shared the preview release of the Azure App Configuration service. This managed service is specifically created to help you centralize configurations. It’s brand new, but seems to be in pretty good shape already. Let’s take it for a spin.

From the Microsoft Azure Portal, I searched for “configuration” and found the preview service.

I named my resource seroter-config, picked a region and that was it. After a moment, I had a service instance to mess with. I quickly added two key-value combos.

That was all I needed to do to set this up.

Using from code

I created another new .NET Core MVC project and added the Microsoft.Extensions.Configuration.AzureAppConfiguration package. Once again I created a Settings class to hold the values that I got back from the Azure service.

public class Settings
{
    public string azurevalue1 { get; set; }
    public string azurevalue2 { get; set; }
}

Next up, I updated my Program.cs file to read the Azure App Configuration. I passed the connection string in here, but there are better ways available.

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((hostingContext, config) =>
            {
                var settings = config.Build();
                config.AddAzureAppConfiguration("[con string]");
            })
            .UseStartup<Startup>();
}
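
One of those better ways: read the connection string from an environment variable instead of hard-coding it. A small sketch (the variable name APP_CONFIG_CONNECTION is made up):

// Inside the same ConfigureAppConfiguration lambda, pull the connection
// string from an environment variable rather than embedding it in code.
var connectionString = Environment.GetEnvironmentVariable("APP_CONFIG_CONNECTION");
config.AddAzureAppConfiguration(connectionString);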

I also updated the ConfigureServices() operation in my Startup.cs file. Here, I chose to only pull configurations that started with seroterdemo:properties.

public void ConfigureServices(IServiceCollection services)
{
    // added
    services.Configure<Settings>(Configuration.GetSection("seroterdemo:properties"));

    services.Configure<CookiePolicyOptions>(options =>
    {
        options.CheckConsentNeeded = context => true;
        options.MinimumSameSitePolicy = SameSiteMode.None;
    });
}

To read those values in my controller, I’ve got just about the same code as in the AWS example. The only difference was what I called my class members!

private readonly Settings _settings;

public HomeController(IOptions<Settings> settings)
{
    _settings = settings.Value;
}

public IActionResult Index()
{
    ViewData["configval"] = _settings.azurevalue1;
    ViewData["configval2"] = _settings.azurevalue2;

    return View();
}

I once again updated my View to print out the configuration values, and not shockingly, it worked fine.

What I like

For a new service, there’s a few good things to like here. The concept of labels is handy, as it lets me build keys that serve different environments. See here that I created labels for “qa” and “dev” on the same key.

I saw a “compare” feature which looks handy. There’s also a simple search interface here too, which is valuable.

Pricing isn’t yet available, no I’m not clear as to how I’d have to pay for this.

Spring Cloud Config

Setting it up

Both of the above services are quite nice. And super convenient if you're running in those clouds. You might also want a portable configuration store that offers its own pluggable backing engines. Spring Cloud Config makes it easy to build a config store backed by a file system, git, GitHub, HashiCorp Vault, and more. It's accessible via HTTP/S, supports encryption, is fully open source, and much more.

I created a new Spring project from start.spring.io. I chose to include the Spring Cloud Config Server and generate the project.

Literally all the code required is a single annotation (@EnableConfigServer).

@EnableConfigServer
@SpringBootApplication
public class SpringBlogConfigServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringBlogConfigServerApplication.class, args);
    }
}

In my application properties, I pointed my config server to the location of the configs to read (my GitHub repo), and set which port to start up on.

server.port=8888
spring.cloud.config.server.encrypt.enabled=false
spring.cloud.config.server.git.uri=https://github.com/rseroter/spring-demo-configs

My GitHub repo has a configuration file called blogconfig.properties. Judging by the Settings class used below, its content is along these lines (placeholder values):

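property1=value1
property2=value2
property3=value3
property4=value4
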
With that, I started up the project, and had a running configuration server.

Using from code

To talk to this configuration store from my .NET app, I used the increasingly-popular Steeltoe library. These packages, created by Pivotal, bring microservices patterns to your .NET (Framework or Core) apps.

For the last time, I created a .NET Core MVC project. This time I added a dependency to Steeltoe.Extensions.Configuration.ConfigServerCore. Again, I added a Settings class to hold these configuration properties.

public class Settings
{
    public string property1 { get; set; }
    public string property2 { get; set; }
    public string property3 { get; set; }
    public string property4 { get; set; }
}

In my appsettings.json, I set my application name (to match the name of the config file I want to access) and the URI of the config server.

{
  "Logging": {
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "AllowedHosts": "*",
  "spring": {
    "application": {
      "name": "blogconfig"
    },
    "cloud": {
      "config": {
        "uri": "http://localhost:8888"
      }
    }
  }
}

My Program.cs file has a "using" statement for the Steeltoe.Extensions.Configuration.ConfigServer package, and then uses the "AddConfigServer" operation to add the config server as a configuration source.

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .AddConfigServer()
            .UseStartup<Startup>();
}

I once again updated the Startup.cs file to load the target configurations into my typed object.

public void ConfigureServices(IServiceCollection services)
{
    services.Configure<CookiePolicyOptions>(options =>
    {
        options.CheckConsentNeeded = context => true;
        options.MinimumSameSitePolicy = SameSiteMode.None;
    });

    services.Configure<Settings>(Configuration);
}

My controller pulled the configuration object, and I used it to yank out values to share with the View.

public HomeController(IOptions<Settings> mySettings)
{
    _mySettings = mySettings.Value;
}

Settings _mySettings { get; set; }

public IActionResult Index()
{
    ViewData["configval"] = _mySettings.property1;
    return View();
}

Updating the view and starting the .NET Core app yielded the expected results.

What I like

Spring Cloud Config is a very mature OSS project. You can deliver this sort of microservices machinery along with your apps in your CI/CD pipelines — these components are software that you ship versus services that need to be running — which is powerful. It offers a variety of backends, OAuth2 for security, encryption/decryption of values, and much more. It’s a terrific choice for a consistent configuration store on every infrastructure.

But realistically, I don't care which of the above you use. Just use something to extract environment-specific configuration settings from your .NET apps. Use these robust external stores to establish some rigor around these values, make it easier to share configurations, and keep them in sync across all of your application instances.
