Introducing BizTalk360 v8.9.5

We constantly aim to improve our product, based on the feedback and business needs of our customers. Keeping that in mind, we add some significant features in every single release. In this blog, you can get to know all the exciting features we have shipped in our latest v8.9.5 release.

Quick Alarm 

Are you new to BizTalk360? Now you can easily set up monitoring, in just a single click, by using a Quick Alarm.

The Quick Alarm feature has been brought into the product to give the user a unified experience to easily set up monitoring. A Quick Alarm is a consolidated alarm which selects a few artefacts from the BizTalk Applications and maps them for monitoring, under various categories (Threshold, Data Monitoring, Health Check).

Quick Alarms can be configured from the Monitoring Dashboard and from the Manage Alarms section, just by providing an alarm name and email address. This configuration also takes care of the SMTP setup, if you have not yet configured it in your environment. So once a Quick Alarm has been configured, you will be notified through email if any monitoring violation happens.

Send Port Group Operations

One of the powerful features in BizTalk Server is Send Port Groups, which help to route a single message to more than one destination. Send Port Group operations and monitoring was one such important feature we did not yet include in BizTalk360. At one point, we received feedback from one of our customers that they were not able to fully start/stop their applications from BizTalk360 when they had Send Port Groups configured. Hence, we have taken it up for v8.9.5.

Now, you can perform operations such as start/stop/enlist/unenlist on Send Port Groups from BizTalk360. With this new capability, it is possible to manage the operation of BizTalk applications (Start & Stop) for all the artifacts of each application in BizTalk360. This implementation makes life easier for BizTalk support engineers, without context switching between the BizTalk Admin console and BizTalk360.

One of our main objectives from the security perspective is auditing. We have leveraged the governance and audit capability to capture all the Send Port Group activities performed by users within BizTalk360.

Send Port Group Monitoring

From the latest version, you can completely manage and monitor your Send Port Groups from BizTalk360. Create an alarm and map the Send Port Groups for monitoring by setting up the expected state. If any threshold violation occurs, you will be notified. You can also set up auto-healing for this monitoring; once a Send Port Group goes down, BizTalk360 will try to auto-heal it back to the expected state.

Alarm Auto Reset

We allow users to configure the number of alerts to be triggered within a stipulated duration if any threshold violation happens. When the configured alert count reaches its limit, it cannot be reset unless the threshold violation resolves or the user resets it manually. This is an impacting issue for customers who monitor several artefacts under the same alarm. In this scenario, when artefact 1 goes down, you will get notified with down alerts at a periodic interval, based on your configuration. Later, no further notification will be sent when any other artefact goes down, since the number of alerts has already reached its maximum count. This problem is now resolved with the Alarm Auto Reset functionality, in which the alert counter is automatically reset after a configured time. So, there is no chance of missing alerts when your artefacts are going down.

Auto Correct Reset

Auto healing is one of the prominent features available in BizTalk360. When you configure auto correct for your artefacts, the system will always try to maintain the expected state which you have set for those artefacts. When the state of an artefact changes, the system tries to bring it back to the expected state. Here, the maximum number of retries is based on the user configuration. So, what happens when it reaches the max retry count? It will no longer try to auto correct the state. To overcome this problem, we have introduced a new capability called Auto Reset. From this version on, the retry count will get automatically reset as per the user configuration (e.g.: 10 mins).

So when you configure Auto-healing for your artefacts, with this new capability the system is guaranteed to always try to maintain the expected state, without any downtime.

Delete ESB Fault Message

With the ESB Fault Delete functionality, you can delete the fault messages which are not important for resubmission.

Let us consider this scenario: a send port that you are using in your application fails unexpectedly, so both the service instance and the message become suspended and the fault information is written into the ESB exception database. Once the exception is corrected and the message is resubmitted for further processing, two scenarios can come into the picture:

  1. Messages which are submitted successfully are residing in the ESB Exception database and are of no further use.
  2. Messages are rerouted to the ESB Exception database due to recurrent failure. In this specific case, the original message is also available in the ESB portal.

In BizTalk360, we have introduced a new option, “Delete”, to clear the messages which are not required anymore. Additionally, all delete activities are audited in both the Governance & Auditing and the Live Feed sections, to ensure security.

Custom User Profile Templates

When we work on cutting-edge technology, security is one of the important factors we always need to consider. When a new user is created, you can provide access to the corresponding features based on the roles of the user. Consider a scenario where a customer needs to provide similar permissions to multiple users. It could be very time-consuming to make the changes for every user. BizTalk360 has made this process very easy by introducing the capability to create custom profile templates. You can create a custom template with the desired features selected and choose that template while creating the users.

Few Enhancements and Bug Fixes

Besides these new features, we have also brought a number of enhancements and bug fixes.

Filter Operators Improvements

BizTalk360 has the capability to query the data from various databases to show important data such as Service Instance details, Tracked Message/Service details, ESB and so on. To filter the data from the corresponding database, BizTalk360 provides rich Query Builder capabilities. From this version on, we have added a few additional operators, in line with the BizTalk Admin console, for easy access to the data in both the Operations and Data Monitoring sections.

Notification Channel Improvements

Associated Alarm

Imagine a user wants to know to which of all the configured alarms in the environment a specific notification channel is mapped. This was a tedious task in earlier versions of BizTalk360.

To get a quick overview of the notification channel association with alarms, we have brought the new option “Associated Alarms” to the Settings side, to view all the mapped alarms in a single view.

DLL Reconfiguration

There was no option to reconfigure a notification channel if any changes were made to the existing notification channel DLL; you had to change all the properties in all the alarms where the specified notification channel was mapped. Now, the reconfiguration process is seamless from the UI, without touching the database.

License Expiry Reminder

To provide insight into the type of license and its date of expiry, a license expiry notification will be shown in the UI for all license types (Commercial, Partner & Product Specialist).

We have closed around 24 support tickets as part of fixing issues in different areas. Please refer to the Release Notes – https://docs.biztalk360.com/docs/v8951122201

Conclusion

Why not give BizTalk360 a try? It takes about 10 minutes to install on your BizTalk environment, and you can witness and check the security and productivity of your own BizTalk environments. Get started with the free 30-day trial.

XLVIII Porto.Data Community Meeting | January 30, 2019 | How we use Microsoft Flow and PowerApps: Real cases scenarios

This post is for the Portuguese communities (Office 365, Integration, and others). The XLVIII Porto.Data Community meeting will be held on January 30, 2019, between 18:50 and 20:30, at the Science and Technology Park of University of Porto (UPTEC) in Oporto.

For me it is once again a pleasure to return to this community, speaking once more about PowerApps and Microsoft Flow, this time showing real case scenarios in a talk with the following title: “How we use Microsoft Flow and PowerApps: Real cases scenarios”.

Abstract

We know that any business problem can be solved with a variety of technologies and different solutions. However, developing that type of solution has traditionally been too costly and time-consuming for many of the needs teams and departments face, especially for projects that are for internal use within organizations, or only needed for a short time period. As a result, many of these projects or solutions stay on the shelf, or only in the imagination of the collaborators.

They are in Dynamics 365, Office 365, on premises, on the cloud… they are everywhere, and they are fantastic! Developers can do it; IT can do it… you can do it!

Microsoft Flow and PowerApps, sometimes together, sometimes in isolation, are here to help you, and in this session, we will show you real-life scenarios of how we use these two technologies at our customers and internally at DevScope.

XLVIII Porto.Data Agenda

  • 18:50 – Welcome reception
  • 19:00 – “How we use Microsoft Flow and PowerApps: Real cases scenarios” – Sandro Pereira – Azure MVP – DevScope
  • 20:00 – Coffee break / Q&A / Community News
  • 20:15 – Closure
  • 20:20 – Prize draw
  • 20:30 – Dinner (optional)

This is a free event with very limited seats that you will not want to miss, so register now!

We are waiting for you.

Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc. He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community.

New Office365 icons are now included in Microsoft Integration (Azure and much more) Stencils Pack v3.1.1 for Visio

What started as a Microsoft Integration Stencils Pack is now almost a full Microsoft stack stencil package for Visio, which includes Microsoft Integration, Azure, BAPI, Office 365, devices, products, competing technologies or partners, and much more.

This package contains fully resizable Visio shapes (symbols/icons) that will help you to visually represent on-premises, cloud or hybrid Integration and Enterprise architecture scenarios (BizTalk Server, API Management, Logic Apps, Service Bus, Event Hub…), solution diagrams, and features or systems that use Microsoft Azure and related cloud and on-premises technologies, in Visio 2016/2013:

  • BizTalk Server
  • Microsoft Azure
    • Azure App Service (API Apps, Web Apps, Mobile Apps, and Logic Apps)
    • Event Hubs, Event Grid, Service Bus, …
    • API Management, IoT, and Docker
    • Machine Learning, Stream Analytics, Data Factory, Data Pipelines
    • and so on
  • Microsoft Flow
  • PowerApps
  • Power BI
  • PowerShell
  • Infrastructure, IaaS
  • Office 365
  • And many more

This new small update includes the new Office 365 icons that were recently announced by Microsoft. It adds 19 new shapes and includes some reorganization.

New Office365 Stencils

The Microsoft Integration Stencils Pack v3.1.1 is composed of 22 files:

  • Microsoft Integration Stencils v3.1.0
  • MIS Additional or Support Stencils v3.1.0
  • MIS Apps and Systems Logo Stencils v3.1.0
  • MIS AI Stencils v3.1.0
  • MIS Azure Additional or Support Stencils v3.1.0
  • MIS Azure Others Stencils v3.1.0
  • MIS Azure Stencils v3.1.0
  • MIS Buildings Stencils v3.1.0
  • MIS Databases Stencils v3.1.0
  • MIS Deprecated Stencils v3.1.0
  • MIS Developer Stencils v3.1.0
  • MIS Devices Stencils v3.1.0
  • MIS Files Stencils v3.1.0
  • MIS Generic Stencils v3.1.0
  • MIS Infrastructure Stencils v3.1.0
  • MIS Integration Patterns Stencils v3.1.0
  • MIS IoT Devices Stencils v3.1.0
  • MIS Office365 v3.1.1
  • MIS Power BI Stencils v3.1.0
  • MIS PowerApps and Flows Stencils v3.1.1
  • MIS Servers (HEX) Stencils v3.1.0
  • MIS Users and Roles Stencils v3.1.0

You can download Microsoft Integration, Azure, BAPI, Office 365 and much more Stencils Pack for Visio from:
Microsoft Integration, Azure, BAPI, Office 365 and much more Stencils Pack for Visio (18,6 MB)
GitHub

Or from:
Microsoft Integration and Azure Stencils Pack for Visio 2016/2013 v3.1.1 (18,6 MB)
Microsoft | TechNet Gallery

Author: Sandro Pereira

Microsoft Integration Weekly Update: Jan 28, 2019

Do you find it difficult to keep up to date on all the frequent updates and announcements in the Microsoft Integration platform?

The Integration weekly update can be your solution. It's a weekly update on topics related to Integration: enterprise integration, robust & scalable messaging capabilities, and Citizen Integration capabilities, empowered by the Microsoft platform to deliver value to the business.

If you want to receive these updates weekly, then don’t forget to Subscribe!

Feedback

Hope this is helpful. Please feel free to reach out to me with your feedback and questions.

Go “multi-cloud” while *still* using unique cloud services? I did it using Spring Boot and MongoDB APIs.

What do you think of when you hear the phrase “multi-cloud”? Ok, besides stupid marketing people and their dumb words. You might think of companies with on-premises environments who are moving some workloads into a public cloud. Or those who organically use a few different clouds, picking the best one for each workload. While many suggest that you get the best value by putting everything on one provider, that clearly isn’t happening yet. And maybe it shouldn’t. Who knows. But can you get the best of each cloud while retaining some portability? I think you can.

One multi-cloud solution is to do the lowest-common-denominator thing. I really don’t like that. Multi-cloud management tools try to standardize cloud infrastructure but always leave me disappointed. And avoiding each cloud’s novel services in the name of portability is unsatisfying and leaves you at a competitive disadvantage. But why should we choose the cloud (Azure! AWS! GCP!) and runtime (Kubernetes! VMs!) before we’ve even written a line of code? Can’t we make those into boring implementation details, and return our focus to writing great software? I’d propose that with good app frameworks, and increasingly-standard interfaces, you can create great software that runs on any cloud, while still using their novel services.

In this post, I’ll build a RESTful API with Spring Boot and deploy it, without code changes, to four different environments, including:

  1. Local environment running MongoDB software in a Docker container.
  2. Microsoft Azure Cosmos DB with MongoDB interface.
  3. Amazon DocumentDB with MongoDB interface.
  4. MongoDB Enterprise running as a service within Pivotal Cloud Foundry.

Side note: Ok, so multi-cloud sounds good, but it seems like a nightmare of ops headaches and nonstop dev training. That’s true, it sure can be. But if you use a good multi-cloud app platform like Pivotal Cloud Foundry, it honestly makes the dev and ops experience virtually the same everywhere. So, it doesn’t HAVE to suck, although there are still going to be challenges. Ideally, your choice of cloud is a deploy-time decision, not a design-time constraint.

Creating the app

In my career, I’ve coded (poorly) with .NET, Node, and Java, and I can say that Spring Boot is the fastest way I’ve seen to build production-quality apps. So, I chose Spring Boot to build my RESTful API. This API stores and returns information about cloud databases. HOW VERY META. I chose MongoDB as my backend database, and used the amazing Spring Data to simplify interactions with the data source.

From start.spring.io, I created a project with dependencies on spring-boot-starter-data-rest (auto-generated REST endpoints for interacting with databases), spring-boot-starter-data-mongodb (to talk to MongoDB), spring-boot-starter-actuator (for “free” health metrics), and spring-cloud-cloudfoundry-connector (to pull connection details from the Cloud Foundry environment).
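
For anyone who wants to script that step, the Spring Initializr site also accepts plain HTTP requests. A rough sketch (the dependency identifiers are my best guess at the start.spring.io ones, and the Cloud Foundry connector may still need to be added to the pom by hand):

# generate a project zip from the command line instead of the web UI
curl https://start.spring.io/starter.zip \
  -d artifactId=cloudmongodb \
  -d dependencies=data-rest,data-mongodb,actuator \
  -o cloudmongodb.zip

Then I opened the project and created a new Java class representing a CloudProvider.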

package seroter.demo.cloudmongodb;

import org.springframework.data.annotation.Id;

public class CloudProvider {
        
        @Id private String id;
        
        private String providerName;
        private Integer numberOfDatabases;
        private Boolean mongoAsService;
        
        public String getProviderName() {
                return providerName;
        }
        
        public void setProviderName(String providerName) {
                this.providerName = providerName;
        }
        
        public Integer getNumberOfDatabases() {
                return numberOfDatabases;
        }
        
        public void setNumberOfDatabases(Integer numberOfDatabases) {
                this.numberOfDatabases = numberOfDatabases;
        }
        
        public Boolean getMongoAsService() {
                return mongoAsService;
        }
        
        public void setMongoAsService(Boolean mongoAsService) {
                this.mongoAsService = mongoAsService;
        }
}

Thanks to Spring Data REST (which is silly powerful), all that was left was to define a repository interface. If all I did was create and annotate the interface, I'd get full CRUD interactions with my MongoDB collection. But for fun, I also added an operation that would return all the clouds that did (or did not) offer a MongoDB service.

package seroter.demo.cloudmongodb;

import java.util.List;

import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

@RepositoryRestResource(collectionResourceRel = "clouds", path = "clouds")
public interface CloudProviderRepository extends MongoRepository<CloudProvider, String> {
        
        //add an operation to search for a specific condition
        List<CloudProvider> findByMongoAsService(Boolean mongoAsService);
}

That’s literally all my code. Crazy.

Run using Dockerized MongoDB

To start this test, I wanted to use “real” MongoDB software. So I pulled the popular Docker image and started it up on my local machine:

docker run -d -p 27017:27017 --name serotermongo mongo

When starting up my Spring Boot app, I could provide database connection info either (1) in an application.properties file, or (2) as input parameters that require nothing in the compiled code package itself. I chose the file option for readability and demo purposes, which looked like this:

#local configuration
spring.data.mongodb.uri=mongodb://0.0.0.0:27017
spring.data.mongodb.database=demodb

#port configuration
server.port=${PORT:8080}

After starting the app, I issued a base request to my API via Postman. Sure enough, I got a response. As expected, no data in my MongoDB database. Note that Spring Data automatically creates a database if it doesn’t find the one specified, so the “demodb” now existed.
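
For anyone following along without Postman, the equivalent request is a simple GET against the /clouds path that the @RepositoryRestResource annotation defines; the empty HAL-style response sketched below is abbreviated:

curl http://localhost:8080/clouds
# returns roughly: { "_embedded" : { "clouds" : [ ] }, "_links" : { ... } }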

I then issued a POST command to add a record to MongoDB, and that worked great too. I got back the URI for the new record in the response.
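
A hedged reconstruction of that POST (the JSON fields mirror the CloudProvider class; the values are just sample data):

curl -X POST http://localhost:8080/clouds \
  -H "Content-Type: application/json" \
  -d '{ "providerName": "Microsoft Azure", "numberOfDatabases": 10, "mongoAsService": true }'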

I also tried calling that custom “search” interface to filter the documents where “mongoAsService” is true. That worked.
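
Spring Data REST exposes derived finders like that one under the /search path, so the call should look something like this (assuming the default exposure of derived query methods):

curl "http://localhost:8080/clouds/search/findByMongoAsService?mongoAsService=true"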

So, running my Spring Boot REST API with a local MongoDB worked fine.

Run using Microsoft Azure Cosmos DB

Next up, I pointed this application to Microsoft Azure. One of the many databases in Azure is Cosmos DB. This underrated database offers some pretty amazing performance and scale, and is only available from Microsoft in their cloud. NO PROBLEM. It serves up a handful of standard interfaces, including Cassandra and MongoDB. So I can take advantage of all the crazy-great hosting features, but not lock myself into any of them.

I started by visiting the Microsoft Azure portal. I chose to create a new Cosmos DB instance, and selected which API (SQL, Cassandra, Gremlin, MongoDB) I wanted.

After a few minutes, I had an instance of Cosmos DB. If I had wanted to, I could have created a database and collection from the Azure portal, but I wanted to confirm that Spring Data would do it for me automatically.

I located the “Connection String” properties for my new instance, and grabbed the primary one.

With that in hand, I went back to my application.properties file, commented out my “local” configuration, and added entries for the Azure instance.

#local configuration
#spring.data.mongodb.uri=mongodb://0.0.0.0:27017
#spring.data.mongodb.database=demodb

#port configuration
server.port=${PORT:8080}

#azure cosmos db configuration
spring.data.mongodb.uri=mongodb://seroter-mongo:<password>@seroter-mongo.documents.azure.com:10255/?ssl=true&replicaSet=globaldb
spring.data.mongodb.database=demodb

I could publish this app to Azure, but because it’s also easy to test it locally, I just started up my Spring Boot REST API again, and pinged the database. After POSTing a new record to my endpoint, I checked the Azure portal and sure enough, saw a new database and collection with my “document” in it.

Here, I’m using a super-unique cloud database but don’t need to manage my own software to remain “portable”, thanks to Spring Boot and MongoDB interfaces. Wicked.

Run using Amazon DocumentDB

Amazon DocumentDB is the new kid in town. I wrote up an InfoQ story about it, which frankly inspired me to try all this out.

Like Azure Cosmos DB, this database isn’t running MongoDB software, but offers a MongoDB-compatible interface. It also offers some impressive scale and performance capabilities, and could be a good choice if you’re an AWS customer.

For me, trying this out was a bit of a chore. Why? Mainly because the database service is only accessible from within an AWS private network. So, I had to properly set up a Virtual Private Cloud (VPC) network and get my Spring Boot app deployed there to test out the database. Not rocket science, but something I hadn’t done in a while. Let me lay out the steps here.

First, I created a new VPC. It had a single public subnet, and I added two more private ones. This gave me three total subnets, each in a different availability zone.

Next, I switched to the DocumentDB console in the AWS portal. First, I created a new subnet group. Each DocumentDB cluster is spread across AZs for high availability. This subnet group contains both the private subnets in my VPC.

I also created a parameter group. This group turned off the requirement for clients to use TLS. I didn’t want my app to deal with certs, and also wanted to mess with this capability in DocumentDB.

Next, I created my DocumentDB cluster. I chose an instance class to match my compute and memory needs. Then I chose a single instance cluster; I could have chosen up to 16 instances of primaries and replicas.

I also chose my pre-configured VPC and the DocumentDB subnet group I created earlier. Finally, I set my parameter group, and left default values for features like encryption and database backups.
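
I did all of this in the console, but I believe the same setup could be scripted roughly along these lines with the AWS CLI (all identifiers, subnet IDs, and the instance class below are illustrative, and I've left out the parameter group step):

# subnet group spanning the private subnets
aws docdb create-db-subnet-group \
  --db-subnet-group-name my-docdb-subnets \
  --db-subnet-group-description "private subnets for DocumentDB" \
  --subnet-ids subnet-aaaa1111 subnet-bbbb2222

# the cluster itself
aws docdb create-db-cluster \
  --db-cluster-identifier my-docdb-cluster \
  --engine docdb \
  --master-username seroter \
  --master-user-password '<password>' \
  --db-subnet-group-name my-docdb-subnets

# a single instance in that cluster
aws docdb create-db-instance \
  --db-instance-identifier my-docdb-instance \
  --db-instance-class db.r4.large \
  --engine docdb \
  --db-cluster-identifier my-docdb-cluster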

After a few minutes, my cluster and instance were up and running. While this console doesn’t expose the ability to create databases or browse data, it does show me health metrics and cluster configuration details.

Next, I took the connection string for the cluster, and updated my application.properties file.

#local configuration
#spring.data.mongodb.uri=mongodb://0.0.0.0:27017
#spring.data.mongodb.database=demodb

#port configuration
server.port=${PORT:8080}

#azure cosmos db configuration
#spring.data.mongodb.uri=mongodb://seroter-mongo:<password>@seroter-mongo.documents.azure.com:10255/?ssl=true&replicaSet=globaldb
#spring.data.mongodb.database=demodb

#aws documentdb configuration
spring.data.mongodb.uri=mongodb://seroter:<password>@docdb-2019-01-27-00-20-22.cluster-cmywqx08yuio.us-west-2.docdb.amazonaws.com:27017
spring.data.mongodb.database=demodb

Now to deploy the app to AWS. I chose Elastic Beanstalk as the application host. I selected Java as my platform, and uploaded the JAR file associated with my Spring Boot REST API.

I had to set a few more parameters for this app to work correctly. First, I set a SERVER_PORT environment variable to 5000, because that’s what Beanstalk expects. Next, I ensured that my app was added to my VPC, provisioned a public IP address, and chose to host on the public subnet. Finally, I set the security group to the default one for my VPC. All of this should ensure that my app is on the right network with the right access to DocumentDB.
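
For what it's worth, the environment variable piece can also be set from the EB CLI (a sketch, assuming the eb tooling is installed and initialized; SERVER_PORT works here because Spring Boot's relaxed binding maps it onto server.port, overriding the value in application.properties):

eb setenv SERVER_PORT=5000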

After the app was created in Beanstalk, I queried the endpoint of my REST API. Then I created a new document, and yup, it was added successfully.

So again, I used a novel, interesting cloud-only database, but didn’t have to change a lick of code.

Run using MongoDB in Pivotal Cloud Foundry

The last place to try this app out? A multi-cloud platform like PCF. If you did use something like PCF, the compute layer is consistent regardless of what public/private cloud you use, and connectivity to data services is through a Service Broker. In this case, MongoDB clusters are managed by PCF, and I get my own cluster via a Broker. Then my apps “bind” to that cluster.

First up, provisioning MongoDB. PCF offers MongoDB Enterprise from Mongo themselves. To a developer, this looks like a database-as-a-service, because clusters are provisioned, optimized, backed up, and upgraded via automation. Via the command line or portal, I could provision clusters. I used the portal to get myself a happy little instance.

After giving the service a name, I was set. As with all the other examples, no code changes were needed. I actually removed any MongoDB-related connection info from my application.properties file, because the spring-cloud-cloudfoundry-connector dependency grabs the credentials from the environment variables set by the service broker.

One thing I *did* create for this environment — which is entirely optional — is a Cloud Foundry manifest file. I could pass these values into a command line instead of creating a declarative file, but I like writing them out. These properties simply tell Cloud Foundry what to do with my app.

---
applications:
- name: boot-seroter-mongo
  memory: 1G
  instances: 1
  path: target/cloudmongodb-0.0.1-SNAPSHOT.jar
  services:
  - seroter-mongo

With that, I jumped to a terminal, navigated to a directory holding that manifest file, and typed cf push. About 25 seconds later, I had a containerized, reachable application that connected to my MongoDB instance.
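
For reference, the push plus a quick sanity check look like this from the cf CLI (assuming you are already logged in and targeting the right org and space):

cf push          # reads manifest.yml from the current directory
cf apps          # confirm boot-seroter-mongo is running
cf services      # confirm the seroter-mongo service instance is bound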

Fortunately, PCF treats Spring Boot apps specially, so it used the Spring Boot Actuator to pull health metrics and more. For each instance, I could see extra health information for my app and for MongoDB itself.

Once again, I sent some GET requests into my endpoint, saw the expected data, did a POST to create a new document, and saw that succeed.

Wrap Up

Now, obviously there are novel cloud services without “standard” interfaces like the MongoDB API. Some of these services are IoT, mobile, or messaging related —although Azure Event Hubs has a Kafka interface now, and Spring Cloud Stream keeps message broker details out of the code. Other unique cloud services are in emerging areas like AI/ML where standardization doesn’t really exist yet. So some applications will have a hard coupling to a particular cloud, and of course that’s fine. But increasingly, where you run, how you run, and what you connect to, doesn’t have to be something you choose up front. Instead, first you build great software. Then, you choose a cloud. And that’s pretty cool.

Notes From The Road: 2018 Year In Review

Last year I didn’t have the opportunity to deliver 16 sessions across the world like 2017 because I had to slow down my trips, or I didn’t participate as I would or did in the past in some communities and I don’t regret a single second. If I have to describe 2018 in a single word I have to choose: family.

I don’t have any difficulty in selecting the best moments, my baby boy born and I move my family to a bigger house because we now have 3 kids, two beautiful girls, like there mother and of course a mini-me.

So, as you may imagine, I struggle to find free time, but when you love what you do, instead of finding excuses you will always find a way to do it. So in the end, I was able to:

  • Publish 70 new posts on my blog; I wrote more blog posts than in 2017, and I only just realized that! The countries that most visited my blog are still the United States, followed by India, the United Kingdom, and Canada, out of a total of 194 countries

  • Publish 2 guest blog posts on Serverless360
  • Publish 8 guest blog posts on BizTalk360
  • Moreover, publish 2 whitepapers:

However, that's not all:

  • Deliver 5 speaking sessions:
    • Integration Monday | online | The birth of a new SSO Application Configuration Tool
    • XXXX Porto.Data meeting, March 27 | Porto | Automatization Platform with Dynamics 365 | Dynamics365, Microsoft Flow and PowerApps

    • CSI about Microsoft Flow and PowerApps

    • Integrate 2018 | London | BizTalk Server Notes from the Road

    • MVPDays Microsoft Flow Conference 2018 | online | How we use Microsoft Flow and PowerApps: Real cases scenarios
  • And migrate, and improve, almost all of my community projects to GitHub:

And this is just a small set of my contributions, because I also published several small scripts and code samples on TechNet Gallery and Code Gallery, and helped the community in other ways.

My favorite’s posts

The top posts that I most enjoyed writing, or that were the most fun to do, last year were:

Normally, people look at me and think that I'm a BizTalk guy, but I really do more than BizTalk Server. I only like to write about real stuff that I use daily, and indeed I had a lot of fun writing these blog posts about real-life cases.

And of course, to close this list:

Thanks for following me, for reading my blog and I promise that I will continue to share my knowledge during 2019.

Author: Sandro Pereira

User Access Profiles can now be customized

A few more days to go for the release of our new version, BizTalk360 v8.9.5. Release by release, we aim to improve the user experience by adding new features and enhancing existing features, as per the suggestions and feedback received from our customers. Feedback can be posted in our User Voice portal, http://feedback.biztalk360.com. There, customers can vote for posted suggestions if they feel the ideas also fit their business requirements. These suggestions will be taken up for development based on the priority of voting.

There have been quite a few features and enhancements added in this release. Let's look at one of the features which got the attention of many customers, and which we think will ease business operations for users of BizTalk360. So come on, let's jump in!

User Access Policies

After the installation of BizTalk360 and activation of the license, the next thing a user would do is create the users and groups, and provide access to the BizTalk360 features, in the User Access Policy section. This is available only for providing access to the features in BizTalk360, not for BizTalk level access. The importance of this feature can be learned from the article User Access Policy.

In our previous version, BizTalk360 v8.9, this feature was enhanced to configure different rules for providing access to the existing as well as newly created BizTalk applications for the users. Get to know about the different rules here “User Access Policy – The New Look for Application Access”.

As part of this enhancement, we introduced the concept of Application groups, for grouping applications of a similar category and giving users access to those Application groups. So, when a new application is added to such an Application Group, access will automatically be given to the users.

Now there is an additional feature of managing custom User Access templates, which is described in the later part of this article.

As part of this enhancement, user management, application groups and the custom templates are managed in different sections. This has been done to avoid confusing users. Hence, the existing look of the User Access Policy section has been modified as below.

Manage Application groups

The application groups can now be managed in a separate section. Here, application groups can be created, modified or deleted. It is now easy for customers to create application groups and manage them from a single section, rather than moving between screens.

Manage Users

This section remains the same as in previous versions; the users and NT groups can be created, modified or deleted. A modification has been made in the Add permissions section for normal users, to include the newly created custom profile templates.

The Profile Templates

Keeping in mind the level of security needed for accessing the features and performing operations against the BizTalk environment, we have fine-grained authorization within BizTalk360. When a new user is created, the permissions for the BizTalk360 features can be selected from the list of features under each section and saved to the user profile. When the new user logs in to BizTalk360, they can only use the features which are added to their profile.

In the earlier versions, the features were selected from the list available or from the predefined templates available. These predefined templates come as part of the installation. There are three predefined templates available:

  • View only modules – Choosing this template will provide read-only access to a few features in BizTalk360. This will be helpful for the Level 1 support team
  • Limited operation access – This provides access for users to operate on Host Instances, applications and service instances
  • Full access to all modules – As the name implies, the user will have access to all the features when this template is chosen

As per the feedback received from customers, we have added the new capability to create custom templates and select them when providing access to users.

How do the custom templates help?

Consider a scenario where the customer would need to provide similar permissions to multiple users. This can be done by creating a group and applying the profile to the group account. But what if we need to make some minor changes for some of the users? It would be very difficult to make changes for individual users within the group.

With the help of a custom profile template, this can be done in a single click. You can create a custom template with the desired features selected and choose that template while creating the users. Won't this be an easy and quick way of configuring permissions for users? Yes, of course!

Creating a custom template is also done very quickly: the custom template is created by providing a name for the template and selecting the features to be added to it. For each custom template, you can select permissions and store them for reuse.

This is similar to the Add permissions section in the user creation, the only difference being that the selected features can be saved to a template and used later when creating different users. The predefined templates are suffixed with “System Predefined” in the name.

With the custom profile templates, user creation is made much easier. The customer need not scroll through the entire list of features while creating users.

Conclusion

We always monitor the feedback portal and take up the suggestions and feedback. Now we would like to ask you, our customers, to please take the time to fill in this questionnaire, to help us prioritize the upcoming feature tasks, to let us know what your main pain points are, and to help us further improve the product.

Why not give BizTalk360 a try? It takes about 10 minutes to install on your BizTalk environment, and you can witness and check the security and productivity of your own BizTalk environments. Get started with the free 30-day trial. Happy monitoring with BizTalk360!

Author: Praveena Jayanarayanan

I am working as Senior Support Engineer at BizTalk360. I always believe in team work leading to success because “We all cannot do everything or solve every issue. ‘It’s impossible’. However, if we each simply do our part, make our own contribution, regardless of how small we may think it is…. together it adds up and great things get accomplished.”

Microsoft Integration Weekly Update: Jan 21, 2019

Do you find it difficult to keep up to date on all the frequent updates and announcements in the Microsoft Integration platform?

The Integration weekly update can be your solution. It's a weekly update on topics related to Integration: enterprise integration, robust & scalable messaging capabilities, and Citizen Integration capabilities, empowered by the Microsoft platform to deliver value to the business.

If you want to receive these updates weekly, then don’t forget to Subscribe!

Feedback

Hope this is helpful. Please feel free to reach out to me with your feedback and questions.

Easily set up monitoring with Quick Alarms

Are you new to BizTalk360? Now you can easily set up monitoring in just a single click!
The Monitoring capabilities are a key feature of BizTalk360. Setting up monitoring is a very easy and quick task: with a two-step process, you create an alarm and map the artefact(s) to be monitored. We wanted to make this even easier for our customers. So, keeping that in mind, we bring the new feature ‘Quick Alarm’ in our upcoming release, v8.9.5.

What is a Quick alarm?

A Quick Alarm is a consolidated alarm which selects a number of artefacts from the BizTalk Applications which are in the Started/Stopped/Partially Started state, and maps them for monitoring under various categories (Threshold, Data Monitoring, Health Check). A Quick Alarm maps the following resources for threshold monitoring:

  • Receive Locations
  • Send Ports
  • Orchestrations
  • Host Instances
  • Host Throttling
  • SQL Jobs
  • BizTalk Servers

For Data Monitoring (with a basic filter configuration), the Quick Alarm is configured for:

  • Process Monitoring
  • Message Box
  • Tracking
  • BAM
  • EDI
  • ESB

Is a Quick Alarm different from normal alarms?

No, a Quick Alarm is not at all different from normal BizTalk360 alarms. Quick Alarms have been brought into the product to give a unified experience to the user to easily set up monitoring. A Quick Alarm supports all the normal BizTalk360 alarm capabilities, like:

  • Reset notification count
  • Copy the alarm configurations
  • Change status – Enable/Disable the alarm
  • Once a Quick Alarm is configured, it can be edited or deleted at any time

A Quick Alarm can be configured in 2 ways

  1. From the Dashboard
    When no alarms have been configured in your environment, you can configure a Quick alarm either from the general Monitoring Dashboard or from the Data Monitoring Dashboard.

Quick Alarm Configuration

  2. From the Manage Alarms section
    You can also create a Quick Alarm from the Manage Alarms section by selecting the ‘Quick Alarm’ button, even when you have already configured some alarms.

Configure Quick Alarm From Manage Alarm Section

Note: There is no restriction on the number of Quick Alarms you create; you can create any number of Quick Alarms in your environment. However, no matter how many times you create a Quick Alarm with the same set of applications, it will always configure the same artefacts for monitoring.

Once a Quick Alarm has been configured, you will be notified through email:

  • if there is any threshold violation on the mapped artefacts
  • with data monitoring alerts every 15 minutes
  • with a health check alert at 12 PM every day

These configurations can be changed later at any point in time.

If BAM, EDI or ESB is not configured in the environment, the Quick Alarm configuration skips these and configures the rest of the resources for monitoring.

Who can create a Quick Alarm?

A Quick Alarm can be configured only by super users; this facility is restricted for normal users and NT Group users.

SMTP Configuration

If SMTP is not configured on the Settings side, the Quick Alarm configuration will automatically set up the default BizTalk360 SMTP account.

Note: These default settings can be modified at any time.
The Quick Alarm configuration will not modify/update anything if you have already configured your SMTP account; it uses your configured SMTP details for sending alerts.

Conclusion

A Quick Alarm is an easy and fast way to set up monitoring of the most important artifacts of your BizTalk environment. Are you tired of constantly having to monitor your BizTalk environment in a manual fashion? Give BizTalk360 a try and take advantage of Quick Alarms. A trial version of BizTalk360 can be requested here.

Azure AD Set Passwords to Not Expire

This blog post is more of a reminder for myself than anything else. I had a need to mark some service accounts in Azure AD so that their passwords don't expire.

The aim was that we had a few service accounts used in a couple of places, and we wanted to have a controlled process to change their passwords.

To do this we did the following:

  • Create a group to associate all of the service accounts for our project, for easy management
  • Add all of the service accounts to that group
  • Run a script which will check every member of the group and change the password policy so the password doesn't expire

I had a look online and couldn't really find a resource showing how to do this which didn't use the old Office 365 MSOnline PowerShell functionality, so I thought I'd share this for anyone else who might find it useful.

Below is the script I used; I usually run it each time we need a new service account where we want more granular control over changing passwords for service accounts.

# Allow the script to run, then make sure the AzureAD module is installed and loaded
Set-ExecutionPolicy -ExecutionPolicy Unrestricted

Install-Module AzureAD
Get-Module AzureAD


function ProcessUsers([string] $groupName)
{
    Write-Host 'Processing Users Function started'
     
    $ServiceAccountsGroup = Get-AzureADGroup -SearchString $groupName -All $true
    Write-Host 'Group Found' $ServiceAccountsGroup.DisplayName
    Write-Host 'Group Found' $ServiceAccountsGroup.ObjectId


    # Get every member of the service accounts group
    $groupMembers = Get-AzureADGroupMember -ObjectId $ServiceAccountsGroup.ObjectId -All $true

    Foreach ($member in $groupMembers)
    {
        Write-Host $member.DisplayName

        $user = Get-AzureADUser -ObjectId $member.ObjectId
        
        Write-Host 'Pre-update Password Policy: ' $user.PasswordPolicies
        # Set the policy so that this account's password never expires
        Set-AzureADUser -ObjectId $user.ObjectId -PasswordPolicies DisablePasswordExpiration

        $user = Get-AzureADUser -ObjectId $member.ObjectId
        Write-Host 'Post-update Password Policy: ' $user.PasswordPolicies
        Write-Host 'AccountEnabled: ' $user.AccountEnabled

        Write-Host ''
        Write-Host ''
    }

    Write-Host 'Processing Users Function Ended' 
}


# Prompt for credentials, connect to Azure AD, and process the group
$cred = Get-Credential
Connect-AzureAD -Credential $cred
ProcessUsers -groupName '<Group name goes here>'
Write-Host 'All Done'