Microsoft Integration Weekly Update: October 15, 2018

Do you find it difficult to keep up with all the frequent updates and announcements in the Microsoft Integration platform?

The Integration weekly update can be your solution. It’s a weekly roundup of topics related to Integration – enterprise integration, robust and scalable messaging capabilities, and Citizen Integration capabilities – empowered by the Microsoft platform to deliver value to the business.

If you want to receive these updates weekly, then don’t forget to Subscribe!

Feedback

I hope this is helpful. Please feel free to reach out to me with your feedback and questions.


Wait, THAT runs on Pivotal Cloud Foundry? Part 5 – .NET Framework apps

Looking for a host suitable for .NET Framework apps? Windows Server virtual machines are almost your only option. The only public cloud PaaS product that offers a higher abstraction than virtual machines is Azure’s App Service. And that’s not really meant to run an entire enterprise portfolio. So … what to do? Don’t say “switch to .NET Core and run on all the Linux-based platforms” because that’s cheating. What can you do today? The best option you don’t know about is Pivotal Cloud Foundry (PCF). In this post, I’ll show you how to easily deploy and operate .NET apps in PCF on any infrastructure.

This is part five of a five part series. Hopefully you’ve enjoyed my exploration of workloads you might not expect to see on a cloud-native platform like PCF.

About PAS for Windows

Quickly, I want to tell you about Pivotal Application Service (PAS) for Windows. Recall that PCF is really made up of two software abstractions atop a sophisticated infrastructure management platform (BOSH): Pivotal Application Service (for apps) and Pivotal Container Service (for raw containers). PAS for Windows extends PAS with managed Windows Server instances. As an operator, you can deploy, patch, upgrade, and operate Windows Server instances entirely through automation. For developers, you get an on-demand, scalable host that supports remote debugging and much more. I feel pretty safe saying that this is better than whatever you’re doing today for Windows workloads!

PAS for Windows extends PAS and uses all the same machinery

Deploying a WCF application to PCF

Let’s do this. First, I confirmed that I had a Windows “stack” available to me. In my PCF environment, I ran a cf stacks command.

Yup, all good. I created a new Windows Communication Foundation (WCF) application targeting .NET Framework 4.0. Not all of your apps are using the latest framework, so why should my sample? Note that you can run all types of classic .NET projects in PCF: ASP.NET Web Forms, MVC, Web API, WCF, console, and more.

My WCF service doesn’t need to change at all to run in PCF. To publish to PCF, I just need to provide a set of command line parameters, or, write a manifest with those parameters. My manifest looked like this:

---
applications:
- name: blog-demo-wcf
  memory: 256M
  instances: 1
  buildpack: hwc_buildpack
  stack: windows2016
  env:
    betaflag: on

There’s a buildpack just for .NET apps on Windows and all I have to do is push the code itself. About fifteen seconds after typing cf push, my WCF service was packaged up and loaded into a Windows Server container.
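Assuming that manifest sits alongside the published service output, the deployment itself is a single command; the app name, stack, and buildpack all come from the manifest:

cf push -f manifest.yml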

Browsing the endpoint returned that familiar page of WCF service metadata. 

Operating your .NET app on PCF

It’s one thing to deploy an app, it’s another thing to manage it. PCF makes that pretty easy. After deploying a .NET app, I see some helpful metadata. It shows me the stack, buildpack, and any environment variables visible to the app.

How long does it take you to get a new instance of your .NET app into production today? Weeks? Months? I just scaled up from one to three Windows container instances in less than ten seconds. I just love that.
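For reference, that scale-out is a one-liner against the app name from my manifest:

cf scale blog-demo-wcf -i 3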

Any app written in any language gets access to the same set of PCF functionality. Your .NET Framework apps get built-in log aggregation, metrics and monitoring, autoscaling, and more. All in a multi-tenant environment. And with straightforward access to anything in the marketplace through the Service Broker interface. Want your .NET Framework app to talk to Azure’s Cosmos DB or Google Cloud Spanner? Just use the broker.

Oh, and don’t forget that because PAS for Windows uses legit Windows Server containers, each app instance gets its own copy of the file system, registry, and GAC. You can see this by SSH-ing into the container. Yes, I said you could SSH in. It’s just a cf ssh command.
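For the curious, that looks something like this; the -i flag picks a specific app instance index:

cf ssh blog-demo-wcf -i 0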

That’s a full Windows file system, and I can even spin up PowerShell in there. Crazy times.


Categories: .NET, Cloud, Cloud Foundry, DevOps, Pivotal, WCF/WF

Wait, THAT runs on Pivotal Cloud Foundry? Part 4 – Data pipelines

Streaming is all the rage! No, not binge-watching Arrested Development on Netflix. Rather, I mean data stream processing: ingesting and handling infinite datasets. Instead of chewing through a nightly or weekly batch of records, you’re doing near real-time processing. Done correctly, this helps you improve data quality and make faster decisions. But how do you arrange the sequence of steps to process that data? Data pipelines! In this post, I’ll show you that this is yet another unexpected workload that runs pretty darn well on Pivotal Cloud Foundry (PCF).

So far in this series, we’ve looked at other workloads ranging from Docker images to batch jobs.

Let’s build a pipeline that processes a stream of shipment data that flows out of a relational database, gets enriched with additional info, and finally gets written to a log.

Spinning up Spring Cloud Data Flow on PCF

You could do streaming a few ways in PCF. You could manually deploy a PCF-managed instance of RabbitMQ, Solace PubSub+, or Apache Kafka. Or connect to a cloud-based broker like Azure Service Bus or Google Pub/Sub through a Service Broker. Any of those options give you a messaging backbone, but a data pipeline often involves a sequence of orchestrated steps. One turnkey solution that combines lightweight messaging with smart orchestration is Spring Cloud Data Flow (SCDF).

While it’s not that challenging to install SCDF yourself, PCF bundles it all up into a single package. All it takes is deploying the “Data Flow Server” from the PCF marketplace.

After BOSH built and deployed the Spring Cloud Data Flow server and dependent services (database, Redis cache, RabbitMQ instance), I also provisioned an instance of PostgreSQL from Crunchy Data. This is the source of my data stream.
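Provisioning a marketplace service like that Crunchy PostgreSQL instance comes down to a standard cf create-service call. The service and plan names below are placeholders, since they depend on which tiles your marketplace actually offers:

cf create-service crunchy-postgresql standard shipments-db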

That was easy. From this screen on PCF Apps Manager, I could click through and log into the SCDF dashboard. From here, I loaded all the Spring Cloud Stream App Starters. These are “just” Spring Boot apps, but we can use these to build data streams. We can build our own apps too, but it’s great to pre-load these starters. Note that everything I’m doing with this dashboard you can also do with a CLI.

With that, I had everything I needed to build out my data pipeline. 

Building and deploying a data pipeline

Before building my pipeline, I wanted to prep my PostgreSQL database. To do this, I built a simple ASP.NET Core app that created a data table and added records. I deployed this to PCF, bound it to the Crunchy Data instance, and now had a way to instantiate my relational database and add rows.

I wanted to enrich data as part of my data pipeline. When a “shipment” record comes out of PostgreSQL, it has an identifier for which warehouse it came from. I wanted to use that ID to look up the US state associated with the warehouse. I could try and use an out-of-the-box App Starter to do it, or just build my own. I chose the latter. What’s wicked is these are just Spring Cloud Stream apps. I created a new app from start.spring.io, created a POJO that represents a “warehouse shipment”, added an annotation and a method, and assembled the jar file. No other configurations needed! 

@EnableBinding(Processor.class)
@SpringBootApplication
public class DemoPipelineEnricherApplication {

  public static void main(String[] args) {
     SpringApplication.run(DemoPipelineEnricherApplication.class, args);
  }

  @StreamListener(Processor.INPUT)
  @SendTo(Processor.OUTPUT)
  public shipment EnrichShipment(shipment s) {
    switch(s.warehouse_id) {
    case 400:
        s.warehouse_location="CA";
        break;
    case 401:
        s.warehouse_location="WA";
        break;
    case 402:
        s.warehouse_location="TX";
        break;
    case 403:
        s.warehouse_location="FL";
        break;
    }
    return s;
  }
}

To make this app available to my new data pipeline, I needed to register it with the SCDF server. That means the jar file needed to be visible to the server. I uploaded the jar file to GitHub (better choices include the Maven repo, or another legit artifact repository) and registered it:
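In the SCDF shell, that registration looks roughly like this; the URI is a placeholder for wherever the jar is actually hosted:

app register --name demo-enricher --type processor --uri https://github.com/<account>/<repo>/raw/master/demo-pipeline-enricher-0.0.1.jar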

It’s pipeline time! I designed a pipeline that started with a JDBC source, sent the individual rows to my “enricher” app, and then routed the results to the application log. For fun, I also tapped that result stream to count how many messages came in for each US state.

The pipeline definition is something you can add to source control and version like any other deployment artifact. My pipeline looks like:

warehouse-stream=jdbc
  --spring.datasource.username='[username]'
  --spring.datasource.password='[password]'
  --spring.datasource.url='jdbc:postgresql://[url]:5432/shipments'
  --jdbc.max-rows-per-poll=5
  --jdbc.query='SELECT * FROM WarehouseShipments WHERE is_read=FALSE'
  --jdbc.update='UPDATE WarehouseShipments SET is_read=TRUE WHERE is_read=FALSE;'
  | demo-enricher | log

What’s cool is that after creating the stream, I had all sorts of deployment options for each app in the pipeline. That means that each app could have its own instance count and resource allocation. Much better than coarsely scaling the whole pipeline when just one component needs to scale! 
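As a sketch of what that looks like in the SCDF shell, per-app deployment properties follow the deployer.<app>.* convention; the count and memory values here are just examples:

stream deploy --name warehouse-stream --properties "deployer.demo-enricher.count=2,deployer.demo-enricher.memory=512m"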

After deploying the streams, I saw the underlying Spring Boot apps deployed to my PCF environment. SCDF is pretty sophisticated but still an easy-to-use platform!

I continually added records to my PostgreSQL database, and saw them immediately stream through SCDF on PCF. Each individual message got enriched with additional details before printing out to the log.

In this post, we saw that data pipelines have a natural home in PCF. Spring Cloud Data Flow is an ideal replacement for heavyweight ESB products in certain scenarios, and a replacement for ETL in others. Give it a try on PCF, Kubernetes, or other runtimes.


Categories: Cloud, Cloud Foundry, OSS, Pivotal, Spring

SFTP (SSH File Transfer Protocol/Secure File Transfer Protocol)

Introduction

SFTP (SSH File Transfer Protocol, also known as Secure File Transfer Protocol) is a secure file transfer protocol between two remote systems that runs over the Secure Shell protocol (SSH). It provides strong authentication and encrypted data communication between two computers connecting over an insecure network. It was designed by the Internet Engineering Task Force (IETF) as an extension of SSH, which provides the secure transport for the file transfer capabilities.

In this article, we will explain how to configure SFTP, how to use it with BizTalk Server, and how you can set up monitoring of SFTP using BizTalk360.

Contents

  • How to Configure SFTP
  • Types of authentication available in SFTP
  • Using SFTP in BizTalk Server
  • Monitoring SFTP using BizTalk360 Application

How to Configure SFTP

SFTP has replaced legacy FTP (File Transfer Protocol) and FTP/S. It provides all the functionality offered by those protocols, but it is more secure and more reliable, and configuration is easier.

Following are the steps to configure SFTP:

  • Download the OpenSSH server binaries for Windows (packages OpenSSH-Win64.zip or OpenSSH-Win32.zip)

Link: https://github.com/PowerShell/Win32-OpenSSH/releases

  • Extract the package to the folder ‘C:\Program Files’ as an administrator and install the SSH and SSHD services using the following command:
    powershell.exe -ExecutionPolicy Bypass -File install-sshd.ps1
  • Once you have run the above command, the sshd and ssh-agent services are installed on the system, and you can start them from services.msc

SFTP uses the standard SSH port, 22 – an SFTP server is basically just an SSH server. Once the user has logged in to the server using SSH, the SFTP protocol can be initiated. There is no separate SFTP port exposed on the server, and no additional firewall rule is needed beyond the one for SSH.

Once the firewall rule command is executed in PowerShell, the rule is created in the firewall configuration.
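The post doesn’t show that command, but a typical rule for the SSH port, assuming the built-in NetSecurity PowerShell module is available, looks like this:

New-NetFirewallRule -Name sshd -DisplayName 'OpenSSH Server (sshd)' -Enabled True -Direction Inbound -Protocol TCP -Action Allow -LocalPort 22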

Using Public Keys for SSH Authentication

One effective way of securing SSH access to the server is to use a public/private key pair. The generated key pair consists of a public key (which anyone may know) and a private key (which you keep secret and give to nobody). The private key can generate a signature that cannot be forged by anybody who doesn’t have that key, but anybody with the public key can verify that a signature is genuine. The public key is placed on the server and the private key is placed on your local workstation. With a key pair in place, and with SSH configured to deny password-based authentication, it becomes impossible for someone to log in using just a password.

Create the .ssh directory in the user’s profile folder and create a file named “authorized_keys” in it; this is where we store the public key used for authentication.

Generating Keys

PuTTYgen is a key generator. It generates pairs of public and private keys. When you run PuTTYgen, you will see a window where you have two choices:

  • Generate – to generate a new Public/Private key pair
  • Load – to load an existing private key

Before generating a key pair using PuTTYgen, you need to select which type of key you need.

PuTTYgen currently supports the following types of keys:

  • An RSA key for use with the SSH-1 protocol
  • An RSA key for use with the SSH-2 protocol
  • A DSA key for use with the SSH-2 protocol
  • An ECDSA (Elliptic Curve DSA) key for use with the SSH-2 protocol
  • An Ed25519 key (another elliptic curve algorithm) for use with the SSH-2 protocol

Here, we will generate an RSA key for use with the SSH-2 protocol.

  • Download PuTTYgen from the PuTTY website
  • Launch the program and click the “Generate” button. The program generates the keys for you

  • Once you click the Generate button, you must generate some randomness, by moving the mouse over the blank area

  • Enter a unique passphrase in the Key passphrase and Confirm passphrase fields

  • Save the public and private keys by clicking the Save public key and Save private key buttons

  • To paste the public key into the OpenSSH authorized_keys file, copy all the text from the field at the top of the window (starting with ssh-rsa). The copied key must be pasted either into the public key tool in the Control Panel, or directly into the authorized_keys file on your server.

Using SFTP Adapter in BizTalk Server

BizTalk Server provides the SFTP adapter to send and receive files from a secure FTP server using the SSH file transfer protocol. Let’s see how to configure the SFTP adapter for receiving and sending files from the secure server.

  • In the BizTalk Admin Console, create a SFTP Receive Port in the BizTalk application where you want to have it
  • Create a Receive Location within that Receive Port
  • Select the Transport Type as SFTP from the drop-down list

In the Properties section, configure the following settings:

Others

  • Connection Limit – Specify the maximum number of concurrent connections that can be opened to the server

Polling

  •  Polling Interval – Specify the interval at which the adapter polls the server. To poll continuously, set this value to zero

Default Value: 5

  • Unit – Specifies the unit in which the polling interval is specified. For example: Seconds, Minutes, Hours or Days

Security

  • Accept Any SSH Server Host Key – When this option is set to True, the adapter accepts the connection from any host server; when it is set to False, the Receive Location uses the fingerprint of the server for authentication. For that authentication, you need to provide the fingerprint in the SSHServerHostKeyFingerPrint field.

  • Client Authentication

There are three client authentication methods:

  • Password
  • PublicKeyAuthentication
  • MultiFactorAuthentication

Password authentication simply means providing the password in the console to authenticate the client. For PublicKeyAuthentication, you must provide the private key file in the PrivateKey field and the passphrase in the PrivateKeyPassword field.

For MultiFactorAuthentication, the user must provide the user name, password and PrivateKey. If the private key is protected by a passphrase, you also need to provide that in the PrivateKeyPassword field.

  • Password –  Specify the password, if you have set the ClientAuthentication mode to password
  • Private Key – Specify the private key for the SFTP user, if you have set the ClientAuthenticationMode to Publickeyauthentication
  • Private Key Password – Specify the passphrase key to validate the private key
  • SSH Server Host Key Fingerprint – It specifies the fingerprint of the public host key for the SSH server
  • Username – Specifies a username to log on to the SFTP server

SSH Server

  • File Mask – Specifies the file mask to use when retrieving files from a secure SFTP server
  • Folder path – Specifies the folder path on the secure SFTP server from where the Receive Location can retrieve files
  • Port – Specifies the port address for the secure SFTP server on which the file transfer takes place
  • Server Address – Specifies the server name or IP address of the secure SFTP server

Configuring the Send Port

To configure the Send Port, create a new Send Port, or double-click an existing Send Port to modify it, in an application in the BizTalk Administration Console.

  • On the General tab, in the Transport section, choose SFTP as the Type and click the Configure button
  • In the SFTP Transport Properties window, configure the following options based on your requirements

Others

  • Connection Limit – Maximum number of concurrent connections that can be opened to the server
  • Log – Creates a client-side log file to troubleshoot any errors. Enter the full path for the log file. This option is available from BizTalk Server 2016
  • Temporary Folder – A temporary folder on the SFTP server to which large files are uploaded before they are automatically moved to the required location on the same server. This option is available from BizTalk Server 2013 R2

Proxy

  • Address – Specifies either the DNS name or IP address of the proxy server
  • Password – Specifies the password of the proxy server
  • Port – Specifies the port of the proxy server
  • Type – Specifies the protocol used by the proxy server
  • User Name – Specifies the user name of the proxy server

Security

  • Access Any SSH Server Host Key – When True, the send port accepts any SSH public host key from the server; when False, the port matches the host key against the key specified in the SSHServerHostKeyFingerPrint
  • Client Authentication Mode – Specifies the authentication method that the send port uses for authenticating the client to the SSH Server.

Three modes of authentication

  • Password – If set to Password, you must provide the password in the Password property
  • PublicKeyAuthentication – If set to PublicKeyAuthentication, you must provide the private key of the user in the PrivateKey property
  • MultiFactorAuthentication – If set to MultiFactorAuthentication, you must provide the UserName with its Password. If the private key is protected by a password, provide the password in the PrivateKeyPassword as well
  • EncryptionCipher – Specifies the kind of encryption cipher; available from BizTalk Server 2013 R2. The options are Auto, AES and TripleDES in BizTalk Server 2013 R2, with more options (including Arcfour and Blowfish) added in BizTalk Server 2016
  • Password – Specify the SFTP user password if you set the ClientAuthenticationMode to Password
  • Private Key – Specify the private key for the SFTP user if you set the ClientAuthenticationMode to PublicKeyAuthentication
  • Private Key Password – Specify a private key password, if required for the key specified in the PrivateKey
  • SSH Server Host Key Finger Print – Specifies the fingerprint of the server used by the adapter to authenticate the server if the AccessAnySSHServerHostKey property is set to False. If the fingerprints do not match, the connection fails.
  • User Name – Specifies the username for the secure FTP Server

SSH Server

  • Append If Exist – if the file being transferred to the secure FTP server already exists at the destination, this property specifies whether the data from the file being transferred should be appended to the existing file. If set to True, the data is appended. If set to False, the file at the destination server is overwritten
  • Folder Path – Specifies the folder path on the secure FTP server where the file is copied
  • Port – Specifies the port address for the secure FTP server on which the file transfer takes place
  • Server Address – Specifies the server name or IP address of the secure FTP server
  • Target File Name – Specifies the name with which the file is transferred to the secure FTP server. You can also use macros for the target file name

  • Click Apply and OK again to save settings

Monitor the SFTP Location using BizTalk360

From v8.4 onward, under File Location in the Monitoring section, BizTalk360 has the capability to monitor SFTP servers. File Location Monitoring will list all the locations configured in the BizTalk artifacts (Send Ports and Receive Locations) for the SFTP Transport type. This helps users to easily monitor all the SFTP locations mapped within the Receive Locations/Send Ports.

It contains four sections:

  • SSH Server Section has the details about the SFTP Location
  • The Proxy Details Section is optional to connect to a SFTP Server behind a firewall

Note: In BizTalk, Proxy details are available from BizTalk 2013 R2

  • Security Details Section has the authentication details
  • In the SFTP Monitoring Config Section, you can configure the monitor with threshold conditions for the metric File Count

Based on the need, you can monitor the location with threshold conditions. If the specific condition is met, the user gets notified through email, SMS, or another communication channel.

For monitoring the SFTP server, BizTalk360 uses the third-party tool nSoftware. Using the nSoftware IPWorks SSH product, BizTalk360 connects to the secure server with Private Keys and password for monitoring the location.

For monitoring SFTP in BizTalk360, you can refer to the knowledge base article at this link.

Below are some code snippets for connecting to the secure server using nSoftware.

Password Authentication


sftp.SSHUser = "test";
sftp.SSHPassword = "password";
sftp.SSHPort = 22;
sftp.SSHHost = "SSHHost";
sftp.Config("SSHAcceptServerHostKeyFingerPrint=6a:d3:65:96:d1:9f:9d:f9:57:4e:6b:3b:11:57:5a:15");
sftp.SSHLogon(sftp.SSHHost, sftp.SSHPort);
Console.WriteLine("Authenticated");
sftp.SSHLogoff();


Public key Authentication
 
sftp.SSHUser = "test";
sftp.SSHCert = new Certificate(CertStoreTypes.cstPPKKeyFile, "....filesserver_cert.pem", "test", "*");
sftp.SSHAuthMode = SftpSSHAuthModes.amPublicKey;
sftp.SSHPort = 22;
sftp.SSHHost = "SSHHost";
sftp.Config("SSHAcceptServerHostKeyFingerPrint=6a:d3:65:96:d1:9f:9d:f9:57:4e:6b:3b:11:57:5a:15");
sftp.SSHLogon(sftp.SSHHost, sftp.SSHPort);
Console.WriteLine("Authenticated");
sftp.SSHLogoff();

Conclusion

This article demonstrates the creation of an SFTP server. Using the SFTP server in BizTalk Receive Locations and Send Ports, you can send files securely and monitor the SFTP server using BizTalk360.

If you have any feedback or suggestions, please write to us at support@biztalk360.com.

Wait, THAT runs on Pivotal Cloud Foundry? Part 3 – Background, batch, and scheduled jobs

So far in this series of posts, we’ve seen that Pivotal Cloud Foundry (PCF) runs a lot more than just web applications. Not every app has a user-facing front-end component. Some of your systems run in the background or on a schedule and perform a variety of important tasks. In this post, I’ll take a look at how to deploy background workers, on-demand batch tasks, and scheduled jobs.

This is the third in a five-part series of posts.

Deploying and running background workers

Pivotal Cloud Foundry makes it easy to run workers that don’t have a routable address. These background jobs might listen to a database and respond to data changes, or respond to messages in a work queue. Let’s demonstrate the latter. 

I built a .NET Core console app that’s responsible for pulling “loan” records from RabbitMQ and processing them. You can build these background jobs in any programming language supported by Cloud Foundry.

What’s nice is that background jobs have access to all the useful PCF capabilities that web apps do. One such capability? Service Brokers! Devs love using Service Brokers to provision and access backing services. My background job needs access to RabbitMQ and I don’t want to hard-code any connection details. No big deal. I first spun up an on-demand RabbitMQ instance via the PCF Service Broker.

My .NET Core app uses the Steeltoe Service Connector (and the RabbitMQ .NET Client) to load service broker connection info and talk to my instance.

static void Main(string[] args)
{
    //pull service broker configuration
    var builder = new ConfigurationBuilder()
        .AddEnvironmentVariables()
        .AddCloudFoundry();

    var configuration = builder.Build();

    //get our fully loaded service
    var services = new ServiceCollection();
    services.AddRabbitMQConnection(configuration);
    var provider = services.BuildServiceProvider();
    ConnectionFactory f = provider.GetService<ConnectionFactory>();

    //connect to RMQ
    using (var connection = f.CreateConnection())
    using (var channel = connection.CreateModel())
    {
        channel.QueueDeclare(queue: "loans", durable: true, exclusive: false, autoDelete: false, arguments: null);
        var consumer = new EventingBasicConsumer(channel);

        //fire up when a new message comes in
        consumer.Received += (model, ea) =>
        {
            var body = ea.Body;
            var message = Encoding.UTF8.GetString(body);
            Console.WriteLine("[x] Received loan data: {0}", message);
        };
        channel.BasicConsume(queue: "loans", autoAck: true, consumer: consumer);
        Console.ReadLine();
    }
}

Apps deployed to Cloud Foundry are typically accompanied by a YAML manifest. You can provide the parameters on the CLI, but versioned, source-controlled manifests are a better way to go. For these background jobs, the manifests are simple. Note two key things: the no-route parameter is “true” so that we don’t get a route assigned, and the health-check-type is set to “process” so that the orchestrator monitors process availability and doesn’t try to ping a non-existent web endpoint. Also notice that I bound my app to the previously-created RabbitMQ service instance.

---
applications:
- name: core-demo-background
  memory: 256M
  no-route: true
  health-check-type: process
  services:
  - seroter-rmq

After a quick cf push, my background app was running, and bound to the RabbitMQ instance.

This job quietly sits and waits for work to do. What’s neat is this can also take advantage of PCF’s autoscale capability, and scale by monitoring RabbitMQ queue depth, for example. For now, one instance is plenty. I logged into RabbitMQ and sent in a couple sample “loan” messages.

Sure enough, when I viewed the aggregated application logs for my background job, I saw the content of each read message printed out. 

These sorts of workers are a useful part of most systems, and PCF offers a resilient, manageable place to run them.

Deploying and running on-demand batch tasks

How many useful, random scripts do your system administrators have sitting around? You know, the ones that create users, reset demo environments, or purge FTP shares. Instead of having those scripts buried on administrator desktops, you can run these one-off batch jobs in PCF.

I created another .NET Core console application. This one pretends to sweep expired files from a shared folder. I deployed this application to PCF with a --no-start flag since I want to trigger it on demand.

cf push --no-start

Now, to trigger the job, I need to know the start command. This depends on how you deployed it. Since I used the .NET Core buildpack, I want to start up the app one time to discover how PCF starts up the app.

That command showed me where the .NET Core executable lives in the container. I stopped the app again, and switched over to the “Tasks” view in the PCF Apps Manager interface. I can do all these things via the CLI as well, but I’m a sucker for a nice UX. There’s a “run task” button that lets me define a one-off task definition.

Here I gave the task a name, pasted the start command I found above, and that was it! When I hit “run”, PCF instantiated a new container instance and shut down the container when the task was complete. And that’s what I saw. There was a log entry indicating a successful job run, and the application logs showed the output of the task. Nice!
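For reference, the same thing is a single command from the CLI; the app name below is a placeholder for the app you pushed with --no-start, and the quoted command is whatever start command you discovered above:

cf run-task core-demo-task "<start command discovered above>" --name purge-expired-files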

This is a great option for one-off jobs and scripts. Consolidate them in PCF, and get all the availability and auditing you need.

Deploying and running scheduled jobs

Finally, some of those one-off jobs may not be as one-off as you thought! Instead of asking your admin to trigger a task once a day to purge expired files, how about you schedule the job to run on a schedule? 

PCF also offers a scheduling component to trigger tasks (or API calls!) on a recurring basis. On the same “tasks” tab of the PCF Apps Manager UX, there’s a “jobs” section for scheduled tasks. Besides giving the job a name and a command (the same as the task command above), you enter a cron expression for the schedule itself. The expression is in a MIN HOUR DAY-OF-MONTH MONTH DAY-OF-WEEK format. For example, “*/15 * * * *” runs the job every 15 minutes, and “30 10 * * 5” runs the job at 10:30am every Friday. My job below is set to run every minute.

We’re all building lots of web apps nowadays, but there’s still plenty of need for event-driven or scheduled background work. PCF may surprise you as an entirely suitable platform for those workloads.


Categories: .NET, Cloud, Cloud Foundry, Messaging, Pivotal

BizTalk WCF-SQL Error: Microsoft.ServiceModel.Channels.Common.ConnectionException: Login failed for user

And yes, this is just another “Login failed for user” SQL Server WCF-Adapter related error. In the past, I wrote about a similar topic in another BizTalk WCF-SQL Error post.

This time the error message, the cause, and the solution are slightly different. This time, while trying to communicate with a brand-new SQL Server server/database to insert data into a table through the BizTalk WCF-SQL adapter, I got the following error:

Microsoft.ServiceModel.Channels.Common.ConnectionException: Login failed for user ‘BTSHostSrvc’. —> System.Data.SqlClient.SqlException: Login failed for user ‘BTSHostSrvc’.

at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, SqlCredential credential, Object providerInfo, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance, SqlConnectionString userConnectionOptions, SessionData reconnectSessionData, DbConnectionPool pool, String accessToken, Boolean applyTransientFaultHandling, SqlAuthenticationProviderManager sqlAuthProviderManager)

at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, DbConnectionPoolKey poolKey, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection, DbConnectionOptions userOptions)

at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnectionPool pool, DbConnection.

BizTalk Server WCF-SQL: Login failed for user

In the event viewer the message is pretty much the same:

A message sent to adapter “WCF-Custom” on send port “STAGING_BULK_SQL_WCf_SEND” with URI “mssql://SQLSRV/ /ESBAsync” is suspended.

Error details: Microsoft.ServiceModel.Channels.Common.ConnectionException: Login failed for user ‘DOMAIN\BTSHostSrvc’. —> System.Data.SqlClient.SqlException: Login failed for user ‘DOMAIN\BTSHostSrvc’.

at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, SqlCredential credential, Object providerInfo, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance, SqlConnectionString userConnectionOptions, SessionData reconnectSessionData, DbConnectionPool pool, String accessToken, Boolean applyTransientFaultHandling, SqlAuthenticationProviderManager sqlAuthProviderManager)

at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, DbConnectionPoolKey poolKey, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection, DbConnectionOptions userOptions)

at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnectionPool pool, DbConnection owningObject, DbConnectionOptions options, DbConnectionPoolKey poolKey, DbConnectionOptions userOptions)

at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject, DbConnectionOptions userOptions, DbConnectionInternal oldConnection)

at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject, DbConnectionOptions userOptions, DbConnectionInternal oldConnection)

at System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, UInt32 waitForMultipleObjectsTimeout, Boolean allowCreate, Boolean onlyOneCheckConnection, DbConnectionOptions userOptions, DbConnectionInternal& connection)

at System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal& connection)

at System.Data.ProviderBase.DbConnectionFactory.TryGetConnection(DbConnection owningConnection, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal oldConnection, DbConnectionInternal& connection)

at System.Data.ProviderBase.DbConnectionInternal.TryOpenConnectionInternal(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)

at System.Data.ProviderBase.DbConnectionClosed.TryOpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)

at System.Data.SqlClient.SqlConnection.TryOpenInner(TaskCompletionSource`1 retry)

at System.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry)

at System.Data.SqlClient.SqlConnection.Open()

at Microsoft.Adapters.Sql.SqlAdapterConnection.OpenConnection()

— End of inner exception stack trace —

Server stack trace:

at Microsoft.Adapters.Sql.SqlAdapterConnection.OpenConnection()

at Microsoft.Adapters.Sql.ASDKConnection.Open(TimeSpan timeout)

at Microsoft.ServiceModel.Channels.Common.Design.ConnectionPool.GetConnection(Guid clientId, TimeSpan timeout)

at Microsoft.ServiceModel.Channels.Common.Design.ConnectionPool.GetConnectionHandler[TConnectionHandler](Guid clientId, TimeSpan timeout, MetadataLookup metadataLookup, String& connectionId)

at Microsoft.ServiceModel.Channels.Common.Channels.AdapterRequestChannel.OnOpen(TimeSpan timeout)

at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)

at Microsoft.BizTalk.Adapter.Wcf.Runtime.OneWayOperationSendPortRequestChannel`1.OnOpen(TimeSpan timeout)

at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)

at System.ServiceModel.Channels.ServiceChannel.OnOpen(TimeSpan timeout)

at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)

at System.ServiceModel.Channels.CommunicationObject.Open()

Exception rethrown at [0]:

at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)

at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)

at System.ServiceModel.ICommunicationObject.Open()

at Microsoft.BizTalk.Adapter.Wcf.Runtime.WcfClient`2.GetChannel[TChannel](IBaseMessage bizTalkMessage, ChannelFactory`1& cachedFactory)

at Microsoft.BizTalk.Adapter.Wcf.Runtime.WcfClient`2.SendMessage(IBaseMessage bizTalkMessage)

MessageId: {84B22A22-13F7-47C7-91B5-A863E64E268E}

BizTalk Server WCF-SQL: Login failed for user

Cause

Once again (although sometimes this is not quite true), the cause of the problem is simple to diagnose, and the error message gives very good intel about the cause of the problem.

This problem occurs because the user account that you use to access the database, in my case the BizTalk Host Instance account, doesn’t have permission to connect to the SQL Server or SQL Server instance.

Just to be clear, this is not about having permission to insert, read, or even full permission to perform operations on a specific database; that is completely different. I checked all of that and the user had the correct access/permissions. What I forgot was to grant access to connect to the SQL Server/SQL Server instance.

Solution

To solve this issue, you must give the user, in my case the BizTalk Host Instance account, access to connect to the SQL Server. To do that, you must:

  • Open SQL Server Management Studio and connect to your server.
  • In the Object Explorer, expand the “Security” folder under the server.
  • Right click on the “Logins” folder and choose “New Login…”
  • Add the username or group in the format “Domain\UserNameOrGroup”


  • Choose the “Securables” tab and make sure that you grant “Connect SQL” permission to the SQL Server/SQL Server instance


  • Click “OK” and your user will be created and have access to connect to your SQL Server.

Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc. He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community.

Wait, THAT runs on Pivotal Cloud Foundry? Part 2 – TCP-routable services

Platform-as-a-Service products typically run web apps. That is, apps that accept HTTP traffic and listen on ports 80, 8080 or 443. As you survey the landscape today, you’ll find that’s still the case in the most popular public cloud application runtimes. That’s not a bad thing, but sometimes you have workloads with different routing needs. In this post, I’m going to demonstrate TCP Routing in Pivotal Cloud Foundry (PCF), and show Redis running directly in the platform.

As a reminder, this is the 2nd post in a series about “unexpected” workloads running on PCF.

  • Part 1 – Deploying and running Docker images
  • Part 2 – Setting up TCP routable services
  • Part 3 – Running batch and scheduled jobs
  • Part 4 – Configuring data streaming apps
  • Part 5 – Deploying .NET Framework apps to Windows Server

About TCP Routing in PCF

TCP Routing has been part of Cloud Foundry for two years now. Basically, TCP Routing lets your app handle traffic over non-HTTP TCP protocols. This is valuable for custom-built apps or packaged software that communicate with binary payloads or specialized transports.

By default, custom-built apps are set to always listen on port 8080 in Cloud Foundry. The buildpack process (mentioned in part 1 of the series) configures that, although you can change this behavior. Even if your app does listen on port 8080, TCP Routing makes it easy to expose a non-HTTP port to the outside world via network address translation.

Source: https://docs.cloudfoundry.org/adminguide/enabling-tcp-routing.html

Assuming your Cloud Foundry admins configured TCP Routing in your environment(s), you can set up this type of per-app routing entirely via self-service.

Deploying a TCP routable workload

Instead of demonstrating with an app I wrote myself, I thought it’d be more fun to deploy a well-known software product. Enter Redis! Redis is a wildly-popular key-value store, and there are many ways to install it. One of the easiest options is the Docker image. Note that Redis typically exposes access over port 6379. When deploying Docker images to Cloud Foundry, the port defined in the EXPOSE directive is what’s actually exposed by the Cloud Foundry app container. I didn’t know that until this week!

After logging into my PCF environment, I ran the cf domains command to see what routable domains were available to me.

I’ve got the “standard” domain for my regular web apps (here, apps.pcfone.io), a domain for TCP routing (tcp.apps.pcfone.io) and one for private traffic (apps.internal) that we’ll mess with shortly.

I started by pushing a Redis image to PCF. I’m purposely using the --no-route flag to ensure it doesn’t get a default web route in the apps.pcfone.io domain.

cf push redisdocker --docker-image redis -i 1 -m 256M --no-route -u process

After about ten seconds, the container is up and running. Notice however, that it’s currently not routable.

Let’s change that. Now, because all apps sit behind the same edge router and TCP routes don’t have a path component, I can’t have two apps listening on the same TCP port. So, there’s a good chance that the default Redis port of 6379 is already in use somewhere. That’s cool; we can tell PCF to assign a random port at the edge router that forwards traffic to port 6379 on the app container.

cf map-route redisdocker tcp.apps.pcfone.io --random-port

The result? I get a TCP route assigned on port 10011.

Again, note that the app container is still listening on 6379, because that’s what was set by the Docker image at deploy time. But through network address translation, the external facing port is a different value. Let’s prove that Redis is actually running and addressable.

I spun up the redis-cli and issued a command.

Ok, clearly it’s reachable via the public Internet over a non-HTTP connection. That’s neat. I did a LITTLE more with Redis than that, by also adding and retrieving a key.
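As a rough example, with the host and port taken from the TCP route above, that round trip looks something like this (the key name is just an example):

redis-cli -h tcp.apps.pcfone.io -p 10011 SET greeting "hello"
redis-cli -h tcp.apps.pcfone.io -p 10011 GET greeting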

With this pattern, my apps running in PCF (or anywhere) can send requests to PCF-hosted software that handles all kinds of payloads and protocols. But what if you don’t want these workloads to be Internet accessible?

Setting up private TCP routing

The above demo is cool, but you might not like having your cache, MQTT bus, or whatever, exposed to public traffic. This is where the relatively-new container-to-container networking is pretty darn neat.

By default, app instances in Cloud Foundry talk to each other through the shared router. That’s not awful, but for performance reasons, or to access private services, you may want to communicate directly with another app container. With polyglot service discovery now part of PCF, it’s easy to do this via DNS, versus hard-coded container addresses. Let me show you.

First, I removed the publicly-accessible TCP route from my Redis instance.

Now, you can no longer reach it. Next up, I wanted to map my Redis instance to the apps.internal domain that’s ONLY accessible within a Cloud Foundry.

cf map-route redisdocker apps.internal --hostname redisdocker

Because we’re not dealing with any extra NAT action, I can directly hit Redis on port 6379. I built a Node.js app that connects to Redis, adds a key, and reads a key. I set the connection details to the internal domain and standard port.

var options = { host: "redisdocker.apps.internal", port: 6379 };
var redis = require("redis"), client = redis.createClient(options);
client.set("demo-key", "hello"); // example key; the actual key name isn't shown in the post
client.get("demo-key", function (err, reply) { console.log(reply); });

Then I pushed this app to PCF with a --no-start flag so that I could set up connectivity between my app and Redis. Apps can’t automatically reach other apps on the apps.internal domain unless we give permission. It’s easy to do. Via the Cloud Foundry CLI, I can create, delete, and list network policies. A network policy determines which apps can directly talk to each other (without going through the router), over which port and protocol.

cf add-network-policy demo-app --destination-app redisdocker --protocol tcp --port 6379

Notice that in that command, all I said was that one app (demo-app) could talk to another app (redisdocker). I didn’t have to map IP addresses, or anything like that. As app instances scale in and out, there’s no need to change the policies to reflect that. That’s a considerate UX.

After executing the above command, my Node.js app (demo-app) could “see” the redisdocker app instance. And notice that I’ve allowed traffic to the default Redis port, 6379.

With that policy in place, I loaded the Node.js app, and it directly routed requests over port 6379 to my Redis instance.

Unlike most PaaS-like products, PCF offers TCP routing over non-HTTP channels. While you may still (wisely) choose to run certain workloads—clustered services, apps that need multiple IPs exposed per container, or workloads with complex persistence needs—in an environment outside of PCF, it’s useful to know that you can leverage PCF to host and orchestrate a wide variety of publicly or privately routable workloads. Keep an eye out tomorrow for the next post, where we investigate batch jobs.


Categories: Cloud, Cloud Foundry, Node.js, Pivotal

It’s time to upgrade – Here is BizTalk360 v8.9

Hi there! It’s time to upgrade your BizTalk360 installation! We are here with our next release of BizTalk360, v8.9. As promised, this release also comes with a bunch of exciting new features, enhancements and, of course, some bug fixes.

The series of blogs explaining the different features coming in v8.9 has already been released. But, to make it easy for our customers, we thought it would be nice to give a brief description of all these features in a single place. This way, it is easy to get the big picture of this new release.

As the quote below says:

The key is to set realistic customer expectations, and then not to just meet them, but to exceed them – preferably in unexpected and helpful ways.

– Richard Branson

The features are added to the product based on customer feedback and suggestions. We listen to customer needs and add them to the product, to make it as suitable as possible for the user.

Come on, let’s jump in to get the list!

User Access Policy enhancements

In the User Access Policy section, the Application access section has new capabilities. Initially, it was a list of applications which needed to be checked for providing access to Normal Users/Groups. But then, what about newly deployed applications? Every time a new application was deployed, the Admin would need to scroll down the entire list to check for the new applications and then provide access. This was very time-consuming.

Now, to ease the process, we have provided different rules for configuring the access, but only one rule can be applied at a time. The different rules include:

  • Grant Access by Applications
  • Grant Access to All Applications
  • Wildcard Search
  • Grant Access to Application groups

Grant Access to All Applications

As the name denotes, enabling this rule will provide access to all the available applications for the Normal Users/Groups. The user will automatically be granted access to all the newly deployed applications.

Wildcard Search

This enables users to select the options from the wildcard operator drop down. Once this rule is configured, the user will have access to all the applications matching this wildcard. The user will automatically be given access to the newly created applications that match the wildcard.

Grant Access to Application Groups

With this new capability, you can create Application Groups and map the applications to that group. Once the user is given access to the Application group, he/she can access all the applications which are mapped to that group.

Grant Access by Applications

For persisting the existing configuration data, we have another rule available: ‘Grant Access by Applications’. Once the upgrade is completed, this will be the default rule selected for existing users. The only difference between this rule and the other new rules is that, when Grant Access by Applications is configured, newly created applications will not automatically be given access as in the other rules.

Stop Alerts for Maintenance during business holidays

If a user sets up multiple maintenance windows, they need to configure the business holidays individually. It takes much of your time to configure them for every single environment in BizTalk360. To reduce that time and ease the maintenance configuration for users, the capability to add business holiday calendars has been introduced.

These business holiday calendars can be mapped during maintenance window setup. This new configuration section is introduced in the Monitoring Notification settings section as “Configure Business Holidays”.

In the Stop Alerts for Maintenance settings page, a new section is introduced to configure the business holiday calendars. All the configured calendars with Status enabled will be displayed in the “Select Business Holiday Calendar” drop down list. A user can select the desired calendar and use it for a maintenance window. During the business holiday, a maintenance window will be active.

Users can also exclude certain alarms from the maintenance. This means that, except for the selected alarms, all other alarms will undergo maintenance. This capability is very useful in situations where administrators don’t want to receive alerts during the weekends, except for a few specific alarms.

Web Endpoint monitoring improvements

From v8.9 on, BizTalk360 Web Endpoint authentication is extended to support Basic Access Authentication, Certificate Authentication and Azure Services Authentication.

Let’s have a look at the improvements in these areas.

Basic Access Authentication

This is a method for an HTTP user agent to provide a user name and password when making a request. For unauthenticated requests, the server should return a response whose header contains an HTTP 401 Unauthorized status and a WWW-Authenticate field. In the BizTalk Admin Console, an HTTP endpoint can provision Basic authentication with a username and password.

Certificate Authentication

In BizTalk360, the authentication type of Basic or Windows, along with the client certificate thumbprint, is configured in the Authorization section of Web Endpoint monitoring.  

Azure Services Authentication

To be able to use Azure Services Authentication, a Service Principal must be configured in Azure. A Service Principal is an application within Azure Active Directory whose authentication tokens can be used as the client Id, client secret, and tenant fields (the subscription can be independently recovered from your Azure account details).

Additional content types

BizTalk360 8.9 extends the support to additional content types in request and response objects:

  1. SOAP (1.2) Content Type – “application/soap+xml” is the SOAP 1.2 content type which is added to the list. With this additional content type, the SOAP 1.2 protocol is supported in web endpoint monitoring. The user can configure XPath conditions to monitor SOAP 1.2 endpoints, based on the results of the execution.
  2. Custom Content Type – When Endpoint Request/Response content types are not supported by BizTalk360, the Web Endpoint throws an HTTP 415 Unsupported Media Type error. To prevent this from happening, you can configure Custom Content types.

Extended Import/Export Configuration

In version 8.9, we added support for import and export of the following sections:

  1. Knowledge Base
    • Service Instances
    • ESB Exceptions
    • Event Logs
    • Throttling Data
  2. BizTalk Reports
  3.  Dashboards
    • Operation (Default & Custom Dashboards)
    • Analytics (Default & Custom Dashboards)
    • EDI Dashboards
    • ESB Dashboards
  4. Custom Widgets

The details of this feature can be found here.

Additional columns filter capability

Grid columns in BizTalk360 are getting a fresh look. You can customize the column headers which are most important to your business scenario.

Grid columns can be dynamically removed or added based on the user preference. As per the settings in the configuration section, columns will be aligned and displayed in the grid view. These customized column settings can be saved for future reference as well. We are sure, this capability will add more value when the administrator is looking for the instances/messages based on various conditions.

As an initial phase, this implementation has been done in the following areas in BizTalk360:

  1. Message Box Queries
  2. Graphical Flow (Tracking)
  3. Electronic Data Interchange

BizTalk360 allows saving as many patterns as the user wants. To search the messages based on different scenarios, admins prefer different filter conditions to validate. In those situations, BizTalk360 allows you to save different query filters and keeps them for future use. You can also download the customized column data using the Export to Excel capability.

Centralized Advanced Event Log viewer performance improvement

In our previous versions, up to v8.8, the Event Log collection logic was not segregated per server. Event Log collection was bound to the BizTalk environment and not to an individual server. However, the user did have control over configuring the sources based on the need.

In the new version of BizTalk360, users can control the Event Log collection for individual servers. As an administrator, you know which sources need the most consideration. So, there is also an option in BizTalk360 to configure the BizTalk and SQL Server sources separately.

Using these settings, you can customize and narrow down your Event Log search.



PowerShell Notification Channel

In our earlier versions, the users were already able to send notifications to specific notification channels (E.g.: Slack, ServiceNow, Webhook, Teams). Now the user can configure PowerShell scripts in the Notification Channel while configuring an alarm.

More Enhancements in BizTalk360 v8.9

Besides the above mentioned features, we have also brought a number of enhancements to existing features.

Monitor queues for message age – We have enhanced the option to monitor queues on message age (the time a message has existed in the queue) for IBM MQ and Service Bus Queues.

Notifications grouped by Error Description – Previously, Service instances were grouped by Error Code in the alert emails. Now, there is a new setting “Enable Group by Description” introduced to group the service instances based on the Error Description to get full insight about your errored service instances.

New filter option in ESB Exception portal – There is a new filter option “Service Name” introduced in the ESB Exception Data query builder. This will enable users to perform extensive search and get the desired results.

Restore XSLT templates – Whenever we made improvements to the default email template, there was no option for users to pick up the changes from the GUI. This forced them to manually copy/paste the XSLT from the database to utilize the new changes. To avoid this manual intervention, a new option, “Restore System XSLT”, has been provided to restore the changes from the GUI.

PDF download available in more areas – PDF download capability is not new in BizTalk360. We have now provided this option in a few more areas of the application, so you can download reports, dashboards and message flows from the GUI. This option is available in the Operation, Monitoring & Analytics Dashboards, Graphical Flow (Tracking) and Messaging Patterns.

Group and monitor your Logic Apps by Resource Group – In earlier versions, Logic Apps were not grouped by Resource Group name, which caused a display issue for Logic Apps with the same name in different Resource Groups. With the introduction of the "Resource Group" column, Logic Apps are now grouped by Resource Group in every configured Azure subscription.

Multiple installer improvements – We have enhanced the BizTalk360 installer in v8.9 in a few areas:

  • Single credentials during upgrade – Only one set of credentials (User Name, Password) will be asked for during the upgrade process, provided the same credentials were used for all installed components
  • SQL Authentication – BizTalk360 now supports SQL Authentication for upgrades

Finally, there are of course a number of bug fixes as well. Kindly refer to the Release Notes for the complete details.

Conclusion

We always monitor the feedback portal and take up the suggestions and feedback. Now we would like to ask you, our customers, to please take the time to fill in this questionnaire. It will help us prioritize the upcoming feature work, let us know your main pain points, and help us further improve the product.

Why not give BizTalk360 a try? It takes about 10 minutes to install, and you can see for yourself how it improves the security and productivity of your own BizTalk environments. Get started with the free 30-day trial. Happy monitoring with BizTalk360!

Author: Praveena Jayanarayanan

I am working as Senior Support Engineer at BizTalk360. I always believe in team work leading to success because “We all cannot do everything or solve every issue. ‘It’s impossible’. However, if we each simply do our part, make our own contribution, regardless of how small we may think it is…. together it adds up and great things get accomplished.”

Wait, THAT runs on Pivotal Cloud Foundry? Part 1 – Docker images

When I say “PaaS” what comes to mind? If you’re like most people I talk to, you think of public cloud platforms for modern web apps. So I’ll forgive you if you didn’t realize that things are different now!

The first generation of PaaS products had a few things in common. They were public cloud only. You had to build apps with the runtime constraints in mind. They only ran stateless web apps. Linux was the only runtime. When Cloud Foundry first came out, it checked most of those boxes. But over the years, Pivotal Cloud Foundry (PCF) evolved to do much more.

Many people still think of those first-generation PaaS constraints when considering PCF, and specifically, the Pivotal Application Service (PAS). So, I thought it’d be fun to look at non-traditional workloads. In this brief five-part series, I’m going to show off the following scenarios:

  • Part 1 – Deploying and running Docker images
  • Part 2 – Setting up TCP routable services
  • Part 3 – Running batch and scheduled jobs
  • Part 4 – Configuring data streaming apps
  • Part 5 – Deploying .NET Framework apps to Windows Server

Deploying and running Docker images

Most Cloud Foundry users depend on buildpacks. Developers push source code, and the buildpack pulls in dependencies, frameworks, and runtimes, then builds a tarball that's deployed as an OCI-compatible container in Cloud Foundry. One major benefit of the buildpacks model is that the platform brings the root file system to your app. You're not responsible for finding secure base images or maintaining that "layer" of the stack. But all that said, some folks like using Docker images as their packaging unit, whether manually created (don't do that) or as the output from a continuous integration pipeline.

Whether Cloud Foundry builds the container or you hand it a Docker image, the platform treats it the same. At runtime, the orchestrator executes all containers with runC, the same OCI container runtime used by Docker and Kubernetes. Let's see this in action.

You can try this for free on Pivotal Web Services if you don’t have a Cloud Foundry available. I’m using a different environment, but they all behave the same. That’s the point! After you cf login to Cloud Foundry, it’s time to push a container.
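If you're following along on Pivotal Web Services, the login step looks roughly like this. This is a minimal sketch; the API endpoint is the Pivotal Web Services one, and the user, org, and space names are placeholders for your own environment.

cf login -a api.run.pivotal.io -u you@example.com
cf target -o your-org -s your-space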

How about we start with a Node.js web app? Here's an Express app built by the folks at Bitnami. We can actually push this to Cloud Foundry with a single command.

cf push nodedocker --docker-image bitnami/node-example:0.0.1 -i 2 -m 128M

In that command, notice a couple of things. First, I'm using the --docker-image flag. Since I'm pulling a public image from the public Docker Hub, no credentials are needed. PCF also works with private images and private registries. Otherwise, it's a standard command that asks for two instances and 128M of memory per instance. Within ten seconds, you'll have two routable instances ready to process traffic.
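If your image lives in a private registry instead, the push is nearly identical. Here's a rough sketch: the registry host, image name, and username below are made up, and the registry password is supplied through the CF_DOCKER_PASSWORD environment variable rather than on the command line.

CF_DOCKER_PASSWORD=<your-registry-password> cf push nodedocker-private --docker-image registry.example.com/myteam/node-example:0.0.1 --docker-username myuser -i 2 -m 128M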

Seriously. That’s amazing. And PCF doesn’t “mess with” the image. Whatever layers are in your Docker image are what run in Cloud Foundry. One thing PCF *does* do is volume mount a directory that contains a unique certificate for the container. This regularly-rotated credential (up to hourly!) is used for things like mTLS. You can see it by SSH-ing into the container and doing printenv or browsing the file system. Yes, you can actually SSH into containers whether built by the platform or via Docker images. No black boxes here.
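If you want to poke around yourself, a quick sketch looks like this. Instance index 0 is just an example, and the exact credential paths can vary by platform version, so I'm reading the location from the environment variable rather than hard-coding it.

cf ssh nodedocker -i 0
# inside the container:
printenv CF_INSTANCE_CERT CF_INSTANCE_KEY
ls -l "$(dirname "$CF_INSTANCE_CERT")"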

Deploying an app’s only half the story. Does PCF treat the running app the same way if it was packaged as a Docker image? Yup. Jumping to the PCF Apps Manager UX, you see our running app.

If you look closely, you see that we indicate the app type, in this case, that it’s from a Docker image.

More importantly, the platform bestows all the operational goodness on this app as any other. For example, all the logs from each app instance are collected and aggregated.

You can add environment variables. Configure auto-scaling. Monitor app and container health metrics. Bind to marketplace services. All the things that make PCF a great runtime for apps make it a great runtime for apps packaged as Docker images.
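To make that concrete, here's a rough sketch of those day-two operations with the cf CLI; the environment variable name and the service instance name are placeholders, not anything the platform provides by default.

cf set-env nodedocker FEATURE_FLAG on   # add an environment variable
cf restart nodedocker                   # restart so the new variable takes effect
cf scale nodedocker -i 3                # scale out to three instances
cf logs nodedocker --recent             # view aggregated logs from all instances
cf bind-service nodedocker my-database  # bind a marketplace service instance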

So try it out yourself. If you're building custom apps, PCF is a great destination regardless of how you want to ship code. Stay tuned tomorrow for a fun network routing demonstration.



Microsoft Integration Weekly Update: October 8, 2018

Do you find it difficult to keep up to date with all the frequent updates and announcements on the Microsoft Integration platform?

Integration weekly update can be your solution. It's a weekly update on topics related to Integration – enterprise integration, robust & scalable messaging capabilities and Citizen Integration capabilities, empowered by the Microsoft platform to deliver value to the business.

If you want to receive these updates weekly, then don’t forget to Subscribe!

Feedback

Hope this is helpful. Please feel free to reach out to me with your feedback and questions.
