How to list all the columns of a specific data type in a database?
Working with SQL Server can be quite interesting. It’s a powerful tool, and you can get really creative with your SQL queries.
If you plan to use BAM Alerts in your BizTalk Server project, you must install SQL Notification Services and its prerequisites on your BizTalk Server computer. This SQL Server 2005 feature is not included in SQL Server 2008 R2/SP1, but you can install it from the Microsoft Download Center. To install SQL Server 2005 Notification […]
FILESTREAM allows you to store unstructured large object (LOB) data in the file system instead of in the database.
StreamInsight has a very powerful management service that is fully available to developers and administrators alike. Any technique that you see in the Event Flow Debugger or in the API can be remotely invoked via the Management Service. This enables all sorts of tasks and scenarios, such as remotely deploying queries to a live server, querying the memory used by a live query, or capturing event traces to diagnose query correctness issues.
In this blog I’ll explore some of the “getting started” techniques for successfully connecting to the management service for diagnostic and debugging tasks, and show some tips and techniques I’ve picked up along the way:
All of the StreamInsight APIs are available either in process (in the case of the embedded server model) or via the management service (in the case of the remote server model). To think about this from a code perspective:
Embedded Server:

    var cepServerEmbedded = Server.Create();

Remote Server:

    var cepServerRemote = Server.Connect("http://servername/StreamInsight");
In both cases the Server object is the same from a functional perspective. Under the hood they are quite different (the remote server usage calls the management service over the wire), but virtually all of the same API calls work without any other modification. Regardless of whether you run embedded or remote, being able to connect diagnostic tools (such as the Event Flow Debugger or a PowerShell script) is crucial to understanding and evaluating the performance and stability of a running system.
In the case of the standard StreamInsight service that ships with V1, the management service is already exposed via the URL http://localhost/StreamInsight/Default. For deployments using an embedded server, you’ll need to expose the management service yourself before leveraging any of the tools or techniques in this blog post. Luckily, that’s a straightforward process.
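For example, connecting to that default management endpoint from code is a one-liner; a minimal sketch (the URL matches the default instance above, and what gets printed depends on the applications deployed on your server):

```csharp
using System;
using System.ServiceModel;
using Microsoft.ComplexEventProcessing;

class ConnectSample
{
    static void Main()
    {
        // Connect to the management service exposed by the standard V1 service.
        var server = Server.Connect(
            new EndpointAddress("http://localhost/StreamInsight/Default"));

        // The same Applications collection works identically against an
        // embedded server created with Server.Create().
        foreach (var app in server.Applications)
            Console.WriteLine(app.Key);
    }
}
```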
The management service is exposed as a standard WCF service endpoint. Making that service available to consumers involves a few steps:
A full discussion of this topic is given in the StreamInsight documentation, under Publishing and Connecting to the StreamInsight Server.
This is about as boilerplate as it gets: the standard utility function I use for registering and hosting the management service endpoint.
Given a Server object and a URL, this code creates and hosts a management service using the default security model. The trace object is an instance of the StreamInsightLog class that I wrote about in this blog post, as a generic wrapper around log4net. If you get an exception here, it’s usually a security or configuration issue; the next two steps cover the most common configuration and security problems.
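Since the original snippet shipped as an image, here is a sketch of what that utility function typically looks like, with the logging calls stripped out. The method name is my own; the WCF hosting pattern follows the StreamInsight documentation:

```csharp
using System.ServiceModel;
using Microsoft.ComplexEventProcessing;
using Microsoft.ComplexEventProcessing.ManagementService;

static class ManagementServiceHelper
{
    // Given a running Server and a URL, expose the management service
    // endpoint using the default security model.
    public static ServiceHost ExposeManagementService(Server server, string url)
    {
        var host = new ServiceHost(server.CreateManagementService());
        host.AddServiceEndpoint(
            typeof(IManagementService),
            new WSHttpBinding(SecurityMode.Message),
            url);
        host.Open();   // throws here if the URL ACL or config is wrong
        return host;
    }
}
```

The returned ServiceHost should be closed (or disposed) when the embedded server shuts down.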
In Windows 7 and Windows Server 2008, processes need explicit permission to bind to and expose HTTP endpoints. If, upon attempting to expose the management service, you receive an error message along the lines of:
HTTP could not register URL http://+:8001/. Your process does not have access rights to this namespace (see http://go.microsoft.com/fwlink/?LinkId=70353 for details).
You’ll need to configure your machine with the requisite permission (remember that this command must be executed from a command prompt with elevated privileges):
netsh http add urlacl url=http://+:8090/MyStreamInsightServer user=<domain\userid>
Replace the URL (including the port number) and the user account with your own information (including the service account if your application runs as a service).
For more information on this topic, including configuring for remote connections and XP/2003 machines, see Publishing and Connecting to the StreamInsight Server.
By default, any user that wants to connect to the management service needs to belong to the StreamInsightUsers$[Instance Name] group, typically the StreamInsightUsers$Default group. For more information on this group, see the StreamInsight Users Group section in the Installation (StreamInsight) section.
To add a new user account to the group:
net localgroup StreamInsightUsers$Default /ADD SOMEDOMAIN\someuser
Here’s the part I always forget: group membership is cached at login, so you’ll need to log out and log back in again.
To confirm that your group membership is updated:
whoami /groups | findstr StreamInsightUsers
We’ve created the service and exposed the endpoint, configured the appropriate network security, and added our user account to the right groups. The last step is to ensure that we can connect to the endpoint. The easiest way to do this is simply to start up the Event Flow Debugger and connect to the service. The detailed documentation on using the debugger is in Using the StreamInsight Event Flow Debugger. The quick steps are:
If the service endpoint is available, the debugger will display the list of applications available for that instance. If you receive the error “It is not possible to establish a connection with the Microsoft StreamInsight server”, click on the Details button to see the detailed error message.
In this case the detailed error was: “There was no endpoint listening at http://localhost/StreamInsight/Defaultxx that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details. The remote server returned an error: (404) Not Found.” This means one of two things that I missed:
An inactive endpoint can also manifest as: “The HTTP service located at http://localhost/StreamInsight/Default is too busy. The remote server returned an error: (503) Server Unavailable.”
If you encounter the error “SOAP security negotiation with ‘http://localhost/StreamInsight/Default’ for target ‘http://localhost/StreamInsight/Default’ failed. See inner exception for more details”, ensure that your user account is properly registered in the StreamInsightUsers$[Instance] group.
We’ll start by using the Event Flow Debugger to observe some basic diagnostic information, then move on to some basic PowerShell scripts that look at the same information. My next blog post will cover more advanced uses of PowerShell scripting against the management API to answer questions such as “which of my queries is consuming the most CPU?”
Let’s get started with the Event Flow Debugger:
A snapshot of the diagnostic view looks like:
Lots of great information in the diagnostic view; full documentation for the views and the meaning of the various properties can be reviewed in the Monitoring the StreamInsight Server and Queries page on MSDN.
Let’s use PowerShell to look up the same information (with the intent of leveraging PowerShell’s scripting abilities to perform custom views, filters, etc on top of the data).
This code snippet executes the basic steps required to work with the StreamInsight management API in PowerShell. This dynamically loads up the DLL, creates a server object, and enumerates the list of the applications (thus triggering a call to the remote server). If the remote server isn’t available (URL is wrong, etc – as per the steps above), the error message will look something like:
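The snippet itself was posted as an image; a sketch of the equivalent script follows (the assembly path and $serviceUrl value are assumptions that you’ll need to adjust for your environment):

```powershell
# Dynamically load the StreamInsight API assembly.
[System.Reflection.Assembly]::LoadFrom(
    "C:\Program Files\Microsoft StreamInsight 1.0\Bin\Microsoft.ComplexEventProcessing.dll")

# Connect to the remote management service.
$serviceUrl = "http://localhost/StreamInsight/Default"
$endpoint = New-Object System.ServiceModel.EndpointAddress($serviceUrl)
$server = [Microsoft.ComplexEventProcessing.Server]::Connect($endpoint)

# Enumerate the applications (this triggers the call to the remote server).
$server.Applications
```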
Exception has been thrown by the target of an invocation.
    + CategoryInfo          : NotSpecified: (:) [], TargetInvocationException
    + FullyQualifiedErrorId : System.Reflection.TargetInvocationException
I generated this particular error message by having an invalid $serviceUrl setting. Assuming that we’ve established a good connection, we should see the list of available applications (which in turn contain queries).
Key                    Value
---                    -----
OperationalAnalytics   Microsoft.ComplexEventProcessing.Appli...
In my case, I have an OperationalAnalytics application, which corresponds to the view in the Object Explorer. To get the same diagnostic view shown in the Event Flow Debugger, we retrieve it through the API. Here I’m going to cheat a bit by grabbing the query’s URI (the path to the query in the StreamInsight engine’s metadata store) from the Event Flow Debugger:
Pasting the name into Notepad shows the URI below; this can be passed to the GetDiagnosticView method to look up the diagnostic values shown in the Event Flow Debugger’s Show Diagnostics screen from the previous section.
cep:/Server/Application/OperationalAnalytics/Query/queueIpLookup
PS C:\Users\masimms> $server.GetDiagnosticView("cep:/Server/Application/OperationalAnalytics/Query/queueIpLookup")
Key Value
--- -----
QueryState Running
QueryStartTime 10/20/2010 10:01:07 PM
QueryCreationTime 10/20/2010 10:01:07 PM
QueryId 4503599627370497
QuerySystemInstance False
QueryInstanceGroupId 0
PublishedStreamId 4503599627370501
PublishedStreamEventShape Point
PublishedStreamEventType <EventType Name="Project.1.4" xmlns="h...
PublishedStreamProducerCount 1
PublishedStreamConsumerCount 0
StreamEventCount 0
StreamMemoryIncludingEvents 0
OperatorIndexEventCount 0
OperatorEventMemory 0
OperatorIndexMemory 0
OperatorTotalScheduledCount 0
OperatorTotalCpuUsage 0
PublishedStreamEventCount 0
PublishedStreamTotalEventCount 0
QueryTotalIncomingEventCount 0
QueryTotalConsumedEventCount 0
QueryTotalProducedEventCount 0
QueryTotalOutgoingEventCount 0
QueryTotalConsumedEventLatency 0
QueryTotalProducedEventLatency 0
QueryTotalOutgoingEventLatency 0
QueryLastProducedCtiTimestamp 1/1/0001 12:00:00 AM
In the next blog post, I’ll cover how to look this information up dynamically, walk through the metadata set, look for interesting conditions and patterns, and convert these result sets into native PowerShell objects.
I have been invited to present a session on Exploring AppFabric (Server & Cloud) at Avanade, London on the 28th October (next week). This is part of the regular Soltech After Hours meetings. This session will be a hands-on look at the components of both Server AppFabric and Azure AppFabric. We will […]
Pablo Castilla contacted me about his take on auto starting a workflow with IIS and AppFabric.
It looks interesting. I read the code but I have not tried it myself. I thought I would share it with you – I’m interested to know what the community thinks about this approach.
Also I’m thinking of starting an open chat with the WF4 team. The idea is 2 or 3 of us get together and respond to questions from Twitter, Facebook or LiveMeeting. The time zone problem is an interesting one. If we did it at 8am Pacific Time that would be 4pm UTC – great for Europe but 12:30am not so great for China.
We could alternate time for different weeks – one week doing a time good for Europe/Africa, another week doing a time good for Asia. Of course we will post recordings of the chat so you don’t have to listen live.
I’m interested to hear what you think about the idea.
SQL Server 2008 introduces many enhancements to Transact-SQL. Here is a list of some of those enhancements, with a brief description of each.
Team Foundation Error:
The cache file C:\Documents and Settings\username\Local Settings\Application Data\Microsoft\Team Foundation\3.0\Cache\VersionControl.config is not valid and cannot be loaded.
Error:
System.ArgumentException: The SqlParameter is already contained by another SqlParameterCollection
One of the key requirements in developing cloud-based applications is the ability to leverage existing on-premise assets by exposing them as web services. However, since most organizations are protected by firewalls, on-premise web services are typically not accessible to external clients running outside the organization’s firewall unless they are explicitly hosted in the DMZ. More often than not, hosting services in the DMZ is a cumbersome process. Azure AppFabric Service Bus provides the capability to extend the reach of on-premise web services to external clients securely, without having to host them in the DMZ. This blog describes how BizTalk Server 2010 and Azure AppFabric can come together to help enterprises build hybrid cloud-based applications.
The new ‘BizTalk Server 2010 AppFabric Connect for Services’ feature brings together the capabilities of BizTalk Server and Windows Azure AppFabric, enabling enterprises to extend the reach of their on-premise Line of Business (LOB) systems and BizTalk applications to the cloud. This is a new BizTalk Server 2010 feature and can be downloaded from http://go.microsoft.com/fwlink/?LinkID=204701.
Even with the advent of cloud platforms and cloud-based applications, much of the data for these applications still resides in on-premise LOB systems. More often than not, these applications also want to leverage existing on-premise applications. To build such hybrid applications, with components residing both on-premise and in the cloud, a secure mechanism for connecting an enterprise’s on-premise assets with those in the cloud is essential. While this is true for any application, it is even more so for integration applications. The following fictional scenario illustrates this:
Woodgrove Bank wants to build an online banking portal where its customers can view their bank or stock related information and trade their stocks. The data needed for this portal resides in on-premise LOB systems. The stock trading functionality is implemented using a BizTalk Server orchestration. The bank has also designed an ASP.NET based web portal and hosted it in Windows Azure. To enable communication between the cloud-based web portal and the on-premise assets, the bank exposes the on-premise LOB data and the BizTalk solution as WCF services with endpoints in the Azure AppFabric Service Bus.
Security considerations when exposing on-premise assets to the cloud:
Security is an important requirement when exposing on-premise assets to the cloud. Azure AppFabric Service Bus endpoints can be secured using the Azure AppFabric Access Control Service (ACS). A more detailed description of how to secure Service Bus endpoints is here. On top of this, the regular WCF security features, such as transport-level and message-level security, can be used to secure end-to-end communication between the client and the service.
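As a rough sketch of what this looks like in practice, a relay endpoint secured with ACS credentials can be declared in the service’s WCF configuration. The service namespace, names, and contract below are invented for illustration; the binding and behavior elements follow the Azure AppFabric SDK configuration schema:

```xml
<system.serviceModel>
  <services>
    <service name="Woodgrove.StockTradingService">
      <!-- Relay endpoint in the Service Bus; the namespace is an assumption -->
      <endpoint address="sb://woodgrove.servicebus.windows.net/StockTrading"
                binding="netTcpRelayBinding"
                contract="Woodgrove.IStockTrading"
                behaviorConfiguration="sbCredentials" />
    </service>
  </services>
  <behaviors>
    <endpointBehaviors>
      <behavior name="sbCredentials">
        <!-- Shared secret issued by the AppFabric Access Control Service -->
        <transportClientEndpointBehavior credentialType="SharedSecret">
          <clientCredentials>
            <sharedSecret issuerName="owner" issuerSecret="[your ACS key]" />
          </clientCredentials>
        </transportClientEndpointBehavior>
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>
```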
How to use this feature:
This section provides a quick walkthrough of how to expose your on-premise BizTalk orchestrations and LOB data as WCF services in the cloud. A more detailed tutorial can be accessed from here.
BizTalk Orchestrations:
Run BizTalk WCF Service Publishing Wizard
Choose BizTalk Orchestration(s) to publish
Extend the reach of the BizTalk Orchestration(s) to cloud
Configure Service Bus endpoints
LOB systems:
Run BizTalk WCF Adapter Service Development Wizard
Choose the LOB artifact(s) to publish
Extend the reach of the LOB artifact(s) to cloud
Configure Service Bus endpoints
Summary
As you saw from the above scenario, the AppFabric Connect for Services feature provides tooling enhancements that help you connect your on-premise artifacts with those in the cloud using the AppFabric Service Bus, helping you accelerate building applications on the Windows Azure platform. The feature is available from the download center (http://go.microsoft.com/fwlink/?LinkID=204701) for BizTalk Server 2010 customers. For any feedback or queries, leave a comment on this blog or email [email protected].
Harsh Shrimal
Program Manager, BizTalk Server Team