Visual Studio debugger hanging with CLR objects

I was attempting to debug a stored procedure that calls CLR functions (though the same story applies to any CLR object), and the debugger would simply hang.

There are two steps to get around this issue:

The first is to disable and re-enable CLR debugging. To do this, in Visual Studio right-click the SQL Server node and uncheck the Allow SQL/CLR Debugging option.

Then re-enable it; you will see the following dialog, where you should press Yes.

The second step is to disable processor affinity on multi-CPU machines.

In SQL Server Management Studio, right-click the server in the tree and go to Properties. On the Processors page, uncheck Automatically set processor affinity mask for all processors:
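If you prefer to script the affinity change rather than use the UI, the same setting can be flipped with sp_configure; a hedged sketch (the mask value 1 is purely illustrative, and 0 hands control back to SQL Server):

```sql
-- Show advanced options so the affinity mask setting is visible
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- 0 = let SQL Server manage processor affinity automatically;
-- a non-zero bitmask (e.g. 1) pins the server to specific CPUs
EXEC sp_configure 'affinity mask', 1;
RECONFIGURE;
```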

This worked for me. I wonder whether there is a hotfix for this?

BizUnit Step to Query HAT

I was refactoring an orchestration today and, to help test it, I wanted to query HAT to ensure the orchestration had completed successfully. BizUnit doesn’t have a built-in step to do this, so I created my own, which is easy to do thanks to the extensibility of BizUnit. I guess I could probably have used one of the BizUnit database steps and written some fiddly SQL, but a dedicated step makes this fairly reusable.

The XML to use my step is below:

<TestCase>
  <TestSetup>
  </TestSetup>
  <TestExecution>
    <TestStep
      assemblyPath="Acme.BizTalk.Testing.dll"
      typeName="Acme.BizTalk.Testing.BizUnit.HAT.OrchestrationCompletedQuery">
      <DurationToCheckSeconds>120</DurationToCheckSeconds>
      <ExpectedNoOrchestrations>1</ExpectedNoOrchestrations>
      <OrchestrationName>MyOrchestration</OrchestrationName>
      <FailIfLess>true</FailIfLess>
      <FailIfMore>false</FailIfMore>
      <HATConnectionString>server=.;database=biztalkdtadb;integrated security=sspi;</HATConnectionString>
    </TestStep>
  </TestExecution>
  <TestCleanup>
  </TestCleanup>
</TestCase>

Some key points to this XML are:

  • DurationToCheckSeconds is, as the name suggests, the period of time over which HAT will be checked to see if your orchestration has completed
  • ExpectedNoOrchestrations allows you to indicate how many instances you expect the query to find
  • OrchestrationName is the name of the orchestration you are looking for
  • FailIfLess indicates whether the step should throw an exception if it finds fewer than the expected number of orchestrations
  • FailIfMore indicates whether the step should throw an exception if it finds more than the expected number of orchestrations
  • HATConnectionString is the connection string for the HAT database

In my step I execute a query that checks for instances which started after the start of my BizUnit test (the step gets the start time from the BizUnit context).

In the example above I’m basically saying that I want to find at least one instance of my orchestration completing since my test began.

As we all know, HAT data sometimes takes a little while to become available. Depending on how you configure the settings above, my step will not necessarily wait the whole duration before confirming that the test was OK. For example, in my usage above it confirms that at least one instance is in HAT since my test began, but I don’t want to wait for ages ensuring that only one instance is there. Because my tests all run serially, I can assume that as long as I find one instance my test is good, and I can continue as soon as I find it.
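To make that waiting behaviour concrete, here is a minimal Python sketch of the polling logic described above. All the names here are mine, purely illustrative of the step’s semantics, and query_completed_count stands in for the real HAT query:

```python
import time

def wait_for_orchestrations(query_completed_count, expected, duration_seconds,
                            fail_if_less=True, fail_if_more=False,
                            poll_interval=0.05):
    """Poll until the expected number of completed instances is seen,
    or the duration runs out (mirrors DurationToCheckSeconds)."""
    deadline = time.time() + duration_seconds
    count = 0
    while time.time() < deadline:
        count = query_completed_count()
        # If we never fail on "too many", we can stop as soon as enough
        # instances have shown up -- no need to wait out the full duration.
        if count >= expected and not fail_if_more:
            break
        time.sleep(poll_interval)
    if fail_if_less and count < expected:
        raise AssertionError(f"found {count} instances, expected at least {expected}")
    if fail_if_more and count > expected:
        raise AssertionError(f"found {count} instances, expected at most {expected}")
    return count
```

The early exit when FailIfMore is false is what lets a serially-run test continue as soon as the expected instance turns up, instead of waiting the full duration.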

The code for the BizUnit step is available to download at the following link:

http://www.box.net/shared/647eo262x3

HTH

Mike

On Atomic Scope and Message Publishing

A few weeks back I worked on a process that looked something like this –

It was triggered by the scheduled task adapter and then used a SQL send port to call a stored procedure that returns a list of ’things’.
It needed to split the list into individual records and, for each ’thing’, start a new, different process through pub/sub (to avoid a binary dependency on the called process).

Fairly simple.

A lot has been said about the different ways to split messages, and I won’t repeat that discussion here. I would just say that initially I used a different approach: I used the SQL adapter in the initial, triggering, receive port and then a receive pipeline with an XmlDisassembler component to split the incoming message, so that each record was published individually, avoiding the need for a ’master process’. That backfired in my case, though. I quickly realised I’d be choking the server with the number of messages published and needed a way to throttle the execution. I played a bit with host throttling, but came to the conclusion that the best approach for me would be to throttle in a process, which is what I did.

And so, to make things interesting, and because I already had it all ready, I decided to call a pipeline from my process to split the message.

The first thing I realised, trying to take that approach, was that I had to change the type of the response message received from the SQL port to XmlDocument (an approach I generally dislike; I’m a sucker for strongly-typed-everything). My schema was configured as an envelope so that when I call the pipeline from my process it knows how to split the message correctly; but with that schema on the SQL port, BizTalk split the message too early for me, and I needed the whole message in the process first, which was no good. If, however, I removed the envelope definition from the schema, then when I called the pipeline directly from my process it wouldn’t know how to split the message, which was no good either. Nor could I have two schemas (BizTalk, as we all know, doesn’t like that at all, not without even more configuration). XmlDocument it is.

It then came back to me (in the form of a compile-time error :-)) that the pipeline variable has to exist in an atomic scope, so I added one to contain my pipeline variable. I then added the necessary loop, with its condition set to the MoveNext() method of the pipeline’s output messages, and in each iteration constructed a message using the GetCurrent() method; all standard stuff.

I would then set some context properties to route my message correctly and allow me to correlate the responses (I used a scatter-gather pattern in my master process), and published it to the message box.

What I noticed when testing my shiny new process was that all those sub-processes that were meant to start as a result of the messages published in my loop were delayed by quite a few minutes (6-8), which seemed completely unreasonable. So I embarked on a troubleshooting exercise, which resulted in that big “I should have thought of that!” moment.

While the send shape in my loop successfully completed its act of publishing the message in each iteration, moving my loop on to the next message and so on, because it sat in an atomic scope BizTalk would not commit the newly published messages to the message box database, allowing subscriptions to kick in, until the atomic scope finished; that is what allows it to roll back should something in the atomic scope fail.
What it meant for me, though, was that all the messages were still effectively published at once, which brought me back to square one (or minus one, actually, considering that the long delay caused by this approach left me even worse off than with my first debatch-in-pipeline approach).
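The effect is easy to mimic outside BizTalk; a minimal Python sketch (the class and method names are mine, purely illustrative) of why per-iteration sends inside an atomic scope still surface to subscribers all at once:

```python
class MessageBox:
    """Stand-in for the BizTalk MessageBox: subscribers only see committed messages."""
    def __init__(self):
        self.committed = []

class AtomicScope:
    """Publishes are buffered until the scope completes, so they can be rolled back."""
    def __init__(self, message_box):
        self.message_box = message_box
        self.pending = []

    def send(self, msg):
        # The send shape "completes" immediately from the orchestration's
        # point of view, but nothing is visible to subscribers yet.
        self.pending.append(msg)

    def complete(self):
        # Only now do all buffered messages hit the MessageBox -- at once.
        self.message_box.committed.extend(self.pending)
        self.pending = []

box = MessageBox()
scope = AtomicScope(box)
for record in ["thing-1", "thing-2", "thing-3"]:
    scope.send(record)
    # subscribers still see nothing at this point in the loop
scope.complete()  # all three messages become visible together
```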

And so I went back to the old and familiar approach of splitting the messages using XPath in the process, which allowed me to carefully control the publishing rate of messages for my process and throttle them as needed.

BizTalk WCF Custom Adapter – Developing WCF Custom Behaviours

As you sink your teeth into the depths of BizTalk 2006 R2/2009 and realise the potential
of integrating with the WCF Custom Adapter, a new world opens up.

Then at one point down the track you’ll want to tweak or customise what is being sent/received. For example, changing the shape of the XML transferred over an HTTP transport: maybe compression, maybe dropping the verbose XML notation (i.e. it takes 5KB to send over 30 chars).

So you’ll then move onto something called a WCF Custom Behaviour.

Two great whitepapers are:

Part 1- http://msdn.microsoft.com/en-us/library/cc952299.aspx

Part 2- http://msdn.microsoft.com/en-us/library/dd379134.aspx

New Webcast on MGrammar…

I’ve just added a new webcast to BloggersGuides.net, on an introduction to MGrammar. I have another one in the pipeline which I hope to get out next week that will look at a more real-world example of using MGrammar.
I found MGrammar confusing at first, and thought it would be one of the least used features of Oslo, but after having had the time to experiment with it a bit and get to know how it works I can think of a lot of scenarios where I will consider using it. At the MVP summit I met up with a few of the other MVPs who were finding some very creative uses for MGrammar.
So it seems you don’t need to have a big gray beard to develop a programming language anymore…
The webcast is here.
If you want to try it at home, here is the input…
Alan will do a presentation on Dublin at 16:00.
Johan will do a demo of Oslo at 17:00.
Dag will do a lab on Azure at 18:00.
And here is the MGrammar…
module BloggersGuides.Demos
{
    language DemoLang
    {
        syntax Main = Session+;
        syntax Session =
            name:NameToken "will do a"
            type:SessionTypeToken
            OnOfToken
            subject:SubjectToken "at"
            time:TimeToken "."
            => Session { name, subject, time, type };
        token OnOfToken = "on" | "of";
        token NameToken = ('A'..'Z' | 'a'..'z')+;
        token SessionTypeToken = "presentation" | "demo" | "lab";
        token SubjectToken = ('A'..'Z' | 'a'..'z' | '0'..'9')+;
        token TimeToken = (
            "00" | "01" | "02" | "03" | "04" | "05" | "06" | "07" | "08" | "09" |
            "10" | "11" | "12" | "13" | "14" | "15" | "16" | "17" | "18" | "19" |
            "20" | "21" | "22" | "23"
        ) ':' '0'..'5' '0'..'9';
        interleave Whitespace = " " | "\r" | "\n";
    }
}
Have fun!

I will be speaking in Stockholm on Oslo

[Even though it may have been more appropriate to speak on Oslo in Oslo? :)]

Next month I will be speaking at the Cornerstone Developer Summit in Stockholm, Sweden. I will be doing 2 sessions, one on M, the other on the new wave of Microsoft SOA offerings (Oslo, BizTalk, Dublin, Azure, .NET 4, et al) and how things fit together from an architectural perspective.

I’m looking forward to this event, I hear it’s a top-notch event and always a lot of fun, and I’m looking forward to catching up with a couple of my old friends Julie Lerman and Scott Bellware. I’m also trying to get to the local BizTalk user group where MVP Alan Smith and I will present a BizTalk Best Practices session.

To find out more, or to sign up for the conference, visit:

http://www.cornerstone.se/sv/ExpertZone/developersummit/2009/Startsida/Arkitektur/

Technorati Tags: Oslo,BizTalk,Dublin,Azure,WCF,WF

What’s new with web services in Silverlight 3 Beta

Cross-posted from the Silverlight Web Services Team Blog.  


Silverlight 3 beta comes with a set of exciting web services features that address key customer requests.


Binary message encoding


In Silverlight 2 the only supported binding was BasicHttpBinding, which encodes outgoing messages as text and sends them over an HTTP transport. This binding is great for interoperability with SOAP 1.1 services and is also easily debuggable since messages can be viewed in plain text on the wire using HTTP debugging tools such as Fiddler.


However, as Silverlight applications go into production and grow to scale, service developers start getting concerned with the cost of hosting services. Two things in particular that we care about:



  • Increased server throughput – more clients can be connected to a server, which means fewer servers need to be purchased. 

  • Decreased message size – smaller messages exchanged on the wire mean lower bandwidth bills.

Silverlight 3 introduces a binary message encoder, which produces significant improvements in both of the above indicators. A follow-up post is coming with some specific data on the improvements that can be expected.


Binary encoding is implemented as a custom binding; there is no out-of-the-box binary binding:


<bindings>
   <customBinding>
      <binding name="binaryHttpBinding">
         <binaryMessageEncoding />
         <httpTransport />
      </binding>
   </customBinding>
</bindings>

<endpoint address="" binding="customBinding" bindingConfiguration="binaryHttpBinding" contract="Service" />



The BinaryMessageEncodingBindingElement can be used as part of any custom binding and so it composes easily to create things like a binary duplex binding.
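As a sketch of that composition, a server-side custom binding for duplex-over-binary might look something like this. This is an assumption-laden example: it presumes the Silverlight polling duplex binding element has been registered as a binding element extension named pollingDuplex, and the names are from my own setup rather than anything canonical:

```xml
<customBinding>
   <binding name="binaryDuplexBinding">
      <pollingDuplex />
      <binaryMessageEncoding />
      <httpTransport />
   </binding>
</customBinding>
```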


The binary encoder offers performance gains over the text encoder, and there should never be any regressions. This is why binary is the new default in backend service scenarios, such as when the Silverlight-enabled WCF Service item template is used; the template has been modified to use binary. The one place binary should not be used is interop (when the client is talking to a non-WCF service), since binary is a WCF-specific encoding: please continue to use BasicHttpBinding with text encoding in those scenarios (for example, when accessing ASMX services).


Duplex object model simplification 


Duplex is an innovative Silverlight 2 feature which allows the service to send data to the client without the client manually polling for the data (“smart” polling still occurs on the network layer, but the user does not need to know). However there were two significant limitations in the Silverlight 2 duplex object model:



  • Channel programming had to be used and

  • Serialization was not supported so a Message programming model had to be used.

Silverlight 3 lifts these restrictions and introduces Add Service Reference support for duplex services. Familiar-looking proxies are now generated, greatly reducing the amount of Silverlight code that is needed to access a duplex service. A simple stock ticker client implementation, which previously took 203 lines of code, can now be reduced to a mere 48 lines of code, a 76% reduction in code size. Not to mention that the channel-based code was complex and very error-prone due to its use of async patterns. Here is a snippet showing the crux of the new object model, in the context of the stock ticker example:


private void Button_Click(object sender, System.Windows.RoutedEventArgs e)
{
    ServiceClient proxy = new ServiceClient(binding, address);

    proxy.ReceiveCallbackCompleted += new EventHandler<ReceiveCallbackCompletedEventArgs>(proxy_ReceiveCallbackCompleted);
    proxy.StartAsync(symbol.Text);
}

void proxy_ReceiveCallbackCompleted(object sender, ReceiveCallbackCompletedEventArgs e)
{
    if (e.Error == null)
    {
        price.Text = e.price.ToString();
    }
}



Note that receiving the callback from the server is now just a matter of attaching a callback to an event. Also note the fact that we are working with CLR types and not Message objects, so serialization is now enabled. We have updated our documentation with a walkthrough of how to use the new object model. In addition, Eugene’s duplex chat server implementation, which has proven very popular, has also been updated with the new OM.


Faults support


In Silverlight 2, if an unexpected exception occurred in the service, the fault would not be propagated to the Silverlight client. Instead of getting the exception propagated to the user, Silverlight would throw an unhelpful CommunicationException which carries no useful information. There were two reasons for this: (1) faults are returned with a 500 status code, and the browser networking stack prevents Silverlight from reading the body of such a response, and (2) Silverlight did not support the necessary client-side logic to convert the fault message into an exception that can be surfaced to the user. These limitations made it very difficult to debug services from Silverlight.


In Silverlight 3 Beta limitation (1) is unfortunately still present. To work around this issue our documentation provides a WCF endpoint behavior, which can be applied to your WCF service to switch the response code from 500 to 200. With this response code the message will be accessible to Silverlight and we can address limitation (2). In Silverlight 3, we have added the necessary client-side OM to surface faults to the user. Look out for helpful FaultException and FaultException<ExceptionDetail> exceptions which will help you debug your service. Also please see the documentation page linked earlier for a full description of the faults object model in Silverlight.


New security mode 


A common scheme used to secure services for use by Silverlight clients is browser-based authentication. However, browser-based authentication is not safe to use if your service is accessible from any domain via a cross-domain policy file. This would expose your service to CSRF-type attacks, where cached browser credentials can be used by malicious apps to access your secure service without the user’s knowledge.


Silverlight 3 introduces a new security mode called TransportSecurityWithMessageCredential to address this configuration. In this mode, the credentials are included in every outgoing message to the service, and the service verifies those credentials on the SOAP layer. However since the credentials are in plain text inside the message the transport needs to be secure so we use HTTPS.
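As a hedged sketch of what this looks like on the service side, a binding configured for message credentials over HTTPS might resemble the following (the binding name and the choice of UserName credentials are my own illustrative assumptions):

```xml
<basicHttpBinding>
   <binding name="secureBinding">
      <security mode="TransportWithMessageCredential">
         <message clientCredentialType="UserName" />
      </security>
   </binding>
</basicHttpBinding>
```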


A more detailed walkthrough of valid Silverlight security configurations will follow.


Command-line proxy generation


In Silverlight 2, Add Service Reference in Visual Studio was the only way to generate proxies for Silverlight clients. In Silverlight 3 we are introducing a command-line tool called slsvcutil.exe, which allows customized command-line proxy generation. Using the tool, proxies can now be generated as part of your build process for greater robustness. The slsvcutil.exe tool is fully described in this documentation topic.
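For example, a proxy-generation step in a build script might look something like this; the service URL and the output-directory switch here are illustrative assumptions on my part, so check the documentation topic for the exact options:

```
slsvcutil.exe http://localhost/StockService.svc?wsdl /directory:GeneratedProxies
```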


Thanks for reading through this, and please stay tuned for some in-depth posts about these new features.


Yavor Georgiev
Program Manager, Connected Framework team

70-241

Today I took the BizTalk 2006 R2 exam and passed! This is the third BizTalk exam (BizTalk 2k4, BizTalk 2k6 and BizTalk 2k6r2) I have taken.
There is a Non-Disclosure Agreement (which I didn’t read) presented to the candidates before the exam starts, so I am probably not allowed to post a brain dump here.
Anyway […]