SQL Adapter Wizard Queries Twice

When you want to receive XML data from a stored procedure that contains both a SELECT and an UPDATE statement (to mark records as processed), you can run into problems generating the SQL Adapter XSD.


When you reach the last window in the wizard, you may get the error message "Failed to execute query".


What we saw in SQL Profiler is that the Generate SQL Adapter wizard executes the stored procedure twice. Oops. The first call comes from the application name "SQLClient provider" and the second from "Visual Studio .NET 2003".


The first execution returns the XMLDATA (XML schema), but also performs the stored procedure's UPDATE. The second run executes the stored procedure again, this time without returning your XMLDATA schema.


So disable the UPDATE in your stored procedure and generate your XML schema; after that, re-enable the UPDATE and remove the XMLDATA option.


Good to know that the wizard runs the stored procedure twice…

IRuleSetTrackingInterceptor: tracing rules execution

I don’t know about you, but I find the best way to understand how something works is to read the code and then step through its execution with Visual Studio’s trusty Locals and Immediate windows. A friend of mine calls this “developer documentation”, and whilst I certainly wouldn’t go as far as saying it’s a substitute for good docs (and by that Chris, I mean “living models”!), I always end up doing it during any maintenance code cycles (unintended consequences and all that).


 


Unfortunately the execution of a BizTalk rule policy isn’t quite as straightforward. It’s not a set of simple sequential steps that you can step through. If it were, it would probably be far less useful as a rules engine – and certainly far less performant. Instead it implements a Rete algorithm for small rule sets and a proprietary one for large rule sets, according to the man himself. Anyway, the long and the short of this is that rules processing is done behind closed doors, which makes for a tough time understanding what happened while the policy executed. I’m sure a good understanding of the Rete process would help – Dummies guide to Rete, anyone?


 


The Microsoft Business Rules Composer (BRC) does allow rule “testing” via its built-in tracking interceptor, DebugTrackingInterceptor. The following is sample output produced by the BRC for the LoanProcessing policy (from the Loans sample in the SDK).


 


RULE ENGINE TRACE for RULESET: LoanProcessing 5/19/2005 12:46:13 PM


 


FACT ACTIVITY 5/19/2005 12:46:13 PM


Rule Engine Instance Identifier: fb330399-15f0-4dc7-9137-4463a32f580e


Ruleset Name: LoanProcessing


Operation: Assert


Object Type: DataConnection:Northwind:CustInfo


Object Instance Identifier: 782


 


FACT ACTIVITY 5/19/2005 12:46:13 PM


Rule Engine Instance Identifier: fb330399-15f0-4dc7-9137-4463a32f580e


Ruleset Name: LoanProcessing


Operation: Assert


Object Type: TypedXmlDocument:Microsoft.Samples.BizTalk.LoansProcessor.Case


Object Instance Identifier: 778


 


FACT ACTIVITY 5/19/2005 12:46:13 PM


Rule Engine Instance Identifier: fb330399-15f0-4dc7-9137-4463a32f580e


Ruleset Name: LoanProcessing


Operation: Assert


Object Type: TypedXmlDocument:Microsoft.Samples.BizTalk.LoansProcessor.Case:Root


Object Instance Identifier: 777


 


CONDITION EVALUATION TEST (MATCH) 5/19/2005 12:46:13 PM


Rule Engine Instance Identifier: fb330399-15f0-4dc7-9137-4463a32f580e


Ruleset Name: LoanProcessing


Test Expression: NOT(TypedXmlDocument:Microsoft.Samples.BizTalk.LoansProcessor.Case:Root.Income/BasicSalary > 0)


Left Operand Value: 12


Right Operand Value: 0


Test Result: False


 


CONDITION EVALUATION TEST (MATCH) 5/19/2005 12:46:13 PM


Rule Engine Instance Identifier: fb330399-15f0-4dc7-9137-4463a32f580e


Ruleset Name: LoanProcessing


Test Expression: NOT(TypedXmlDocument:Microsoft.Samples.BizTalk.LoansProcessor.Case:Root.Income/OtherIncome > 0)


Left Operand Value: 10


Right Operand Value: 0


Test Result: False


 


CONDITION EVALUATION TEST (MATCH) 5/19/2005 12:46:13 PM


Rule Engine Instance Identifier: fb330399-15f0-4dc7-9137-4463a32f580e


Ruleset Name: LoanProcessing


Test Expression: TypedXmlDocument:Microsoft.Samples.BizTalk.LoansProcessor.Case:Root.PlaceOfResidence/TimeInMonths >= 3


Left Operand Value: 15


Right Operand Value: 3


Test Result: True


[.. cut for brevity ..]


 


DebugTrackingInterceptor implements the IRuleSetTrackingInterceptor interface, which you can implement yourself to produce your own custom debugging/tracing of rule processing.


 


Most of the information is there (although I’m still troubled that the tracing of the process doesn’t allow you to examine the facts, i.e. the data rows, XML docs, etc.), but it’s not exactly easy to see what’s happened. Understanding the match–conflict resolution–action cycle helps considerably.


 


If you are executing your policy outside of BizTalk, or in a component consumed within BizTalk, you can specify an alternative IRuleSetTrackingInterceptor. This has the advantage of allowing you to step through the rule processing if you wish, and also lets you view fact details (through the facts you pass to the policy). The following code demonstrates how to invoke your own MyInterceptorClass().


 


xmlDocument = IncomingXMLMessage.XMLCase;
typedXmlDocument = new Microsoft.RuleEngine.TypedXmlDocument("Microsoft.Samples.BizTalk.LoansProcessor.Case", xmlDocument);
policy = new Microsoft.RuleEngine.Policy("LoanProcessing");
policy.Execute(typedXmlDocument, new MyInterceptorClass());
OutgoingXMLMessage.XMLCase = xmlDocument;
policy.Dispose();
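To make the benefit concrete, here is a minimal, language-agnostic sketch (in Python, with purely hypothetical names – this is not the Microsoft.RuleEngine API) of what a custom interceptor buys you: because you supply the tracker, the engine can hand you the actual fact objects, not just their type names and instance identifiers.

```python
class ConsoleTracker:
    """Custom tracker: receives engine events plus the fact itself."""
    def __init__(self):
        self.log = []

    def on_fact_asserted(self, fact):
        # Unlike a debug-only tracker, we hold the real object,
        # so we can dump its contents, not just its type and id.
        self.log.append(f"ASSERT {type(fact).__name__}: {fact!r}")

    def on_condition_tested(self, expression, result):
        self.log.append(f"TEST {expression} -> {result}")


class TinyEngine:
    """Toy rule engine that notifies the tracker at each stage."""
    def __init__(self, tracker):
        self.tracker = tracker
        self.facts = []

    def assert_fact(self, fact):
        self.facts.append(fact)
        self.tracker.on_fact_asserted(fact)

    def test(self, expression, predicate):
        result = all(predicate(f) for f in self.facts)
        self.tracker.on_condition_tested(expression, result)
        return result


tracker = ConsoleTracker()
engine = TinyEngine(tracker)
engine.assert_fact({"BasicSalary": 12})
engine.test("BasicSalary > 0", lambda f: f["BasicSalary"] > 0)
print("\n".join(tracker.log))
```

The point of the sketch is simply the shape of the callback contract: the engine drives, the interceptor observes, and whoever owns the interceptor decides how much of each fact to record.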


 


Once you understand the various stages of the rule engine, you soon yearn for a better way to visualise a complex run of rules. To this end I started thinking about how I’d like to see what was happening. In the past I’ve used UML sequence diagrams to show how messages are sent between objects, and also object lifetimes.


 


Initially, I thought that each fact would be “constructed” by assertion and “destructed” by retraction. The same would happen with rules placed into and out of the agenda. This is roughly how it would look:


 


 


The problem with this approach is that it makes for a very complex picture when many facts are asserted and many rules get passed onto the agenda. In the end I settled for a “swim lanes” type approach, separating each phase (facts asserting/retracting, condition matching, adding to agenda and firing actions) into a lane and then just processing sequentially. I think it works quite well; you can see the loans processing sample output here.


 


 


The TrackingInterceptor also outputs to XML. If anyone’s interested in a copy of the source/component you can drop me a note here.


 


After my experiences I have the following requests for Microsoft:


 


Request 1: Allow the BRC to use a different tracking interceptor.


Request 2: Allow the tracking interceptor to get hold of the full facts, i.e. let me dump out the XML doc.


Request 3: Allow the “Call Rules” shape to specify a tracking interceptor.

The Death Knell for .NET Remoting

We attended a .NET user group last night, where the .NET evangelist gave us quite an in-depth overview of Web Service Extensions 2.0 and 3.0, the history of the WS-I (Web Services Interoperability Organization – the guys who come up with the standards), and a sneak preview of Indigo.


Indigo looked very cool – quite a nice vision by Microsoft: three lines of declarative code to enable queuing, transactions and security, i.e. COM+, MSMQ and WS extensions rolled into one. Not to mention the x number of lines of configuration XML that one smart developer recognised would need to be in place… it looks like Microsoft hasn’t been able to come up with mind-reading software; the instructions still need to sit somewhere. Indigo should add a lot of value and ease of implementation to cross-machine/process-boundary method calls.


How Indigo will integrate with BizTalk was also touched on: Indigo handles the message transport, with BizTalk sitting atop it, orchestrating the integration logic. See my post entitled The Future of BizTalk.


The big surprise, however, came when we got to the section entitled “How should I make my applications Indigo ready”. There were basically two points here:


1)      Make sure your applications are loosely coupled and don’t expose any implementation logic outside their interfaces.


2)      Don’t use .NET Remoting, as Indigo won’t support it (but will provide a replacement).


The second point brought a few gasps and giggles from the crowd present. When asked, the evangelist wasn’t sure .NET Remoting would be supported in future versions of the framework. The feeling I got from him was that Microsoft has decided to move away from this technology, and that the System.Runtime.Remoting namespace is soon to be unsupported, defunct and absent from future versions of the framework. So you ask, what should we use until Indigo is released? Well, MSMQ or Web Services was the answer.


R. Addis

Future of BizTalk

For those of you that attended the training and Tech Ed, you may remember there was some confusion over the roles of BizTalk and the messaging engine, Indigo, built into Longhorn (Microsoft’s future operating system, due for release in 2006).


I found this article, based on a presentation by Scott Woodgate (a New Zealander and the godfather of BizTalk), that tries to clear this up. Well, it looks like good news for Indigo and good news for the future of BizTalk. Unfortunately, I feel that backward compatibility will be a problem, given the large amount of underlying architectural change, but it’s still too soon to be sure.


The main points:



  • From a BizTalk perspective, Indigo is the vehicle that will provide secure, reliable transacted services (over more than just http).

  • Indigo is an API; BizTalk is a set of tools. Indigo will be natively integrated into the next version of BizTalk after BizTalk 2004. The demos showed a prototype Indigo adapter that worked with BizTalk 2004.

  • BizTalk has been achieving reliable services through the BizTalk Framework 2. In future this will be deprecated and replaced with WS-ReliableMessaging, which will be provided by Indigo.

  • BizTalk will use the security model built into Indigo to enable WS-Security support for Username tokens and X509 certificates that can be easily configured through attributes, configuration files and policy.

The presentation helped me understand that Indigo is the plumbing. A lot of business functionality will require the use of tools that take advantage of that plumbing, such as BizTalk. So I was interested a few days later to see an announcement that the BizTalk Orchestration will ship as part of Indigo in Longhorn. As the article points out, BizTalk seems to debut features that eventually become part of the plumbing.


This should shore up business confidence in BizTalk’s future, given Microsoft’s commitment to this product.


R Addis 24/12/2004


Missing Vocabulary API


Has anybody seen a Vocabulary API for the rule engine? No? That’s probably because it seems to either be on its way out or never made it in the first place. According to the BizTalk SDK, “The following objects are exposed by Microsoft® BizTalk® Server 2004, but are not used in BizTalk Server programming.” These include the following from the Microsoft.RuleEngine namespace:



  • Vocabulary

  • VocabularyDefinition

  • VocabularyDefinitionDictionary

  • VocabularyDictionary

  • VocabularyInfo

  • VocabularyInfoCollection

  • VocabularyLink

All under the heading of “Unsupported BizTalk Classes”. So, I guess that means not to use them! Our guys are in the middle of doing a BizTalk proof of concept with Microsoft at Reading, so I hope to get an answer on this!

Real asynchronous vs. Simulated asynchronous

Abstract: In the asynchronous world, we can talk about real async and simulated async. Each has its own pros and cons. Let’s look at a simplified sample of each case.


Sample scenario – let’s assume two systems, A and B:
   1. A sends a message to B.
   2. B processes the request from A.
   3. B returns a response to A.


Constraint: The processing of the request (step 2) takes some unpredictable amount of time, so we cannot afford to have A hold an open connection waiting for the response from B at step 3. We need an asynchronous model, which can be either real async or simulated async.


Real Async
In a real async scenario all the communications are one-way, fire-and-forget. A sends the request to B and closes the connection. Once the processing is finished, B starts a new connection to A and sends a new message: the response.


The characteristics are:



  • Each message goes in a true one-way communication.
  • Both client and server must implement listeners – from the communications point of view, both are clients and servers.
  • Both A and B must be aware of the other system’s endpoint.
  • Bandwidth and CPU are optimized, and there are no blocking points.

Here is a sample picture. Arrows show who starts the communications:


Probably the most important point is the second one: both A and B are clients and servers (or consumers and providers). Both systems must implement a listener or message sink. This is not suitable in many cases, such as when A is a client app. So we can go for simulated async.
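To illustrate that both-sides-are-listeners point, here is a minimal sketch (in Python, with made-up names – no real transport involved) of real async: every message is one-way, so each endpoint plays both the client role (send) and the server role (listen), and each must know the other's endpoint.

```python
class Endpoint:
    """In real async, both A and B are simultaneously client and server."""
    def __init__(self, name):
        self.name = name
        self.received = []

    def listen(self, message):
        # Server role: accept an inbound one-way message.
        self.received.append(message)

    def send(self, other, message):
        # Client role: fire and forget, then "close the connection".
        other.listen(message)


a, b = Endpoint("A"), Endpoint("B")
a.send(b, {"request": "loan-case"})   # step 1: A -> B, connection closed
response = {"response": "approved"}   # step 2: B processes (takes a while)
b.send(a, response)                   # step 3: B opens a NEW connection to A
print(a.received)
```

Note that A never waits: the response arrives only because A exposes its own listener, which is exactly the burden that makes this model unsuitable for many client apps.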


 


Simulated Async
In the simulated approach, almost all the communications are still one-way, but B never starts a new connection; instead, B just makes the response available for A. It’s A’s responsibility to get the response by doing some polling.


The characteristics are:



  • All the communications are started by A. B does not deal with communications issues. 
  • B is not even aware of A. If there are many As, B does not need to know.
  • Easy to implement. Or at least, easier than Real Async.
  • Bandwidth overhead, because of the polling, as well as some CPU consumption on A.

Here is the sample picture, modified to show simulated async via polling. Arrows show who starts the communications:


In this case, the response is not sent (PUT); it’s retrieved (GET), so it’s not a one-way communication.


These two models can be extended endlessly with many variations, but I think these are the two most basic ones.
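The simulated-async variant can be sketched just as briefly (again in Python, with hypothetical names and an in-process dict standing in for B's response store): B parks the response under a correlation id, and A polls (GET) until it shows up.

```python
import time

# B's "outbox": correlation id -> finished response.
mailbox = {}

def system_b_submit(request):
    """B accepts the request and returns a ticket immediately (one-way).

    In reality B would process the request asynchronously over some
    unpredictable time; here we store the finished response straight
    away to keep the sketch short.
    """
    ticket = request["id"]
    mailbox[ticket] = {"id": ticket, "result": request["value"] * 2}
    return ticket

def system_a_poll(ticket, interval=0.01, attempts=100):
    """A polls B's mailbox until the response is available."""
    for _ in range(attempts):
        if ticket in mailbox:
            return mailbox.pop(ticket)   # retrieve (GET), not pushed (PUT)
        time.sleep(interval)
    raise TimeoutError(f"no response for ticket {ticket}")

ticket = system_a_submit = system_b_submit({"id": 1, "value": 21})
response = system_a_poll(ticket)
print(response["result"])  # 42
```

The polling loop is where the bandwidth and CPU overhead mentioned above comes from: A pays for every empty poll, which is the price of B never having to know A's address.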



 

Adding External Tools in Visual Studio for Installing/Removing assemblies to/from the GAC

First, open Visual Studio and go to Tools in the menu bar; towards the bottom of the list you’ll see External Tools (click on that).

 

Once that is open you will see a list of Menu Contents; to the right you will see Add and Delete buttons. Click Add, and a [New Tool 1] entry will appear at the bottom of the list.

 


  • Fill out the Title to your liking. For this sample I used ‘Install to GAC’ and ‘Remove from GAC’.
  • For the Command field, point to gacutil.exe (by default in C:\Windows\Microsoft.NET\Framework\(latest version you’re compiling with)\gacutil.exe).
  • For VS 2005, gacutil.exe is by default at "C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\gacutil.exe".
  • Arguments: -i $(TargetPath) – this is a Visual Studio built-in macro.
    Use -u $(TargetName) for the removal tool.


  • I also prefer to check Use Output window, so I can see the results of the gacutil output.

 

Now, once you have that set up, you can just select the project (in Solution Explorer), go to Tools, and run ‘Install to GAC’ or ‘Remove from GAC’ without going to the command prompt.

 

      -B

Forward Chaining ?

The BizTalk 2004 rules execution engine is “a discrimination network-based forward-chaining inference engine designed to optimize in-memory operation”. Apparently.


What I understand this to mean is that it can arrive at a decision based on reasoning over facts against conditions (inference), and that it reacts to new facts being “asserted” in a way which means that only the rules which are relevant are re-examined (forward chaining). Facts get put into working memory. This triggers rules whose conditions match the new data. These rules then perform their actions. The actions may add new data/facts to memory, thus triggering more rules. And so on. Wikipedia has a pretty good summary.


 


Whilst this sounds pretty academic, it’s also important. Without understanding this, you can get yourself in a real buggers muddle – I’m proof of this! The rules engine has three stages of execution. The first stage is “matching” – if you’ve done some tracing of rules execution in the BRC, this stuff will sound familiar. This stage checks the rules’ “conditions” against the current “in memory” facts. These facts being loaded into memory are the “asserts” you see in the debug output. Once a rule’s condition evaluates as true, the rule gets added to the “agenda” (Agenda Update in the trace output). This is a sort of wait area until the engine has been through all the rules in the policy. The next stage is “Conflict Resolution” (don’t you just love that!). This is where the order of rule firing is determined, based upon the priority you have set for each rule. In the final “Action” stage, the actions of the rules are fired.
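To make the match / conflict resolution / action cycle concrete, here is a toy forward-chaining loop in Python. This is an illustration of the general idea only – not the BizTalk algorithm, and all names are invented. Each cycle matches conditions against working memory, picks the highest-priority rule from the agenda, and fires its action, which may assert new facts and so trigger another match phase.

```python
def run(rules, facts):
    """Tiny forward-chaining loop: match -> conflict resolution -> action."""
    fired = []
    while True:
        # Match phase: rules whose condition holds against working memory
        # (skipping rules that have already fired, a crude refraction check).
        agenda = [r for r in rules
                  if r["name"] not in fired and r["condition"](facts)]
        if not agenda:
            return facts, fired
        # Conflict resolution: highest priority fires first.
        agenda.sort(key=lambda r: -r["priority"])
        rule = agenda[0]
        fired.append(rule["name"])
        # Action phase: may assert new facts, causing the loop to re-match.
        rule["action"](facts)


rules = [
    {"name": "income-ok", "priority": 10,
     "condition": lambda f: f.get("BasicSalary", 0) > 0,
     "action": lambda f: f.update(IncomeValid=True)},
    {"name": "approve", "priority": 0,
     "condition": lambda f: f.get("IncomeValid", False),
     "action": lambda f: f.update(Status="Approved")},
]

facts, fired = run(rules, {"BasicSalary": 12})
print(fired)  # ['income-ok', 'approve']
```

Note that only one rule fires per cycle before re-matching, which mirrors the behaviour described in the MSDN quote below about the agenda being re-evaluated between firings: the “approve” rule only becomes eligible because “income-ok” asserted the IncomeValid fact.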


 


What’s important to remember here is that the “Action” stage could assert some more facts into memory. So what? Well to quote MSDN;


 


“Note that rule actions can assert new facts into the rule engine, which causes the cycle to continue. This is also known as forward chaining. It is important to note that the algorithm never pre-empts the currently executing rule. All actions for the rule that is currently firing will be executed before the match phase is repeated. However, other rules on the agenda will not be fired before the match phase begins again. The match phase may cause those rules on the agenda to be removed from the agenda before they ever fire. “


 


This is essentially what you would expect to happen I guess, but as most of the rules processing is done “behind closed doors” it can be hard to get a handle on what’s causing this type of effect. The tracing/debugging leaves a little to be desired. That’s where I’m off to next …

Optimize Thyself

OK, so tell me we haven’t all done this at least once. We come across a way to improve our output efficiency, and at the same time to enforce good design and standards – we figure we should provide the rest of the team with a tool that can help them achieve the same. We feel, for a small and fleeting moment, that warm glow.


 


Until someone comes along and points out a far better implementation of your optimisation, and delivers up a tool that’s more extensible, better integrated and (more importantly) finished. Don’t you just hate that?


 


Well, you shouldn’t, and I don’t anymore. Bill Jones makes a good case for why you should be looking at CodeSmith 3.0 (Eric’s site was temporarily down when this was posted).


 


I found myself frequently writing the same CRUD code for my data layer objects. After looking at some great DAL generators (nTierGen.Net from Gavin Joyce is in my opinion one of the best), I threw together a DAL generator based around a code templating system. CodeSmith takes that idea and implements it beautifully. Really, take a look!

Resurfacing from Deep Dive

I’ve just attended the latest of the BizTalk Deep Dive training courses in the UK, which Quick Learn have developed, and are running with a special invite to Microsoft Partners. (I was lucky enough to get a place on one of the free sessions.)

The “entrance exam” for the course ensured all the delegates had a solid grasp of most of the product, and brought with them real world experience of BizTalk development. Ensuring this high-level of expertise on the course kept the discussions and questions at the appropriate level; also if you’re new to the product, you would quickly fall over in the hands-on sections.


The course coverage was broad as well as deep, going through the SharePoint, WSE2, and SQL adapters, and also looking at Host Integration Server and connecting to a DB2 database. Quite a lot of this stuff had been on my “to-do” list for months, and it was good to get a look at these areas of BizTalk that I had not touched yet.

One highlight for me was the in class discussions around complex aspects of the orchestration and messaging engine. These often went beyond the depth of the course notes, and I picked up a lot of info that’s just not documented or, as yet, blogged about. John Callaway, the instructor, has spent a lot of time with the key members of the BizTalk development team, and was able to answer almost all the questions we could throw at him, (usually someone on the course could chip in with some insight into the really tough ones).

If you are making the jump from intermediate to advanced BizTalk development, you should definitely book yourself a place on this course.

I also got the chance to grab a beer with one of the BizTalk bloggers, Christof Claessens, who was attending another training course at the centre. It’s the first time I’ve met one of the contributors face-to-face, and it was good to be able to put a face to the name. I hope to get the chance to meet all the contributors to the guide – one down, thirty to go…