Trying *Real* Contract First Development With BizTalk Server

A while back on my old MSDN blog, I demonstrated the concept of “contract first” development in BizTalk through the publishing of schema-only web services using the Web Services Publishing Wizard.  However, Paul Petrov rightly pointed out later that my summary didn’t truly reflect a contract-first development style.
Recently, my manager asked me about contract-first […]

The Long Tail by Chris Anderson

Long Tail, The, Revised and Updated Edition: Why the Future of Business is Selling Less of More by Chris Anderson – Hyperion, 2006/2008,  267 p., ISBN 978-1-4013-0966-4


This book is almost a classic by now, so if you have not heard of it, at least briefly going through its pages may be a great idea. That, though, is not the reason I decided to write about it.


The key idea of the book is that in certain markets – where the cost of production is low, the cost of delivery is low, and the cost of the filters that let you tell good things from bad is low – the majority of revenue may come not from a few hits selling in large numbers, but rather from a huge number of items each selling very little. "Long tail" refers to the long tail of the distribution curve, where these "bottom-sellers" reside. In particular, the book shows that in markets like digital music, DVD rental, print-on-demand books and so on, the long tail may bring in the majority of revenue. Indeed, 100,000 items each selling 10 copies per quarter add up to one bestseller selling a million copies per quarter. The difference is that it's not easy to find such a bestseller in the modern world, while those 100,000 self-published and self-produced titles come easily.


In fact, finding hits becomes harder and harder, because hits today sell less than they used to. For example, today's top TV show would not even make the top ten of the 70s. The author attributes this to the rise of the Internet and digital technologies, which reduced the cost of production and distribution and extended the choice. Hence, the author concludes, presented with more choice, customers started to use it, so less of the market went to hits and more to the long tail, which fits customer demand much better.


Here I'd like, if not to argue, then at least to complement that with another reason, one the author did not mention. It's true that, presented with more choice, people buy more diversely; however, there is another very material reason for the rise of the long tail. And that reason is the transition from an industrial to a knowledge society.


An industrial society consists mostly of industrial workers, and they are its main category of consumers. Let's consider how industrial workers are raised – in a sense, how industrial society "produces" its main category of consumers.


An industrial worker normally graduates from a high school: a highly uniform institution imprinting millions of children every year with roughly the same set of basic knowledge, skills and propaganda stereotypes, no matter which country it serves. Granted, the stereotypes impressed on students in the United States were different from those in the Soviet Union or Western Europe, but within a single economy they were uniform. It was (and still is) essentially highly standardized mass production, like the production of bolts and nuts. Yes, in Europe nuts are metric – millimeters – and in the US they are in inches, but within a single economy they are the same. And so were the people produced by the mass school.


And when you have a lot of standardized nuts, you get a huge market for standardized bolts. Most of your consumers have a similar background, prepared by the school: the same basic knowledge, the same vocabulary, the same stereotypes – in the end, the same memes populating their minds. So the same TV show was good for a lot of them, the same music was likable to a lot of them, and the same advertising made a lot of them buy. This was the making of hits: mass markets are created by standardized consumers, which the standardized school system provided.


With the knowledge society, more and more consumers become knowledge workers (a term introduced in the 60s by Peter Drucker, sometimes referred to as "the father of American corporate management"; see, for example, The Essential Drucker: In One Volume the Best of Sixty Years of Peter Drucker's Essential Writings on Management). And "more" means "the majority"; see, for example, The Rise of the Creative Class: And How It's Transforming Work, Leisure, Community and Everyday Life by Richard Florida. A knowledge worker is a very different beast from an industrial worker. A knowledge worker normally has at least a bachelor's degree, and colleges are not uniform – they are very different, and so are the professions they teach. This means that consumers become segmented, and not just because of different backgrounds as before, but because of the economy: there is a systematic force in place that fragments them by their background, beliefs and stereotypes.


For example, consider a software engineer at a software startup and a mechanical engineer at Boeing or Ford. The first must innovate; "innovate or die" is pretty much the business model of most software startups. The second has to prevent crashes, and to him innovation means a risk of a crash. Once this gets under the skin, it affects how they react to advertising, politics, everything. It affects what they watch, what they buy, and whom they vote for.


Of course, hits won't go away; we still have a highly uniform basic education system, which still provides quite a bit of common background. But the more segmented consumers become, the less will go to hits, and the more will go to the long tail. And the knowledge industry cannot exist without making its workers – who are also its main consumers – segmented.


Just a thought – it may be worth sharing.

Back to the Past


Every so often I see people asking in the newsgroups how to solve certain challenges
they encounter while working on their BizTalk applications. One common question revolves
around being able to go "back to the past" when an error happens during
processing of a message.

This isn’t a bad question at all, and it usually revolves around how to simulate the
behavior of atomic transactions in an environment where transactions can be a lot
more complex and not always as natural.

The question usually goes like this: "I’m receiving a message in BizTalk, which
is triggering an orchestration instance. The orchestration does this and that, and
if any of those things fails, I want to put the message back where I got it from".

This-That-There

The question might seem simple, but it’s not always necessarily so. In fact, sometimes
you have to stop a moment and ask yourself whether this really makes sense. There
are several aspects you need to consider:

  1. Handling the case where "this" causes an error is probably not
    a big deal. Handling the case where "this" succeeded but "that"
    failed, however, might not be that simple. Not all actions your orchestration
    performs can be undone.
  2. Most of the time you’ll find that both actions can’t be done as a single unit in a
    single atomic transaction. Fortunately, BizTalk provides very good support for long-running
    transactions and compensation which can help quite a bit.

    Unfortunately, long-running transactions and compensation models are often misunderstood
    (cue the inevitable "How long does a transaction have to last to be a
    long-running transaction?" jokes/questions).

    Here are a few articles that do a great job of describing the BizTalk Transaction
    features and how to use them effectively:

  3. The sentence "put the message back where I got it from" can be either a
    very good thing, or a very problematic thing. It basically relates to leaving stuff
    as you found it; in particular, putting the message back into its origin (thus relating
    to the transactional concept of "nothing happened here, move along") so that
    you can try processing it again later on and hopefully it will succeed at
    that time.
  4. The problem with number 3 is that it (a) isn’t always possible, and (b) it isn’t always
    a good idea.

    It might not be possible to put the message back where you got it if someone was pushing
    the message to you instead of you pulling it from somewhere. If you had a SOAP/HTTP
    WebService exposed that received a message from someone else, then you probably can’t
    put the message back where you got it from!

    On the other hand, this is a very common model for queued messaging systems: If you
    run into an error processing the message, you put it back into the queue and try again
    later. And this works great many times and can simplify error handling a great deal.

    The point where this becomes a problem is when you rely on this as your only error
    handling mechanism. If you blindly send the message back to its origin to retry processing
    for any and all errors and a message comes in that always fails, you’ve got
    yourself a poison message!

    I’ve already talked about Poison Messages in the past, so I won’t comment
    much more on them. But there
    are other things you can keep in mind to improve the "back to the past"
    error handling technique, particularly if you don’t care about message ordering:

    1. If you can identify and classify the source/cause of the errors, you can make your
      orchestration smarter about how to handle them. For example:
      • Can you distinguish transient error conditions? For example, a timeout connecting
        to the database might be a temporary condition because of a network fluke or a server
        being restarted. Sometimes retrying the operation after a short while is enough to
        deal with this situation effectively.
      • Can you distinguish errors that might require manual intervention to fix? Example:
        Validating an operation fails because some configuration data is missing. This is
        a case where you want to be proactive and raise an appropriate alert so that someone
        can get in there and fix the issue. Extra points if you can tell apart conditions
        that require intervention from a business user and those that require it from a systems
        administrator.

        Notice, however, that in this case putting the message back at the start right after
        creating the notification is not the right thing to do. People don’t react
        that fast. You need to set the message aside until such time as the corrective measure
        has been taken and it is safe to try processing it again.

    2. Can you control when the retry might happen? Can you throttle it if necessary? If
      the answer is no, then you might want to be very careful about using this technique.
      You could easily increase the system load substantially if lots of messages fail in
      a short time and you try reprocessing them in a tight loop.
    3. Be mindful of adapters that provide no ordering semantics. For example, if your original
      location used the FILE adapter and you put the message back in the original folder,
      it will likely get picked up very soon again for processing; which can quickly get
      you back to step 2.

      At least with an adapter like MSMQ you can push the message to the end of the queue,
      which might buy you some time.

    4. Even if you take 1, 2 and 3 into account, you still need to provide a way to deal
      with poison messages. Keep in mind that what started as a transient error condition
      can suddenly escalate to a full-blown problem you can’t do anything about, like when
      that temporary network fluke turns into a days-long outage after some idiot digging
      a hole outside snaps your network fiber cable in two.

      In fact, sometimes you might need to go so far as to completely shut down processing.
      Sometimes being able to detect that some things that should be working keep failing
      after an extended period of time and alerting about it can help get things sorted
      out before they spiral out of control.

    These are just some ideas that might help make your system more reliable and more
    manageable. Some of them do cost money; that is, you have to invest time and development/testing
    effort in getting them done, and that’s where you’re going to have to evaluate what
    makes sense and what doesn’t.
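To make points 1 and 2 concrete, here is a minimal sketch (plain C#, outside BizTalk) of classifying errors as transient and retrying with a capped, throttled backoff. The exception types treated as transient, the attempt limit, and the delays are assumptions for illustration, not BizTalk APIs:

```csharp
using System;
using System.Threading;

public static class RetryHelper
{
    // Classify an error as transient (worth retrying) or permanent.
    // Which exception types count as transient is an assumption for this sketch.
    public static bool IsTransient(Exception e) =>
        e is TimeoutException || e is System.IO.IOException;

    // Retry an operation a bounded number of times with growing delays,
    // so failures don't turn into a tight reprocessing loop (point 2).
    public static T Execute<T>(Func<T> operation, int maxAttempts = 3,
                               int baseDelayMs = 200)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (Exception e) when (IsTransient(e) && attempt < maxAttempts)
            {
                // Exponential backoff: baseDelayMs, 2x, 4x, ...
                Thread.Sleep(baseDelayMs * (1 << (attempt - 1)));
            }
            // Non-transient errors, or transient ones that exhaust their
            // attempts, propagate to the caller.
        }
    }
}
```

A message that still fails after the attempts are exhausted is exactly the poison-message case discussed above, and needs its own handling: set it aside, alert, or suspend processing.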

      Technorati Tags: Messaging, BizTalk, Architecture

HL7 Message Encoding

To extend what I wrote about in the Encoding for HL7 Messages post: there are two values that can be used for the Message Encoding context property. It isn’t obvious, but the logic is as follows:

According to the Extended Encoding Support page (in the Note section of step 5), you can only choose Western European or UTF-8. So, looking at the Encoding class documentation, the table below shows the values you can use:

Value To Set   Name     Display
65001          utf-8    Unicode (UTF-8)
850            ibm850   Western European (DOS)
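These two values line up with the .NET Encoding class; here is a quick sketch to sanity-check the mapping (only the underlying code pages are shown, not the HL7 context property itself):

```csharp
using System.Text;

public static class Hl7Encodings
{
    // The only two code pages supported for HL7 message encoding per the table above.
    public const int Utf8 = 65001;               // utf-8 / Unicode (UTF-8)
    public const int WesternEuropeanDos = 850;   // ibm850 / Western European (DOS)

    public static Encoding GetUtf8() => Encoding.GetEncoding(Utf8);

    // Note: on .NET Core/.NET 5+, code page 850 additionally requires registering
    // CodePagesEncodingProvider; on the .NET Framework it is available directly.
    public static Encoding GetWesternEuropeanDos() => Encoding.GetEncoding(WesternEuropeanDos);
}
```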

I will be delivering my first public-facing Oslo presentation Oct 8th in Amsterdam at the SOA Symposium

And so it begins….

On October 8th, I will be presenting “A Preview of Microsoft Oslo” at the SOA Symposium conference in Amsterdam.

I have been fortunate to have been involved in the Oslo process since it began. And now, as we move closer to parts of it becoming real, we are allowed to talk publicly about it. I’m super-excited about this presentation, as I have been given permission by MSFT to present material that has previously only been shown internally, and I even have some tool prototype screenshots 🙂

This is the first of many many many Oslo presentations I will be doing over the next few years, and I am honored to be one of the first in the world to be doing one. As far as I know the only non-Microsoft Oslo presentation that has been done at a conference so far has been by my friend David Chappell at TechEd US this year.

My session abstract is:


Microsoft’s Oslo project is a major initiative that represents a wave of technologies aimed at making it easier to construct, deploy and manage distributed applications and services. It is an evolution of SOA technologies, encompassing Windows Communication Foundation, the next version of .NET, BizTalk Server, Windows Workflow Foundation, Visual Studio and more. Using those technologies as a starting point and building on them, Oslo also introduces a suite of modeling tools and a repository that allow the creation of role-based tools that can be used throughout an application’s lifecycle.

The impact Oslo will have on the developer community using Microsoft tools cannot be overstated, and will be equivalent in order of magnitude to the impact of .NET 1.0: it will be a game-changing revolution. In this session we will take a very early look at the architecture and some of the capabilities Oslo provides, as well as what some of the tools may be.


If you’re in the neighborhood, stop on by! The conference site is:

Technorati Tags: oslo,soa,esb,biztalk,wcf,wf

Setting up SqlCe Merge Replication with ISA Server in the middle.

Wow! I had to venture into the ‘cave’ to solve this problem – talk about a character-building
experience!

I’m currently building a Mobile BizTalk RFID 1.1 solution for TechEd08 that
runs on a PPC with a Kenetics
CFUHF Reader.

*** Early Screen Shot *** 🙂

So in building out this application the details always bring unforeseen challenges
to light:

1) The application houses all the BizTalk RFID pieces (providers, device proxies etc.),
so registration, starting/stopping providers, device discovery and applying properties
to the device all need to be taken care of.

2) I built an RFID Mobile Provider for the Kenetics device – I worked with their support
engineers solidly for a week to build what I needed. I took a trip down memory lane
and have had enough P/Invoking to last till Christmas.

3) The app also manages several local SQLCe databases – one for my app, the others
for the operation of BizTalk RFID Mobile locally on the device (mainly for its OOTB
store/forward mechanism).

After weighing up several options for getting data to/from the
device reliably, I decided to go with SqlCe Merge Replication, as we needed to push/pull
data from several tables and handle schema changes.

4) Which leads me on to one of the least-known areas……

How do I set up SqlCe Merge Replication? It’s a minefield: change
something here and boom over there.

The picture: (diagram – the mobile device connects over SSL to ISA Server, which forwards the request via HTTP to the internal IIS/SQL Server)

Phase 1:

Forget ISA for the moment. If you can, aim to get replication running in a local environment
first (e.g. local LAN on the same network, through VPNs etc.)

Getting the SQL bits Setup

Ok – the pieces to the initial puzzle…..

  1. Sql Server Side

    1. Sql Server and its additional Sql Mobile replication bits – download
      from here.

    2. IIS, to expose a replication ‘end point’ that the remote devices will connect to and
      that replication will take place through. IIS can be separated out onto a different machine.

    3. As in my case, somewhere that the ‘snapshot’ DB information will live to merge down
      to the devices. Mine was a UNC share – SQL created this after I completed the Publication
      wizard.

    4. Installation – you want the SQL
      Server Compact 3.5 Server Tools installed on BOTH the IIS AND SQL machines (if
      these are one and the same, then you only need it once).
      The server tools have two main components – one being the bits that drive IIS and the
      other being a wizard that configures the exposed virtual directory and sets security
      on it.
      If IIS and SQL are on separate machines, the easiest way to go is:
      get SQL to publish the snapshot to a UNC share, e.g. \\sqlserver\data

      – On the IIS box, run the Configure Web and Synchronization Wizard (installed
      with the above server tools) and a later screen will ask you where this data is coming
      from – simply point to the UNC share.
  2. Mobile Device Side

    1. The equivalent SQL Mobile replication tools need to be installed (above and beyond
      the normal SqlCe database components) – SQL
      Server Compact 3.5 for Windows Mobile

      *** NOTE: make sure that the bits on both the Mobile device + the Servers all match ***
  3. Server Side Security – For this let’s work backwards, from the publication
    through to the exposed endpoint.

    1. Publication Security – this is set through the Publication Access List within
      SQL Mgmnt Studio
      The group in question is the ExhibitorsGroup


      Create a publication within the SQL Management Studio

      (Publication General Properties)

      (Snapshot Properties – note the file location)

      (FTP Snapshot + Internet – I’ve just used Internet and no IIS server name as this
      is configured in the Mobile Wizard)

      (Publication access list – I’ve blanked out sensitive info, but you can see the BETDEV\ExhibitorsGroup
      being manually added to the list)
      The rest of the publication settings are defaults – for me anyway.

    2. Let’s go to the UNC share = C:\Public\Exhibitor.SqlCE.FileShare
      This is the UNC share that the IIS replication component will connect to at the back
      end.
      Note: the BETDEV\ExhibitorsGroup obviously needs r/w access to this folder.

    3. Let’s run the ‘Configure Web and Synchronization Wizard’ to configure
      the IIS component.
      (you’ll find it off the tools menu after you’ve installed the Mobile Server Tools
      from the links above)
      Note: one of the interesting things I found here is that after running
      the wizard, I normally go and tweak a few things in IIS – directory browsing etc. As
      a rule of thumb, if you want to change something with the Virtual Directory that is
      created at the end of this wizard, re-run the wizard to do it!!! 🙂

      Press Next if prompted with the welcome screen; note my options here – SQL Mobile – and
      press Next.

      Select the site and Create a Virtual Directory (I’m re-running the
      wizard, so I’m going to select Configure Existing). Press Next.

      I created an alias of SqlCERepl and accepted a sub-directory
      under the SqlMobile dir.
      (You can change this, but looking around the forums it was a source of grief I could
      do without 🙂 )

      Here I selected HTTP and not HTTPS access to the Virtual Directory (and
      SQL Server agent).
      I did this because, if you remember the diagram at the top of this post, ISA will
      serve as the HTTPS endpoint and will forward the request via HTTP to
      our IIS/SQL box.
      If you do want to change from HTTP to HTTPS or vice versa – re-run this wizard. Saves
      you about 4 hrs of head banging.
      Click Next when ready.

      On this page – I selected ‘Authentication required’ and not anonymous. This has something
      to do with the data that I’m replicating, as I’m using a Filter based on ‘UserName’.
      So in my case, the username that the devices connect with will be my differentiator.
      (I looked into using something like ‘deviceID’ but didn’t get too far)
      Click Next.

      Select the type of authentication to be made against IIS – I selected NTLM (Basic
      is fine also – but you need to be mindful that we’re using HTTP at this point)
      Quick note on security: so far, we’ve got 2 areas that need authentication:

      1) the IIS virtual directory, and 2) accessing the actual SQL Publication in the UNC
      share and SQL Publisher Access List.

      So if the two machines are separated (IIS + SQL), NTLM will not traverse them
      (known as the ‘double-hop’ problem), so I’m assuming Basic or Kerberos is the safer
      bet.
      Click Next when ready.
      Click Next when ready.

      On the Directory Access Screen note the presence of the ExhibitorsGroup
      and also this publication is accessing the UNC Share we created earlier.
      Next to continue.

      UNC path specified – here you can see how this could be pointing to this SQL Share
      sitting on another machine as in the 2 machine hosted scenario.
      Click Next and Finish to see something like:

      Your virtual directory is now configured.
      To test your configuration so far, go to:
      http://<server>/sqlcerepl/sqlcesa35.dll?diag –
      the diagnostics screen – to get something like:
      You should be prompted to log in – enter account details that have
      access.

      This is our fallback screen – next we will configure the ISA component and come back
      to our test screen to make sure.
      You’re done here. 🙂

  4. Configure ISA Server
    ISA Server will be the bridge between our public SSL access and our internal
    IIS/SQL Server. We would effectively like ISA to simply route the request and pass
    it through without too much tampering with our good packets.

    ISA Server is on IP address: IP:Y_Internal
    The Internal Server here is: 10.1.0.191
    The public interface on the ISA Server is for our purposes known as IP:X_Public
    and its FQDN is: demo.micks.org (in other words, this is the
    public DNS name that will point to the public interface of your ISA box)

    NOTE: Make sure you have your SSL cert ready – I created an in-house
    cert from a standalone cert server.
    You need at least a ‘Server Authentication’ certificate to apply within ISA.
    (I’ll show you a little trick in the mobile app to get round the fact that the certificate
    is from a non-trusted Cert. Authority by default)
    The friendly name on the cert should be – ‘demo.micks.org’ (without
    the quotes)
    All this keeps SSL happy.

    1. Create a publishing rule in ISA 2006 that will effectively route all requests
      coming to the public interface to our internal IIS/SQL Server.

    2. Fire up the ISA MMC and create a New Web Server Publishing Rule
      I’ve called this sample rule, “Public to Internal IIS/SQL Repl”

      Click Next when done.

    3. Rule Action – set to Allow

      Next

    4. Publishing Type=Single Web

      Next

    5. Server Connection Security – SSL. This means that SSL is going to be used over the
      public network.

      Next

    6. On the Internal Publishing Details – I tend to hardcode the IP address, just to
      reduce any ambiguity.
      Note the IP address – internally accessible only: 10.x.x.x

      Next

    7. Further settings on the Internal Publishing Details
      NOTE: the option of forwarding the original client host headers to the internal
      IIS/SQL (I found a variety of ‘incomplete HTTP Header details’ errors when attempting
      to sync if I cleared this checkbox)

      We can also restrict access on this rule by specifying the path /SqlCeRepl/* (this
      is obviously the Virtual Directory created earlier)

      Next

    8. Fill in your public DNS name – don’t worry that the wizard screen is showing http://demo.micks.org and
      NOT https://demo.micks.org

      Next

    9. Create a listener (if you need to) as follows:
      (I’ve modified the screen shot slightly – from my listener)
      Note the port: 8443, which the SSL requests come in on. (You can use 443 if you prefer;
      I had other things on those ports.)
      Also – I set up NO authentication, and replication works. You *could*
      try setting up Basic Authentication here and using Delegated Authentication (ISA Server
      will log in to the IIS/SQL box on your behalf with the inputted security credentials).

      I’ve also supplied the Certificate here as well (add your cert to the machine store
      ahead of time)

      A way to test if your auth is going to work – fire up your browser and try http://<server>/sqlcerepl/sqlcesa35.dll?diag

      You should be prompted for login details ONLY ONCE. If you need to supply
      them twice before you see the diagnostic page, your mobile application
      replication will fail :-(. Once and once only.

      Next.

    10. Authentication Delegation – we want the client to authenticate directly against the backend
      (routed through ISA of course 🙂 )

      Next.

    11. User Sets – because we don’t have authentication here, ISA can’t
      determine users, so All Users is our only option.

      Next.

    12. What a glorious sight….almost done……

      Click Finish to complete the wizard.

    13. Right-click on the rule just created and select Properties – we need
      to change the Link Translation to OFF.

      This was the major source of my grief – I kept getting ‘HTTP Headers malformed…’
      ERROR: 28035 when trying to sync from the Device – yay!

      I was fortunate to be able to contact a friend of mine, Darren
      Shaffer (Mobile MVP), who explained what was required to be sent back and forth in
      the headers during the conversation – big thanks there, Darren!

    14. You should be able to browse to https://demo.micks.org/sqlcerepl/sqlcesa35.dll?diag –
      it should WORK 🙂
      If not – resolve before moving on. (You may get IE grumbling about the certificate
      being invalid if it’s an in-house cert.)
  5. Configure the MOBILE replication piece!!!

    1. Make sure you have installed the SQL CE 3.5 Core + Repl CABs at least.

    2. On the mobile device, I tend to have routines to Add and Remove
      DB Subscriptions, as I found that if any publication change happened on SQL
      Server – e.g. a field was modified, or a table was added/removed from the Publication –
      then Merge Repl would fail even though it previously was working.

      It’s easier to Remove the Subscription on the local SQLCE db, and then add it again.

      Note: InternetUrl = https://demo.micks.org

      Username + password must be a user that has access to all the bits we configured above.
      In my case, someone who is a member of the ExhibitorsGroup.

      The code looks like this:

      public void AddReplAndSync()
      {
          // using System.Data.SqlServerCe;
          bool bAddRepl = false;
          try
          {
              // Only add the subscription if it isn't already registered locally.
              if (DoDBLookup("SELECT count(*) as cRow FROM __sysMergeSubscriptions " +
                             "WHERE Subscriber='ExhibitorSubscription'", "cRow") != "1")
              {
                  bAddRepl = true;
              }
          }
          catch
          {
              bAddRepl = true;
          }

          SqlCeReplication repl = new SqlCeReplication();
          repl.InternetUrl = AppSettings.Settings.ReplServer + "sqlcesa35.dll";
          repl.InternetLogin = AppSettings.Settings.ReplUser;
          repl.InternetPassword = "XXXXXX";

          repl.Publisher = AppSettings.Settings.ReplPublisher;
          repl.PublisherDatabase = AppSettings.Settings.ReplPubDB;
          repl.PublisherSecurityMode = SecurityType.NTAuthentication;
          repl.Publication = AppSettings.Settings.ReplPubName;
          repl.Subscriber = AppSettings.Settings.ReplSubName;
          repl.SubscriberConnectionString = string.Format("DATA SOURCE='{0}'", ESDAL.GetDBPath());

          try
          {
              if (bAddRepl)
                  repl.AddSubscription(AddOption.ExistingDatabase);
              CloseAllDBConnections();
              repl.Synchronize();
          }
          catch (SqlCeException e)
          {
              MessageBox.Show(e.ToString() + e.NativeError.ToString());
          }
      }

      public void ReplRemove()
      {
          CloseAllDBConnections();
          SqlCeReplication repl = new SqlCeReplication();
          repl.SubscriberConnectionString = string.Format("DATA SOURCE='{0}'", ESDAL.GetDBPath());
          repl.InternetUrl = AppSettings.Settings.ReplServer + "sqlcesa35.dll";
          repl.InternetLogin = AppSettings.Settings.ReplUser;
          repl.InternetPassword = "XXXXXX";
          repl.Publisher = AppSettings.Settings.ReplPublisher;
          repl.PublisherDatabase = AppSettings.Settings.ReplPubDB;
          repl.PublisherSecurityMode = SecurityType.NTAuthentication;
          repl.Publication = AppSettings.Settings.ReplPubName;
          repl.Subscriber = AppSettings.Settings.ReplSubName;
          try
          {
              CloseAllDBConnections();
              repl.DropSubscription(DropOption.LeaveDatabase);
          }
          catch (SqlCeException e)
          {
              MessageBox.Show(e.ToString() + e.NativeError.ToString());
          }
      }

      private void CloseAllDBConnections()
      {
          // Merge replication needs exclusive access to the local SqlCe file,
          // so release any open connection before synchronizing.
          if ((_dbCon != null) && (_dbCon.State != ConnectionState.Closed))
          {
              _dbCon.Dispose();
              _dbCon = null;
              GC.Collect();
          }
      }
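Incidentally, the browser-based ?diag check from the listener setup can also be scripted. Here is a hedged sketch using HttpWebRequest; the server name and credentials are placeholders, and `ReplDiagCheck`/`BuildDiagUrl` are helpers invented for this example:

```csharp
using System;
using System.Net;

public static class ReplDiagCheck
{
    // Builds the diagnostics URL exposed by the SqlCe server agent.
    public static string BuildDiagUrl(string server) =>
        string.Format("https://{0}/sqlcerepl/sqlcesa35.dll?diag", server);

    // Requests the diag page with explicit credentials; a 200 response
    // means the ISA -> IIS -> UNC/SQL chain authenticated in one hop.
    public static bool CanReachDiagPage(string server, string user,
                                        string password, string domain)
    {
        var request = (HttpWebRequest)WebRequest.Create(BuildDiagUrl(server));
        request.Credentials = new NetworkCredential(user, password, domain);
        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                return response.StatusCode == HttpStatusCode.OK;
            }
        }
        catch (WebException)
        {
            return false;
        }
    }
}
```

If this returns false, or the browser test prompts for credentials more than once, fix the ISA/IIS authentication chain before touching the device code.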


Trick to deal with Inhouse generated certificates

Within your mobile app we create a class that essentially returns True when
asked ‘Is this Cert. valid?’

Somewhere upon starting up your app – e.g. Form_Load – insert the first statement below.

The class ‘MyCustomSSLPolicy’ that it references follows.

 1: System.Net.ServicePointManager.CertificatePolicy
= new MyCustomSSLPolicy();
 2: ......
 3: using System;
 4: using System.Collections.Generic;
 5: using System.Text;
 6: using System.Net;
 7: using System.Security.Cryptography.X509Certificates;
 8:  
 9: namespace MicksDemos.Utilities
 10: {
 11:  public class MyCustomSSLPolicy
: ICertificatePolicy
 12:  {
 13:  public bool CheckValidationResult(ServicePoint
srvPoint,
 14:  X509Certificate certificate, WebRequest request, int certificateProblem)
 15:  {
 16:  return true;
 17:  }
 18:  }
 19: }
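Worth noting: `ICertificatePolicy` is marked obsolete from .NET 2.0 onwards. An equivalent sketch using the replacement API, `ServicePointManager.ServerCertificateValidationCallback`, might look like the following (the class and method names here are my own; only the callback property and its delegate signature are real API). Either way, blindly trusting every certificate should be limited to development builds talking to an in-house CA.

```csharp
// Sketch only: the modern replacement for ICertificatePolicy.
// Accepting every certificate like this is for in-house/dev scenarios only.
using System.Net;
using System.Net.Security;
using System.Security.Cryptography.X509Certificates;

public static class InHouseCertTrust   // hypothetical name
{
    public static void Install()       // call once at app start-up, e.g. Form_Load
    {
        ServicePointManager.ServerCertificateValidationCallback =
            delegate (object sender, X509Certificate certificate,
                      X509Chain chain, SslPolicyErrors sslPolicyErrors)
            {
                // Blindly trust the certificate, exactly as the policy class above does.
                return true;
            };
    }
}
```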


Closing note:

Hope you find this useful – I’ve done this a few times now and am amazed at the lack of information around this, especially when going through ISA.

If you get errors along the lines of “Can’t contact SQL Reconciler…”, generally try rebuilding the snapshot on the server side, then try syncing again.

Nighty night!

Jon Flanders will speak at BizTalk User Group Sweden

Our user group has been fortunate enough this year to sign speakers like Darren Jefford and Dwight Goins. On the 4th of September we'll add Jon Flanders to that list.

Jon is a BizTalk MVP and an instructor for Pluralsight, and is well known across the world as an author, speaker, and instructor, and for his commitment to the community. You might have seen him talking at events such as TechEd about WF, WCF, BizTalk, BAM, or other related technologies from the Microsoft Connected Systems Division. If you have, you know that he is not only competent, but also uses a great deal of irony and jokes to make his presentations even more interesting.

There will be two one-hour sessions, and we've left it up to Jon to decide what they should be about, so we've simply set the topic to "In the head of Jon Flanders".

So if you are in Sweden on the 4th of September, make sure to let us know you are coming.

For more information: www.biztalkusergroup.se

BizTalk & WCF – the answers are here!

Folks – fellow MVP Richard Seroter has written a VERY comprehensive series around this very topic, including the new BizTalk Adapter Pack V1.0 (V2.0 is in Beta at the moment).

Over 20,000 words and 178 screenshots – all for the love of BizTalk/WCF.

Complete with source code!!!

What a champion series – I’m looking forward to tucking into some of his great material!

The BizTalk community is in your debt, Richard – well done!!!

SERIES SUMMARY FOUND HERE

Email BizTalk Errors for BizTalk Administrators

I’ve been on a few projects where clients have requested there be some kind of monitoring associated with BizTalk. Sometimes they want all error messages that BizTalk might spit out to the Event Log emailed to them.

For these circumstances I’ve created a simple .EXE that takes the following 4 parameters:

Argument 1 = ToEmail
Argument 2 = FromEmail
Argument 3 = SmtpServer
Argument 4 = Covast EDI Accelerator 2004 – Subsystem ErrorFlag Y/N

I then set up a Scheduled Task to run a batch file that runs the Executable.

The .bat looks like this:

“C:\Biztalk\Event Log Emailer\EventLogEmailer.exe” [email protected] [email protected] YourSMTPServerName.smtp.com N

You’ll see the report comes through as an HTML-formatted message. I run it at 1 AM, and it reports on yesterday’s data.

Currently it’s set to pick up BizTalk 2004, 2006, HIPAA, and Covast errors (the fourth parameter flags this). Let me know if you’d like other parameters, or a version for a completely different application other than BizTalk.
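The actual EventLogEmailer source isn't published with this post, so as a rough illustration of the idea, a tool along these lines could read yesterday's BizTalk error entries from the Application event log and mail them as HTML. The class name, argument handling, and filtering rules below are all my own assumptions, not the author's code; only the `EventLog` and `System.Net.Mail` APIs are real.

```csharp
// Hypothetical sketch of an event-log-to-email tool like the one described.
// args: toEmail fromEmail smtpServer covastFlag(Y/N)
using System;
using System.Diagnostics;
using System.Net.Mail;
using System.Text;

class EventLogEmailerSketch
{
    static void Main(string[] args)
    {
        string to = args[0], from = args[1], smtp = args[2];
        bool includeCovast = args.Length > 3 && args[3] == "Y";

        DateTime since = DateTime.Today.AddDays(-1);   // "yesterday's data"
        var body = new StringBuilder("<html><body><h3>BizTalk errors</h3><ul>");

        foreach (EventLogEntry entry in new EventLog("Application").Entries)
        {
            // Assumed source-name matching; adjust for the real event sources.
            bool relevant = entry.Source.StartsWith("BizTalk") ||
                            (includeCovast && entry.Source.Contains("Covast"));
            if (relevant &&
                entry.EntryType == EventLogEntryType.Error &&
                entry.TimeGenerated >= since &&
                entry.TimeGenerated < DateTime.Today)
            {
                body.AppendFormat("<li>{0}: {1}</li>",
                    entry.TimeGenerated, entry.Message);
            }
        }
        body.Append("</ul></body></html>");

        var message = new MailMessage(from, to,
            "BizTalk errors for " + since.ToShortDateString(), body.ToString());
        message.IsBodyHtml = true;
        new SmtpClient(smtp).Send(message);
    }
}
```

Scheduled via the same kind of batch file the post describes, this would produce one HTML digest per day.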