Best Practice: Frames Vs MasterPages Vs ICallBackEventHandler Vs DataIslands and the dreaded page refresh

ASP.NET 2.0 hasn’t put the final nail in the coffin of Frames. It does however provide some better alternatives depending on what you are trying to achieve.


I googled up some comparisons of Frames, Master Pages and the ICallBackEventHandler. A good overview article on ASP.NET 2.0 states, in favour of master pages:


              “Bookmark a page and recall all the information on the specific page, not just the default frame page. A master page isn’t really a frame. It’s a single page that contains collated content from the master page and the content page that builds on the master. Therefore it looks and acts like a single Web page rather than a frame.


              Work by means of controls and tags rather than HTML. Thanks to Visual Studio, you don’t have to worry about opening and closing frame tags or modifying countless HTML attributes to ensure that each frame displays in the correct fashion. You can simply create the place holder and modify its properties through Visual Studio.


              Leverage Visual Studio’s code creation to visually design the layout, manage the frames, and provide all of the plumbing to link the content pages into the master page. You can add new content without having to worry that the overall HTML layout of the page will be affected. “


Migrating from ASP.NET 1.x to ASP.NET 2.0


By Jayesh Patel, Bryan Acker, Robert McGovern – Infusion Development, July 2004


https://www.mainfunction.com/DotNetInAction/Technologies/display.aspx?ID=2760&TypeID=17#master


And this extract from an interview with Scott Guthrie, Microsoft’s ASP.NET Architect, discusses how the new callback manager (ICallBackEventHandler), which is basically an XMLHTTP wrapper, can also help reduce the number of visible page refreshes.


“…So, one is to do what we call, out-of-bound call backs, where you can stay on the same page as an end user, but then through script, you can actually make a call back to the server and fetch new data that you populate down to the client, without having to, again, refresh the entire page, without having to lose scroll position, etc.



So will out-of-bound callbacks spell the end of the evil IFRAME?


The IFRAME today has a bum wrap in terms of reputation. I think the combination of out-of-bound callbacks as well as some of the things we’re doing in terms of master pages, to provide much cleaner layout of a page, where you can go ahead and rather than have to rely on frames in order to cleanly separate, or integrate content into a site, you can now rely on master pages. I think the combination of those two are going to put the hurt on the IFRAME out there. It’s still fully supported, but the nice thing is, there’s much richer mechanisms you can rely on now. “


http://www.theserverside.net/talks/videos/ScottGuthrie/interview.tss?bandwidth=56k


Well, master pages seem a fantastic tool for development. They give you page inheritance and reusability, and you can even nest them, giving the developer a simple way to ensure a consistent look and feel across an entire application. So from a developer’s point of view they rock.


But what about the user experience? Even if a content page “inherits” a master page, each time the page is requested the entire page will refresh, both the content part and the master part. Users hate full page refreshes, and developers hate trips to the server to retrieve redundant HTML; the menu bar and advertisement are already in the client’s browser, so why go and get them again (even if they are cached on the server)?


Can implementing the ICallBackEventHandler help? Well, yes and no. This technique can only refresh data; for example, it can let the user page through a grid without refreshing the entire page. You can’t really use it to redisplay the entire content area of the page if that area contains asp: web controls (well, I have done it, but using Frames would be much, much easier).
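For the curious, the shipped ASP.NET 2.0 API splits the callback into two server-side methods. A minimal sketch of a page using the callback manager; the page, method and argument names here are my own illustrations, not from the article:

```csharp
// Sketch of an ASP.NET 2.0 page using ICallbackEventHandler.
// Names (GridPage, CallServer, ReceiveServerData) are illustrative assumptions.
public partial class GridPage : System.Web.UI.Page, System.Web.UI.ICallbackEventHandler
{
    private string _callbackResult;

    protected void Page_Load(object sender, System.EventArgs e)
    {
        // Ask ASP.NET to emit the client-side plumbing that wraps XMLHTTP.
        string callbackRef = Page.ClientScript.GetCallbackEventReference(
            this, "arg", "ReceiveServerData", "context");
        string script = "function CallServer(arg, context) { " + callbackRef + "; }";
        Page.ClientScript.RegisterClientScriptBlock(GetType(), "CallServer", script, true);
    }

    // Runs on the server when the client script fires a callback.
    public void RaiseCallbackEvent(string eventArgument)
    {
        // e.g. fetch the requested page of grid data here.
        _callbackResult = "page " + eventArgument;
    }

    // The string returned here is handed to the client-side ReceiveServerData function.
    public string GetCallbackResult()
    {
        return _callbackResult;
    }
}
```

The client then calls `CallServer("2", null)` from, say, a paging link, and only the grid data travels over the wire, not the whole page.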


If you haven’t already come across the concept of XML Data Islands, it has been around for a while, along with Frames and XMLHTTP (now the ICallBackEventHandler). This technique uses the DHTML XML tag to build a client-side repository of data in XML. For example, if a user is completing a form over a number of pages, it can be beneficial to store the data on the client between pages until the user reaches the final page and presses submit, and only then hit the server to update your data source. This has no effect on the dreaded page refresh, but it is great for relieving the pressure on your database server. Note this is not the only use for XML Data Islands; the example is a common one, however.
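A minimal data island looks something like this (this is IE-specific DHTML, and the element and field names are my own example, not a prescribed format):

```html
<!-- Client-side XML data island (Internet Explorer only). -->
<xml id="wizardData">
    <form>
        <page1 name="Jo Bloggs" />
        <page2 address="1 Main St" />
    </form>
</xml>
<script type="text/javascript">
    // Read back a value accumulated on an earlier wizard page.
    var island = document.getElementById("wizardData").XMLDocument;
    var name = island.selectSingleNode("/form/page1/@name").value;
</script>
```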


So if you are using asp: or user controls on your page (these need to be rendered by the ASP.NET runtime), the only way to stop the entire page refreshing, and refresh only the content area, is still through the use of Frames. Unfortunately, unlike ASP.NET 1.x, ASP.NET 2.0 Beta 2 doesn’t ship with a Frameset template, although there will be an online template available, and adding the tags yourself won’t cause injury.


So what are the best practices? I have come up with the following. Many seasoned web developers will already practise these, but for beginners and intermediates I hope this will be of some help and relieve you of trial-and-error-driven development. Some of you may disagree with these, and I would like to hear back from you. Remember, it is the users we are building these web applications for, not the developers.


Use master pages when


              Not concerned about a full page refresh


              Development requires a highly modifiable inheritable page standard and users will live with a page refresh


              Need to allow the user to bookmark a specific page


Use the ICallBackEventHandler when


              Want to refresh data or images in a control and not refresh the entire page


              Want to refresh part of the page with simple HTML (no asp: controls) and not refresh the entire page


Use XML Data Islands when


              You want to build stores of data on the client and thereby reduce the number of hits needed to update the server’s data source


Use Frames when


              Don’t want full page refreshes


              Have complex asp: control-driven pages which can’t be refreshed by data alone, i.e. the asp: controls need to be rendered into HTML by the web server first


              Need multiple areas of the page to be refreshed at different times by the user


 R. Addis & A. Rambabu

Building a custom Pipeline Decoder component to modify a Message Context Property

An instance of the BizTalk message context is created with each message and stays with it until the message leaves BizTalk. It contains properties such as ReceivePortName, ReceivedFileName and MessageType, i.e. all the metadata for a message.


If you want to modify this metadata you need to modify the IBaseMessageContext using one of the following methods; this may be done in either a custom pipeline component or inside an orchestration:


IBaseMessageContext.Promote: Make the Message Context property a promoted property


IBaseMessageContext.Write: Make the Message Context Property a distinguished property


Where you make the call to modify the message context depends on WHEN it should be modified. If you want to use a message context property to correlate two messages, you should do it inside a custom pipeline component. If you want to set the message context during processing, after some decision logic, do so in a Message Assignment shape in an orchestration. Here we are only looking at doing it in a custom pipeline component.
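For contrast, here is roughly what the orchestration route looks like: an XLANG/s expression inside a Message Assignment shape (which must sit inside a Construct Message shape). The message names are my own examples, not from this solution:

```
// XLANG/s expression in a Message Assignment shape (names are illustrative).
// MessageOut must be listed as a constructed message in the enclosing
// Construct Message shape.
MessageOut = MessageIn;
// Set a context property after your decision logic has run:
MessageOut(FILE.ReceivedFileName) = "ExtractedValue";
```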


An example of using this would be if you had two input files which needed to be correlated in an orchestration based on an id or date in the filename which defined a relationship between those files. Another example of a problem this would solve is found here: http://www.eggheadcafe.com/ng/microsoft.public.biztalk.server/post21008841.asp


So in both these cases we want to look at the file name of the message, and extract and promote part of it, so we can use that extracted information either to correlate two or more incoming messages or to base some decision logic on those properties. The way I’m going to show you how to do it is to actually overwrite the ReceivedFileName property with the value extracted from it.
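The extraction itself is plain System.Text.RegularExpressions work. As a standalone sketch (the filename pattern and sample path are assumptions for illustration):

```csharp
using System;
using System.Text.RegularExpressions;

class FileNameExtract
{
    static void Main()
    {
        // e.g. strip everything around a yyyyMMdd date embedded in the filename.
        string receivedFileName = @"C:\Drop\Orders_20050101.csv";
        string value = Regex.Replace(receivedFileName, @"^.*_(\d{8})\.csv$", "$1");
        Console.WriteLine(value); // 20050101
    }
}
```

The pipeline component below does exactly this, with the pattern and replacement value exposed as design-time properties.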


Notes:


              I am going to show you how we did it instead of just giving you code or a solution.


              Although I think “BizTalk Server 2004 Unleashed” tries to cover too much in one book, it does have a very good explanation of pipelines and what the different stages of a pipeline should implement.


1)      Download Martijn Hoogendoorn’s Pipeline Component Wizard http://www.gotdotnet.com/Workspaces/Workspace.aspx?id=1d4f7d6b-7d27-4f05-a8ee-48cfcd5abf4a and install it:


a.       Unzip the source code, open the Visual Studio sln file and rebuild the PipelineComponentWizardSetup project.


b.       Right-click the PipelineComponentWizardSetup project and select “Install” to install it.


2)      In your BizTalk solution, add a new project, select “BizTalk Server Pipeline Component Project”, and set the following properties in the wizard.






3)     Explaining all of the interfaces implemented in a pipeline component is not in the scope of this post. I will, however, advise you to take a quick look and familiarise yourself with the following methods, which implement the property bag: Load, Save, ReadPropertyBag, WritePropertyBag.


The method of the Decode pipeline component which does the biz is the Execute method, to which we add the following code:


public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(
    Microsoft.BizTalk.Component.Interop.IPipelineContext pc,
    Microsoft.BizTalk.Message.Interop.IBaseMessage inmsg)
{
    // Make sure the message context property name isn't empty
    if (_PropertyName != null)
    {
        // Get the current value of the property
        object oPropertyValue = inmsg.Context.Read(_PropertyName, _PropertyNamespace);

        if (oPropertyValue != null)
        {
            string sPropertyValue = (string)oPropertyValue;

            System.Text.RegularExpressions.Regex oReplaceRegularExpression =
                new System.Text.RegularExpressions.Regex(_ReplaceRegularExpression);

            // Replace the regular expression match with the value specified
            sPropertyValue = oReplaceRegularExpression.Replace(sPropertyValue, _Value);

            // Either promote or distinguish the property
            if (_PromoteOrDistinguish == "Promote")
            {
                inmsg.Context.Promote(_PropertyName, _PropertyNamespace, sPropertyValue);
            }
            else if (_PromoteOrDistinguish == "Distinguish")
            {
                inmsg.Context.Write(_PropertyName, _PropertyNamespace, sPropertyValue);
            }
        }
    }

    return inmsg;
}


Note I have also changed the Value property to allow you to replace with an empty string, so you can remove characters using a regular expression.


public string Value
{
    get
    {
        return _Value;
    }
    set
    {
        // Allow an empty replacement value so characters can be removed.
        if (value == null)
        {
            _Value = string.Empty;
        }
        else
        {
            _Value = value;
        }
    }
}


4)     Well, that’s the development, easy aye, thanks to the Pipeline Component Wizard. (STOP PRESS: there is a much better way of debugging a pipeline.) As for testing, I guess you could do this by:


              referencing the new pipeline component project in your Biztalk project


              changing the output path in the pipeline component project’s properties dialog to C:\Program Files\Microsoft BizTalk Server 2004\Pipeline Components


              adding the compiled pipeline DLL as a component, by right-clicking on the pipeline component toolbox, choosing Add New Item, then selecting the BizTalk Pipeline Components tab


              placing a breakpoint in the Execute method code


              adding the component to the decode stage of the pipeline where you want to use it, and setting the properties on the decode component as follows (this is just a suggestion; the regular expression tries to remove all the characters around a date in a filename, setting the ReceivedFileName property to the date)



              deploying your BizTalk project


              setting up a receive port & location


              attaching to the BTSNTSvc.exe process


              dropping a file in the receive location; hopefully the debug runtime will stop at your breakpoint


5)     To use the modified and promoted ReceivedFileName property to correlate two messages:


              create a Correlation Type and set the “Correlation Type Properties” property to FILE.ReceivedFileName


              create a Correlation Set based on this type


              initialise the correlation set property of the receive shape (for the messages which need to be correlated) to the name of the correlation set you created above


R. Addis & Emil @ Microsoft

Changing a VPC Computer Name with BizTalk

I’m looking at using Virtual PC with a team of BizTalk developers. The idea is to install the dev environment (Server 2003, SQL Server, Visual Studio, BizTalk etc.) on a Virtual PC image, and then each team member uses a copy of the image to develop with.


We have network problems if two or more images with the same computer name are started with network access (to SourceSafe). So I attempted to change the computer name in an image. After a while I managed to do it like this:


Export any information from the BizTalk databases (business rules etc.)
Run ConfigFramework /u
Delete BizTalk jobs in SQL Server Agent
Delete BizTalk logons in SQL Server Security
Delete BizTalk databases
Change computer name
Re-start computer
Change SQL Server Name (with sp_dropserver, sp_addserver)
Run ConfigFramework
Change BizTalkMgmtDB connection in Visual Studio
Change rules engine DB connection in Business Rules Policy Editor
Re-enable BackupBizTalk server DBs job
Add any Hosts and host instances using BizTalk Server Administration
Import information to the BizTalk databases (business rules etc.)
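Step 8 (changing the SQL Server name) is done in Query Analyzer; a sketch, where OLDNAME and NEWNAME are placeholders for your machine names:

```sql
-- Re-register the SQL Server instance under the machine's new name.
EXEC sp_dropserver 'OLDNAME';
EXEC sp_addserver 'NEWNAME', 'local';

-- Verify after restarting the SQL Server service:
SELECT @@SERVERNAME;
```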


I’m wondering if anyone else is using VPC with a BizTalk team, and has come across this issue, or has any tips. (I’ll post an article here and include any feedback).


Is there a workaround for having multiple VPCs with the same name? And is there an ‘easy’ way to change the name of a BizTalk box?


Update
Another Stockholm BizTalk guy called Ali pointed me to this. Looks like it’s easy to avoid the “world of pain” of re-naming the VM PC, thanks to Dunk (wonder why that never got in the Bloggers Guide??). I’d still be keen to know if the re-naming procedure could be optimised…


 


 


 


 

Message Context and Mapping Inside an Orchestration in BizTalk 2004

Message context is a critical part of message routing and processing inside BizTalk Server 2004.  How this context is handled during mapping differs depending on the location of the mapping: the context is copied differently using Receive Port mapping versus Orchestration mapping.

Why should you care?  If you are using Direct Message Box binding to route messages out of an Orchestration, you might not have the correct context properties to route your message.  This only impacts messages that need to be routed out of an Orchestration based on a value in the original, pre-mapped message.  Let’s look at the two types of mapping and what happens to the context.

Receive Port Mapping

This is a common type of mapping since it generally allows for greater flexibility.  Receive Port mapping occurs after the pipeline completes.  This means that context properties will already have been promoted by the various stages of the pipeline.  In this case, the new mapped message has all the original context values of the initial message, with any duplicate values updated (i.e. the message type is now the type of the mapped message).  In addition, any promoted values in the mapped message are now promoted into the context.

To sum it up, Receive Port mapping yields a superset of message context data from both the original message and the new mapped message.

Orchestration Mapping

Orchestration mapping behaves in a totally different manner.  Using Orchestration mapping, the context of the original message is NOT copied automatically into the newly created message.  But any promoted fields in the new message are promoted into the message context after mapping.

Getting the original message context into the new message is easy.  Just add a Message Assignment shape after the Transform shape, but inside the same Construct, and add the following code:

MessageOut(*) = MessageIn(*)

This will copy the entire context from the original message into the new message and result in a superset, just like Receive Port mapping.  Individual message context fields can also be copied using this same method.

To sum it up, Orchestration mapping does not copy the original message context by default.

Overall, it is important to know what values are inside your message context as your message flows through your workflow process.  This will ensure correct message routing and help resolve routing failures quickly.

Schema Design Patterns: Salami Slice

This is the second of five entries talking about schema design patterns.  The previous entry discussed the Russian Doll approach.


In the Salami Slice approach all elements are defined globally, but the type definitions are defined locally.  This way other schemas may reuse the elements.  With this approach, a global element with its locally defined type provides a complete description of the element’s content.  This information ‘slice’ is declared individually, then aggregated back together, and may also be pieced together to construct other schemas.


 


<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified" attributeFormDefault="unqualified">
    <xs:element name="BookInformation">
        <xs:complexType>
            <xs:sequence>
                <xs:element ref="Title"/>
                <xs:element ref="ISBN"/>
                <xs:element ref="PeopleInvolved"/>
            </xs:sequence>
        </xs:complexType>
    </xs:element>
    <xs:element name="Title"/>
    <xs:element name="ISBN"/>
    <xs:element name="PeopleInvolved">
        <xs:complexType>
            <xs:sequence>
                <xs:element ref="Author"/>
                <xs:element ref="Publisher"/>
            </xs:sequence>
        </xs:complexType>
    </xs:element>
    <xs:element name="Author"/>
    <xs:element name="Publisher"/>
</xs:schema>


 


The advantage is that the schema is reusable since the elements are declared globally.


The disadvantages are that the schema is verbose, since each element is declared globally and then referenced to describe the data, which leads to a larger schema.  This approach is also not self-contained: the elements defined may be referenced by other schemas, and because of this the schema is coupled to those schemas, so changes to one schema will impact the others.


This type of approach is commonly used since it is easy to understand and creates reusable components.  It would be an appropriate design to promote reusability and data standardization across differing applications.  This approach is not, however, recommended when modifications to the standard elements will be necessary.  If the length, data types, restrictions or other aspects of the elements need to be changed, this will cause added work as well as a larger impact on other systems.

Back on Track with the Guide

There’s a new Bloggers Guide out today. I missed the March release (moving apartments, painting, laying floors etc.). There will be a May release in a couple of weeks, and I will add the new blogs that have been submitted (sorry I could not get them in this release, guys). The 5MB size restriction of GotDotNet may cause problems in the future (the current ZIP file is 5.26MB, so I’m not sure when (or if) the restriction will kick in).


 


Get it here.


 

In other news, I am now a BizTalk MVP, so if anyone is working with BizTalk in the Nordic region, feel free to contact me via my blog if you need any help/advice/tips with the product. I plan to hold a “Learning BizTalk” session with the Sweden .net User Group in Stockholm sometime after the summer, I’ll post here when I get further details.

Convoy Message Deep Dive White Paper on MSDN

The Convoy Deep Dive white paper and sample code is now available on MSDN.  You can read the paper on-line and download the sample code here.



For anyone who read the “beta” version of the paper, Scenario 2 went through a minor change.  I now use an atomic scope to build the output message rather than a simple string concatenation.  This eliminates messages collecting inside the message box in the “Delivered, not consumed” status.



It seems that the status of a message is not updated until a persistence point is reached or the Orchestration dehydrates.  I would guess this is the case with any messages that are used inside an Orchestration and not just inside a looping-receive Convoy.  But, I have not verified this.



Overall, this change provides a better solution since the Orchestration state is saved when each new message arrives, and it can easily be restarted/recovered in the event of a system failure.


 

Implementation of the Message Transformation Normaliser Pattern in BizTalk 2004

This is a simple explanation of how to join together two schemas with a 1:1 relationship between them in the BizTalk Mapper. I came across this problem when I had two legacy CSV files which needed to be merged (using a unique identifier field in each file) before further processing could occur in a BizTalk Orchestration.


I will show by example. The example is a bit convoluted, but stay with me 😉 Different departments are ordering the same parts but paying different prices, and we want to see the difference in price for the parts. There are also fields missing from each file which we want to add to the finished schema.


1) So first create two schemas, one for each department. Note the common PartID, which is the unique identifier, and Price, but also note that one file comes with a part name and the other with a manufacturer:







2) Create the Output Schema




3) Now create an orchestration which will call the map to join the two schemas, by taking them both in, using a Transform shape to map the two, and then outputting the result:




4) Set the Transform shape properties as below: create a new map and set the destination to the output schema:



5) Which will create a map



6) Finally the biz… OK, on the map, join the common fields from the first schema (but not the second) with fields in the output schema, then add a Scripting functoid and set it up like this. Using an XSLT template, it will search the second message for the values (simple XPath) where the PartID in the first message matches the PartID in the second message. It will then return the value of the Price attribute in the second message. From this you can see how powerful using inline XSLT is.
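The screenshot is missing here, but the inline XSLT in the Scripting functoid would be along these lines; the template, element and attribute names are assumptions based on the example:

```xml
<!-- Inline XSLT call template for the Scripting functoid; names are illustrative. -->
<!-- In a two-input orchestration map, BizTalk wraps each source message in an
     InputMessagePart_n element, so the second message is reachable by XPath. -->
<xsl:template name="GetDept2Price">
  <xsl:param name="partId" />
  <xsl:element name="Dept2Price">
    <xsl:value-of select="//InputMessagePart_1/Parts/Part[@PartID=$partId]/@Price" />
  </xsl:element>
</xsl:template>
```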




7) Add another Scripting functoid to get the Manufacturer field, and set the Inline Script Buffer to:



























9) Join up the schemas in the final map like so … THE END!



R. Addis & Emil @ Microsoft

Managing Flat Files in BizTalk 2004


I have done a heap of work with flat files (mostly CSV) in BizTalk 2004 and collected a lot of articles from other blogs on this, which were fab. I thought I would centralise this info for you; again, thanks go to the fellow bloggers for posting this stuff.


BizTalk 2004 Flat File Schema Tutorial 1


BizTalk 2004 Flat File Schema Tutorial 2


BizTalk Flat File Parsing Annotations


The flat file strikes back: BizTalk 2004 parsing positional records


Not sure where I got this from, but it’s a great way of removing the first line from a file if it contains the column headers, without having to write a new BizTalk disassembler pipeline component, so thanks to whoever worked this one out.


“To set up a CSV / TSV file, do the following:
Set up a schema as normal. It should be -> products -> product -> all your fields.
In , select schema editor extensions, and add “Flat File Extension”.
In Products, set the child order to Postfix, the child delimiter type to “Hex” and the delimiter to “0x0D 0x0A” (CR/LF). Adjust as needed.
In Product, set the child order to Infix, the type to Hex (or char) and the delimiter to 0x09 (tab), or , (comma). Adjust as needed.
To skip the first line of a file during the mapping (e.g., the file has a header):
Set up the mapping as normal.
Add an Iteration functoid. Connect it up to the “product” level record (e.g., product, not products).
Add a not equal (<>) functoid. Connect it to the Iteration functoid and the target record. Set the second value on that to 1.
If you have trouble deploying a generated XML deployment file (e.g., from the BTSDeploy wizard) which contains a SQL connector, you may find that the user BizTalk Server is using does not have access to the target database. Give it access :).
Send ports are the best place to put maps. I never managed to get them to work in receive ports – the documents tended to disappear.
When in doubt, reboot. For example, if everything works on the development machine, but for some odd reason you deploy on the test server and it doesn’t work – reboot the test server. Chances are it’s managed to have something hang around in the GAC or something…. restarting BTS might work too.”


This next one is a good overview of property settings on a flat file schema. It is taken from this blog post, which is not accessible at the moment: http://weblogs.ilg.com/brumfieldb/archive/2004/08/09/440.aspx


There doesn’t seem to be a lot of good information on disassembling various types of flat files in BizTalk.  There are a number of flat-file properties added for a schema set with the Flat File Extensions, but it’s not always clear how to use all these options to accomplish what you need.  Hopefully this post will serve as an example of how to use some of the flat-file features in BizTalk to parse more than a straightforward comma-delimited file.


For one of my projects, I needed to disassemble a file with a format like this:






HEADER    USER_1234
PURCHASE PO_001
LINEITEM Item32 33
LINEITEM Item63 45
PURCHASE PO_002
LINEITEM Item454 12

The file contains a header record for each file, one or more purchase orders each with one or more line items.  Each line contains a tag that indicates its usage (e.g. LINEITEM).  The elements for each line are separated by a tab (ASCII hex code of 0x09).


We define a target schema to represent the xml-ized version of this file.


The schema root and each element group must be appropriately configured to correctly parse the flat file.  The configuration for each root and group node are defined as follows:
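For reference, the xml-ized output for the sample file above would look roughly like this (the element and attribute names are my guesses at the target schema, matching the group names used below):

```xml
<Orders>
  <Header User="USER_1234" />
  <Order>
    <OrderHeader PONumber="PO_001" />
    <LineItems>
      <LineItem Item="Item32" Quantity="33" />
      <LineItem Item="Item63" Quantity="45" />
    </LineItems>
  </Order>
  <Order>
    <OrderHeader PONumber="PO_002" />
    <LineItems>
      <LineItem Item="Item454" Quantity="12" />
    </LineItems>
  </Order>
</Orders>
```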



Schema Root

















Default Child Delimiter: 0x0D 0x0A
Default Child Delimiter Type: Hexadecimal
Default Child Order: Postfix


The settings in the schema root define the default usage in sub-groups throughout the schema.  Here we’ve defined it to be CR LF and the Default Child Order defines that the data will precede the delimiters.  These settings can be overridden at each group level if necessary.



Orders















Child Delimiter Type: None
Child Order: Postfix


Header



























Child Delimiter: 0x09
Child Delimiter Type: Hexadecimal
Tag Identifier: HEADER
Child Order: Prefix
Min Occurs: 1
Max Occurs: 1


The header group allows us to specify a tag identifier as well as note that the elements on this line are tab (0x09) delimited.


Order





















Child Delimiter Type: None
Child Order: Postfix
Min Occurs: 1
Max Occurs: unbounded


OrderHeader





















Child Delimiter: 0x09
Child Delimiter Type: Hexadecimal
Tag Identifier: PURCHASE
Child Order: Prefix


LineItems















Child Delimiter Type: Default Child Delimiter
Child Order: Postfix


LineItems is a logical grouping of LineItem elements.  Since LineItems in the flat file are separated by a CR LF, the delimiter type is set to the Default Child Delimiter, and the delimiter appears after each child item.


LineItem



























Child Delimiter: 0x09
Child Delimiter Type: Hexadecimal
Tag Identifier: LINEITEM
Child Order: Prefix
Min Occurs: 1
Max Occurs: unbounded


Notes



  • Most of the group nodes do not have any real values in the flat file, but instead are logical groupings of real elements.  For instance, LineItems has no real representation in the flat file, but is the logical grouping of LineItem elements from the flat file.  These groups must be set up for the postfix child order.
  • Tag Identifiers are used by BTS to recognize a line, but are not imported as data.
  • With this specification, there needs to be a CR LF following the last line.

R. Addis

Schema Design Patterns: Russian Doll

I am working on a BizTalk solution which uses a schema that was hand-created using XML Spy.  The schema was created in such a way that everything was globally available, and when I brought it into the BizTalk Editor the schema looked to have 266 root nodes.  Needless to say, this spurred many conversations about schema design and the art of creating reusable types.

So this is the first of 5 blog entries talking about schema design patterns.  The last one in the series will be about creating schemas based on the patterns with the BizTalk Editor. 


What I have found is that there are four schools of thought on this subject.  These are the Russian Doll, the Salami Slice, the Venetian Blind and the Garden of Eden approach. 


The main characteristic that differentiates these approaches is whether the elements and types are globally defined.  Elements and types that are global are direct children of the <schema> node.  Elements and types are considered local when they are nested within other elements and types.  If elements or types are global then they are available for reuse by other schemas, using either import or include statements.  Locally defined elements and types are not available for reuse.
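For example, a second schema can pull in globally defined elements with an include statement; here is a minimal sketch, where the file name BookInformation.xsd and the Catalog element are illustrative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">
    <!-- Reuse global elements (e.g. Title, ISBN) declared in another schema. -->
    <xs:include schemaLocation="BookInformation.xsd" />
    <xs:element name="Catalog">
        <xs:complexType>
            <xs:sequence>
                <xs:element ref="Title" />
                <xs:element ref="ISBN" />
            </xs:sequence>
        </xs:complexType>
    </xs:element>
</xs:schema>
```

(include works here because both schemas share the same, in this case absent, target namespace; import is for crossing namespaces.)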


In the Russian Doll approach, the schema has one single global element – the root element.  All other elements and types are nested progressively deeper, giving the pattern its name, as each type fits into the one above it.  Since the elements in this design are declared locally, they will not be reusable through the import or include statements.  This does not change whether the elements are namespace-qualified or unqualified.


 


<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified" attributeFormDefault="unqualified">
    <xs:element name="BookInformation">
        <xs:complexType>
            <xs:sequence>
                <xs:element name="Title"/>
                <xs:element name="ISBN"/>
                <xs:element name="PeopleInvolved">
                    <xs:complexType>
                        <xs:sequence>
                            <xs:element name="Author"/>
                            <xs:element name="Publisher">
                                <xs:complexType>
                                    <xs:sequence>
                                        <xs:element name="CompanyName"/>
                                        <xs:element name="ContactPerson"/>
                                    </xs:sequence>
                                </xs:complexType>
                            </xs:element>
                        </xs:sequence>
                    </xs:complexType>
                </xs:element>
            </xs:sequence>
        </xs:complexType>
    </xs:element>
</xs:schema>


 


The advantages of the Russian Doll approach are that the schema is self-contained, as it has all of its parts in one place and does not interact with other schemas.  Inasmuch as it is self-contained, it is also decoupled: since the content of the schema is not visible to other schemas, changes to it are decoupled from other schema components.


The disadvantage is that it is not reusable.


This type of approach would be appropriate for use within a single application or for migration of data from legacy systems.