by community-syndication | Mar 7, 2009 | BizTalk Community Blogs via Syndication
[Source: http://geekswithblogs.net/EltonStoneman]
I’ve been working on a Guidance package which lets you use Visual Studio as a front end to generate items from T4 templates. Using a recipe wizard you choose your metadata from a Web service, database or file system source, and the template is rendered against each of the metadata objects you select. It’s on CodePlex here: Code Generation Guidance Package, and comes in two flavours. The basic Guidance package has a single recipe “Add generated items” – when you run it, you specify a metadata provider, select the items to use for metadata and the T4 template to render against the items.
The second package is a sample which extends the functionality of the main Guidance package with custom recipes. This demonstrates how you can define recipes for pre-configured generation types, so the user does not need to complete all the wizard pages – one example specifies a database connection string and a T4 template for generating stored procedure wrappers; to add a new DAL class, the user runs the recipe and just selects which stored procedure(s) they want generated.
Neither package comes with any Visual Studio solution or project templates, and they do not need to be used to create the original solution. Any existing project can make use of the code generation features by enabling the package(s) through Tools > Guidance Package Manager.
In this post I’ll give an overview of using the basic Sixeyed.Guidance.CodeGeneration package, and in future posts I’ll add some technical detail and give a walkthrough of the sample project.
Add Generated Items
Run the Add generated items recipe from the Guidance Package Manager, or from the context menu of any project in Solution Explorer. The wizard loads to collect settings – in the first page you select the metadata source type, configure the source and select the metadata provider:
During generation, the provider will load each of the selected metadata items and provide them as a property which can be accessed in the T4 template. For instance if you use the SQL Server Stored Procedure provider, your T4 script will have a populated SqlCommand object available to use.
Provided with the basic Guidance package are the following Sources and Providers:
- File System
- Database
  - SQL Server Database
    - Query Result provider – for executing a SQL query and exposing the results in a populated DataSet
    - Table provider – for using a table as metadata, exposed as an empty DataTable with the correct structure
    - Stored Procedure provider – for using stored procedures, exposed as a populated SqlCommand object
- WSDL
  - SOAP source
    - Web Method provider – for using a Web service method, exposed as a MethodInfo object
    - Schema provider – for using the WSDL of the service, exposed as an XmlSchema object
When you select a provider, it will load a list of all available metadata items, and you select which items you want to run the T4 template against:
The item selection is always a simple list displaying the item name – this will be the name of the file, table, Web method, etc. (an exception is the QueryResult provider, which uses this page to capture the SQL query to be executed).
In the final screen, select the T4 template to use:
The file dialog will default to the "Templates" folder of the Guidance package, but you can navigate to any T4 file. If you want to override the output settings from the T4 file – specifying a different target filename, namespace extension or class name – you can do so in the additional properties in this step:
Finish the wizard and the recipe will iterate over each of the selected items, populating a representation of the metadata and passing it to the T4 template property.
T4 Template Properties
In order to access metadata item information and create the output, your T4 script needs to specify two properties which will be populated using the Guidance PropertyProcessor engine:
<#@ property processor="PropertyProcessor" name="MetadataItem" #>
<#@ property processor="PropertyProcessor" name="TemplateConfiguration" #>
The MetadataItem property is an object implementing Sixeyed.CodeGeneration.Metadata.Interfaces.IMetadataItem:
The underlying Item property gives you access to the populated metadata item – the object type will depend on the provider, and will be specified in the ItemType property. The Key for the item (the name shown in the Select Items wizard page) is available, as is the metadata Provider which can be used to navigate up to the source.
The TemplateConfiguration property allows you to control the output of the template, setting default values for the target file name etc. (which can be overridden in the Select Template wizard page). It’s an object of type Sixeyed.CodeGeneration.Generation.TemplateConfiguration:
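From the members described in this post, the shape of these two types looks roughly like this – a sketch only, inferred from the description above rather than the actual Sixeyed.CodeGeneration source (IMetadataProvider and the TargetFileName property name are my assumptions):

```csharp
// Sketch of the metadata item contract, based on the members discussed above.
public interface IMetadataItem
{
    string Key { get; }                  // name shown on the Select Items wizard page
    object Item { get; }                 // the populated metadata object (DataTable, SqlCommand, ...)
    Type ItemType { get; }               // the runtime type of Item
    IMetadataProvider Provider { get; }  // navigate back up to the metadata source
}

// Sketch of the template configuration, based on the members used in this post.
public class TemplateConfiguration
{
    public string ClassName { get; set; }      // target class name
    public string FileExtension { get; set; }  // e.g. "cs"
    public string TargetFileName { get; set; } // assumed name; the engine falls back to ClassName + FileExtension
}
```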
Sample Templates
The CodeGeneration.Sample Guidance package contains a sample template for each of the metadata providers, demonstrating the type of object they expose and how it can be used. The scripts don’t do much other than demonstrate usage, but there are fully-fledged scripts for generating wrappers for SQL Server stored procedures, and configuration sections from text files. The full script for the Table provider sample makes straightforward use of the underlying DataTable object:
<#@ template language="C#" #>
<#@ property processor="PropertyProcessor" name="MetadataItem" #>
<#@ property processor="PropertyProcessor" name="TemplateConfiguration" #>
<#@ assembly name="System.dll" #>
<#@ assembly name="System.Data.dll" #>
<#@ assembly name="System.Xml.dll" #>
<#@ assembly name="Sixeyed.CodeGeneration.dll" #>
<#@ import namespace="System.Data" #>
<#@ import namespace="Sixeyed.CodeGeneration.Generation" #>
<# PrepareOutput(); #>
/*
Table name (from metadata): <#= MetadataItem.Key #>
Table name (from table): <#= dataTable.TableName #>
Columns:
<# foreach (DataColumn column in dataTable.Columns)
{
#>Column Name: <#= column.ColumnName #>
Data Type: <#= column.DataType #>
<#} #>
*/
<#+
private DataTable dataTable
{
get{ return MetadataItem.Item as DataTable; }
}
private void PrepareOutput()
{
TemplateConfiguration.ClassName = string.Format("Table_{0}", MemberName.ToPascalCase(MetadataItem.Key, true));
TemplateConfiguration.FileExtension = "cs";
}
#>
(If you don’t already have it, the T4 Editor Community Edition from Clarius makes editing templates much easier).
The directives specify the expected properties, and import the relevant reference assemblies and namespaces. The script provides the dataTable property to easily access the metadata item as a typed DataTable object. In the PrepareOutput method (called at the start of the template), the script sets up the target class name and file extension. In this case a target filename is not specified, so the engine will use the class name and file extension for the filename. Note there are some helper methods in Sixeyed.CodeGeneration.Generation.MemberName for preparing valid member names.
Selecting a single table called Manufacturers for this script generates and adds a file called Table_Manufacturers.cs to the selected project, containing the following output:
/*
Table name (from metadata): Manufacturers
Table name (from table): Manufacturers
Columns:
Column Name: ManufacturerId
Data Type: System.Int16
Column Name: ManufacturerName
Data Type: System.String
Column Name: AvailableFromDate
Data Type: System.DateTime
Column Name: AvailableToDate
Data Type: System.DateTime
*/
Usage
For ad-hoc code generation, this basic Guidance package lets you specify metadata to execute against a T4 template. For commonly used scripts and standard patterns you can build a custom Guidance package which contains your solution structure, and specific recipes for generating known types of item – for DAL classes, entities, configuration sections, proxies or anything else that is based on external metadata.
by community-syndication | Mar 6, 2009 | BizTalk Community Blogs via Syndication
I recently published a free video on Ajax-enabling your WCF services.
This screencast guides the viewer through the process of Ajax-enabling your WCF services, allowing you to easily consume them from within your Ajax client pages.
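For reference, Ajax-enabling a WCF service in .NET 3.5 typically comes down to the enableWebScript endpoint behaviour – a minimal config sketch (the service and contract names are placeholders, not taken from the video):

```xml
<system.serviceModel>
  <behaviors>
    <endpointBehaviors>
      <behavior name="AjaxBehavior">
        <!-- generates the ASP.NET AJAX client proxy and switches the endpoint to JSON -->
        <enableWebScript />
      </behavior>
    </endpointBehaviors>
  </behaviors>
  <services>
    <service name="MyNamespace.MyService">
      <endpoint address="" binding="webHttpBinding"
                behaviorConfiguration="AjaxBehavior"
                contract="MyNamespace.IMyService" />
    </service>
  </services>
</system.serviceModel>
```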
Check out our growing collection of free .NET screencasts and videos. Subscribe to the Pluralsight feed to be notified when new screencasts are published. Also, check out our growing library of online .NET training courses — see what you can learn with Pluralsight On-Demand!

by community-syndication | Mar 6, 2009 | BizTalk Community Blogs via Syndication
I recently published a new screencast on calling RESTful services with WCF.

This screencast introduces the client-side experience for using WCF to consume RESTful services. You'll see how to use the new WebChannelFactory class to create channels that know how to map method calls into traditional HTTP verbs (GET, POST, PUT, and DELETE).
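As a taster, the client-side pattern looks something like this – a sketch, with a contract and address that are illustrative rather than taken from the screencast:

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Web;

// Sketch: consuming a RESTful service with WebChannelFactory (WCF 3.5).
[ServiceContract]
public interface IItemService
{
    [OperationContract]
    [WebGet(UriTemplate = "items/{id}")]   // method call mapped to HTTP GET
    string GetItem(string id);
}

public class Client
{
    public static void Main()
    {
        var factory = new WebChannelFactory<IItemService>(new Uri("http://example.com/svc"));
        IItemService proxy = factory.CreateChannel();
        string item = proxy.GetItem("42"); // issues a GET to http://example.com/svc/items/42
        ((IDisposable)proxy).Dispose();
        factory.Close();
    }
}
```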
Check out our growing collection of free screencasts. Subscribe to the Pluralsight feed to be notified when new screencasts are published. Also, check out our growing library of online .NET training courses — see what you can learn with Pluralsight On-Demand!

by community-syndication | Mar 6, 2009 | BizTalk Community Blogs via Syndication
[Source: http://geekswithblogs.net/EltonStoneman]
BizTalk 2006 R2 ships with WCF adapters and pre-configured settings for common bindings – basicHttp and wsHttp being typically used for SOAP messaging. With a static port you can use the WCF-Custom adapter, select an existing binding and configure it further in the UI, with the full set of binding options available to you:
Here I’m using basicHttp, but I’ve configured the maxReceivedMessageSize, sendTimeout and transferMode settings to allow us to call long-running WCF services which return large responses.
With a dynamic port, you're limited in the settings you can configure, as the WCF Adapter Property Schema doesn't contain the full set of binding properties. You can set the sendTimeout and maxReceivedMessageSize on an outgoing message in code:
port(Microsoft.XLANGs.BaseTypes.TransportType) = "WCF-BasicHttp";
requestMessage(WCF.MaxReceivedMessageSize) = 104857600;
requestMessage(WCF.SendTimeout) = "00:10:00";
– but there’s no way to access less common properties like transferMode if you’re using this approach. Originally this is how we were configuring our outgoing messages, using SSO to store the values used for the binding properties (see Receiving large WCF response messages in ESB Guidance), but we soon ran into an issue with one of the BizTalk servers running out of memory when attempting to process a large WCF response message:
System.InsufficientMemoryException: Failed to allocate a managed memory buffer of 104857600 bytes. The amount of available memory may be low. ---> System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
at System.ServiceModel.Diagnostics.Utility.AllocateByteArray(Int32 size)
For basicHttp, the transferMode setting allows you to specify that messages should be streamed (one way or both ways) or buffered – the default is buffered, so although the response message will be streamed in the BizTalk stack, it is loaded completely into memory by the WCF stack. Note that WCF tries to allocate the full value specified in maxReceivedMessageSize – 100 MB – even though we set this as a practical maximum, and the actual incoming message was half that size.
To remedy it, I’ve switched to using the WCF-Custom transport, and specifying basicHttp with my additional settings in the BindingConfiguration property:
port(Microsoft.XLANGs.BaseTypes.TransportType) = "WCF-Custom";
requestMessage(WCF.BindingType) = "basicHttpBinding";
requestMessage(WCF.BindingConfiguration) = "<binding name=\"basicHttpBinding\" sendTimeout=\"00:10:00\" maxReceivedMessageSize=\"104857600\" />";
The BindingType needs to be specified in addition to the configuration. For the BindingConfiguration value, the Properties page for a static WCF-Custom port allows you to export the settings, so you can configure it in the UI and then save the XML representation, rather than coding it all by hand.
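The transferMode setting that motivated the change goes into the same XML – for example, to stream the large response on the dynamic port (the timeout and size values here are the ones used above):

```
requestMessage(WCF.BindingConfiguration) =
    "<binding name=\"basicHttpBinding\" sendTimeout=\"00:10:00\" maxReceivedMessageSize=\"104857600\" transferMode=\"StreamedResponse\" />";
```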
By shifting the BindingConfiguration value to the SSO config store, we’ll have the full range of WCF configuration available to change at run-time, so a switch to using wsHttp or netTcp bindings is just a setting change using the SSO Config Tool, and a change to the endpoint address (which we configure in UDDI in this case). Originally I wanted finer control over what could be configured, so we could limit changes to a known set of properties. But having tried this approach I prefer it – it means switching bindings and adding or changing any settings can easily be done without modifying the solution – and I’ll be recommending it as best practice.
by community-syndication | Mar 6, 2009 | BizTalk Community Blogs via Syndication
Just returned from the Summit, and had a really great time meeting colleagues and seeing interesting content. Without breaking any NDAs, I thought I’d share some thoughts on what I saw this past week.
The CSD/BizTalk MVP group remains a smart bunch of guys with a diverse set of backgrounds. I shared a hotel room with […]
by community-syndication | Mar 5, 2009 | BizTalk Community Blogs via Syndication
Originally posted by Nick Heppleston at: http://www.modhul.com/2009/03/05/using-contextwrite-to-update-a-context-property-value/
This one has been done to death, but I am writing a moniker replacement pipeline component and noticed some interesting behaviour when looking to update Context Properties.
If you need to update an existing Context Property value, there is no Update() method defined in the IBaseMessageContext interface. However it is […]
by community-syndication | Mar 5, 2009 | BizTalk Community Blogs via Syndication
I took a hiatus last month with the interview, but we’re back now. We are continuing my series of interviews with CSD thought leaders and this month we are having a little chat with Jesus Rodriguez. Jesus is a Microsoft MVP, blogger, Oracle ACE, chief architect at Tellago, and a prolific speaker. If you follow […]
by community-syndication | Mar 4, 2009 | BizTalk Community Blogs via Syndication
[Source: http://geekswithblogs.net/EltonStoneman]
Following on from my post on Using the WCF SQL Adapter in .NET: Calling Stored Procedures (see that post for download and installation instructions for the WCF LOB adapter pack), this one looks at using the adapter to execute SQL statements on database objects.
Walkthrough: Executing SQL Statements
Add Adapter Service Reference generates separate entity and client classes for each table you select, and separate request and response classes for each table operation. To generate proxies for executing SQL statements against a table, choose the operations from the Tables view of the hierarchy:
Selecting the Insert and Select operations against the BikeTypes table will generate the following class structure:
The client for connecting to SQL Server is a standard WCF client class, inheriting from ClientBase and specifying the ServiceContract as its channel – in this case the interface is TableOp_dbo_BikeTypes which has operation contracts representing the Insert and Select statements. The entity representing the database table implements IExtensibleDataObject and provides DataMember-flagged properties for each table column; an array of entities is used as the input for the Insert operation, and the return for the Select operation (wrapping the underlying use of generated Request and Response classes).
The Request and Response classes are flagged with MessageContract attributes. Message contracts are less commonly seen than DataContract and ServiceContract, but they allow you finer control over the messages sent and received by the adapter, including the ability to specify whether data is to be serialized in the header or body of the message (see Using Message Contracts). In the generated classes, MessageContract is used to specify the wrapper name and namespace, which puts the message payload within a defined element in the SOAP body. The following attribute:
[System.ServiceModel.MessageContractAttribute(WrapperName="Select", WrapperNamespace="http://schemas.microsoft.com/Sql/2008/05/TableOp/dbo/BikeTypes", IsWrapped=true)]
– generates a SOAP message for the SelectRequest class which looks like this:
<s:Envelope xmlns:a="http://www.w3.org/2005/08/addressing" xmlns:s="http://www.w3.org/2003/05/soap-envelope">
  <s:Header>
    <a:Action s:mustUnderstand="1">TableOp/Select/dbo/BikeTypes</a:Action>
    <a:MessageID>urn:uuid:9a5fe359-e988-4504-96a9-b9b721d00502</a:MessageID>
    <a:ReplyTo>
      <a:Address>http://www.w3.org/2005/08/addressing/anonymous</a:Address>
    </a:ReplyTo>
  </s:Header>
  <s:Body>
    <Select xmlns="http://schemas.microsoft.com/Sql/2008/05/TableOp/dbo/BikeTypes">
      <Columns>*</Columns>
      <Query>WHERE BikeTypeCode LIKE '%R%'</Query>
    </Select>
  </s:Body>
</s:Envelope>
Note that the SOAP action is the Node Id for the operation from Add Adapter Service Reference. The request message contains Columns and Query elements, which are used in the call to refine the size and content of the resultset. In code you specify the Columns property with "*" to return all, or a comma-separated list of column names (which should be listed in the same order as defined in the table). Query can be null, empty or contain a WHERE clause to restrict the results:
TableOp_dbo_BikeTypesClient client = new TableOp_dbo_BikeTypesClient();
client.Open();
BikeTypes[] allBikeTypes = client.Select("*", null);
BikeTypes[] bikeTypeDescriptions = client.Select("BikeTypeDescription", string.Empty);
BikeTypes[] likeRBikeTypes = client.Select("*", "WHERE BikeTypeCode LIKE '%R%'");
A populated Query property will limit the number of items returned. A populated Columns property will limit the number of populated elements in the response, so the typed object will have null values for any unmapped columns.
For the insert, you pass an array of populated entities to the call:
List<BikeTypes> bikeTypes = new List<BikeTypes>();
bikeTypes.Add(new BikeTypes());
bikeTypes[0].BikeTypeCode = "NEW";
bikeTypes[0].BikeTypeDescription = "New Type";
TableOp_dbo_BikeTypesClient client = new TableOp_dbo_BikeTypesClient();
client.Open();
client.Insert(bikeTypes.ToArray());
If you want to write identity values for tables which have an identity column, you can specify AllowIdentityInsert in the binding configuration. Otherwise, any specified columns have the value from the entity inserted; null values have NULL inserted. The return contains an array of long values containing the identity of the inserted rows – unless the table does not have an identity column, in which case the return is null.
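AllowIdentityInsert is set on the sqlBinding itself – a config sketch (the binding name is a placeholder, and the attribute casing is my assumption; check the adapter's binding property documentation):

```xml
<bindings>
  <sqlBinding>
    <!-- allowIdentityInsert lets the Insert operation write explicit identity values -->
    <binding name="SqlAdapterBinding" allowIdentityInsert="true" />
  </sqlBinding>
</bindings>
```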
Similarly the adapter can generate request and response classes for Update and Delete statements, which follow the same pattern. The generated stack has the same benefits as with the stored procedure calls – the code is very light and it uses the standard WCF stack, so if you’d rather generate or hand-craft your own connections, that’s a definite option. I’ll explore it in a later post.
by community-syndication | Mar 4, 2009 | BizTalk Community Blogs via Syndication
Originally posted by Nick Heppleston at: http://www.modhul.com/2009/03/04/ftp-adapter-context-property-oddities/
Interesting to see that the FTP adapter doesn't capture the full URI of the file in its ReceivedFileName context property – instead it simply gives us the filename:
Compare this with the FILE adapter where the full name (incl. the path) of the file is provided:
So, if you need to […]
by community-syndication | Mar 4, 2009 | BizTalk Community Blogs via Syndication
Trying to invoke a method when the PowerShell host is exiting…
How to register an event to make the final callback?
So far I've discovered that (Get-Host).Runspace | Get-Member
does show an event "StateChanged" that I can maybe use.
To be continued
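A sketch of where this might go, assuming PowerShell 2.0's eventing support (I haven't verified that the host raises the event early enough at exit for the callback to run):

```powershell
# Subscribe to the runspace's StateChanged event and run a final callback
# when the runspace starts closing.
Register-ObjectEvent -InputObject (Get-Host).Runspace `
                     -EventName StateChanged `
                     -Action {
    if ($EventArgs.RunspaceStateInfo.State -eq 'Closing') {
        # final clean-up work goes here
        Add-Content -Path C:\temp\shutdown.log -Value "Runspace closing: $(Get-Date)"
    }
}
```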