Anthony Borton Awarded 6th ALM MVP

2012 is off to a great start for QuickLearn’s lead ALM instructor, Anthony Borton. Anthony has been awarded the Microsoft® MVP Award for Visual Studio ALM for the 6th consecutive year. This award is given to exceptional technical community leaders who actively share their high-quality, real-world expertise with others.

Anthony has developed our leading ALM curriculum, which focuses on everything connected to Microsoft Team Foundation Server. He is a popular and highly regarded instructor and international presenter. If you’re eager to learn how to develop software better, please check out our upcoming TFS ALM courses starting again in February.

For those of you eagerly awaiting the upcoming BETA release of the next version of Visual Studio and TFS, keep an eye out for news of our upcoming early-adopter courses, which Anthony is putting the finishing touches on right now. Why not follow us on Facebook or LinkedIn to make sure you don’t miss our announcement?

Happy New Year to all our readers, and I hope your year has started with some great news as well.

Mapping and Auto-Mapping Objects from IDataReader

[Source: http://geekswithblogs.net/EltonStoneman]

This is one in a series of posts covering my generic mapping library on github: Sixeyed.Mapping.

1. Mapping and Auto-Mapping Objects

2. Mapping and Auto-Mapping Objects from IDataReader

3. Mapping and Auto-Mapping Objects from XML

4. Mapping and Auto-Mapping Objects from CSV

5. Comparing Sixeyed.Mapping to AutoMapper

The mapping library supports using IDataReader objects as the source. With a data reader source, the auto-map will try to populate the target by matching property names to column names. Alternatively, a static map can be defined, manually specifying the mapping between column names and properties.

Auto-Mapping

Using a populated data reader as a source, you can auto-map a target object using the DataReaderAutoMap and the same syntax as for an object source:

IDataReader reader = GetReader(id);

User user = DataReaderAutoMap<User>.CreateTarget(reader);

The data reader must be open, and the map will populate from the current row, so the reader should be read to the desired start position before mapping. For a single row, call Read() once on the reader before passing it to the Create() call.
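As a rough sketch of that preparation (the connection string, SQL and parameter below are assumptions for illustration, not part of the library), a single-row mapping might look like this:

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("SELECT * FROM [User] WHERE UserId = @id", connection))
{
    command.Parameters.AddWithValue("@id", id);
    connection.Open();
    using (IDataReader reader = command.ExecuteReader())
    {
        if (reader.Read()) //position the reader on the single row before mapping
        {
            User user = DataReaderAutoMap<User>.CreateTarget(reader);
        }
    }
}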

DataReaderAutoMap and AutoMap share the same base class, and the mapping logic is the same for all sources. Specification, caching strategy and naming strategy options still apply. To specify a property mapping, you need to provide the name of the source column:

var addressMap = new DataReaderAutoMap<Address>()
    .Specify("PostCode", t => t.PostCode.Code);

//or:

var addressMap = new DataReaderAutoMap<Address>()
    .Specify((s, t) => t.PostCode.Code = (string)s["PostCode"]);

You can also use the conversion overloads for converting the source value during population:

var map = new DataReaderAutoMap<User>()
    .Specify<DateTime, DateTime>("JoinedDate", t => t.JoinedDate, sd => FromLegacyDate(sd));
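FromLegacyDate here stands for whatever conversion you need; it is not part of the library. A minimal hypothetical implementation, assuming the legacy system stores 1900-01-01 as a null placeholder, might look like this:

private static DateTime FromLegacyDate(DateTime legacyValue)
{
    //hypothetical helper: treat the legacy null placeholder as DateTime.MinValue
    var legacyNullDate = new DateTime(1900, 1, 1);
    return legacyValue == legacyNullDate ? DateTime.MinValue : legacyValue;
}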

Static Mapping

For static maps, extend the base DataReaderMap class. As we’re dealing with string constants for the column names, static maps may be the better option in cases where the source and target names can’t be matched by convention, since they centralise the constant definitions in one place:

public class FullUserFromDataReaderMap : DataReaderMap<User>
{
    /// <summary>
    /// Default constructor, initialises mapping
    /// </summary>
    public FullUserFromDataReaderMap()
    {
        this.AutoMapUnspecifiedTargets = false;
        Specify(ColumnName.Id, t => t.Id);
        Specify(ColumnName.FirstName, t => t.FirstName);
        //etc.
    }

    private struct ColumnName
    {
        public const string Id = "UserId";
        public const string FirstName = "FirstName";
        //etc.
    }
}

Nested Maps

Maps can include nested maps, populating an object graph from a flattened representation in a single data reader:

var addressMap = new DataReaderAutoMap<Address>()
    .Specify("PostCode", t => t.PostCode.Code);

var map = new DataReaderAutoMap<User>()
    .Specify("UserId", t => t.Id)
    .Specify((s, t) => t.Address = addressMap.Create(s));

var user = map.Create(reader);

Nested maps can also populate objects from multiple readers – although this requires each reader to be associated with a separate connection:

var addressMap = new DataReaderAutoMap<Address>()
    .Specify((s, t) => t.PostCode.Code = (string)s["PostCode"]);

var map = new DataReaderAutoMap<User>()
    .Specify("UserId", t => t.Id)
    .Specify((s, t) => t.Address = addressMap.Create(addressReader));

var user = map.Create(userReader);

Limitations

Data reader maps can only operate in one direction – populating an object from a reader. They also rely on the column names being available in the IDataReader implementation through the GetOrdinal method. Not all data providers supply this, and if there are no column names available, mapping will fail for auto-maps and static maps, leaving the target object unpopulated.
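If you are unsure whether a particular provider exposes column names, a quick defensive check (not part of the library) before handing the reader to a map could look like this:

private static bool HasColumn(IDataReader reader, string columnName)
{
    for (int i = 0; i < reader.FieldCount; i++)
    {
        if (string.Equals(reader.GetName(i), columnName, StringComparison.OrdinalIgnoreCase))
        {
            return true;
        }
    }
    return false;
}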

Performance

Mapping from data readers uses the column names of the source, so only the target is reflected over. For small numbers of reads there is less of a performance impact than with object-to-object mapping, and the benefits are greater because the mapping also takes care of type conversion. Using a SQL Server CE database populated with 250,000 rows (80+ MB of data), performance was bounded by the speed of the database connection rather than the speed of mapping. Up to 1,000 rows are read and mapped in 0.4-0.6 seconds, with a small performance hit for the auto-map.

For larger reads the performance impact is more significant: at 100K or 250K reads, both DataReaderAutoMap and a static DataReaderMap take almost twice as long as manually reading and mapping the data.
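For reference, the manual baseline being compared against is the familiar hand-written read-and-map loop, something along these lines (column names assumed from the earlier examples):

var users = new List<User>();
int idOrdinal = reader.GetOrdinal("UserId");
int firstNameOrdinal = reader.GetOrdinal("FirstName");
while (reader.Read())
{
    users.Add(new User
    {
        Id = reader.GetInt32(idOrdinal),
        FirstName = reader.GetString(firstNameOrdinal)
        //etc.
    });
}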

At 10K reads, performance is still comparable between auto-mapping and manual mapping, so whether DataReaderAutoMap is a good fit needs to be judged in the context of your own data volumes, mapping complexity and any upstream caching in your solution.

Ordina BizTalk Innovation Event: Monitoring and Administration

I am organizing an event at Ordina on BizTalk innovation, with the topic “Monitoring and Administration”. This is the first in a series of “Ordina BizTalk Innovation” events that will take place at my company, Ordina. This and future events are open to customers, the community and Ordina professionals. At the event, on the 1st of February, three speakers will present on BizTalk monitoring and administration, and I will be the host.

Wouter Crooy, Senior BizTalk Consultant, will have a session on:

Custom Monitoring solutions for BizTalk, ESB Toolkit & WCF

During his talk, Wouter will present a number of custom solutions for monitoring a BizTalk solution and the ESB Toolkit. The standard BizTalk tooling will get you a long way, but these custom monitoring solutions give you more insight into your own custom BizTalk solutions.

Saravana Kumar, BizTalk MVP, CEO of BizTalk360 will talk on:

Manage your BizTalk Server environment efficiently using BizTalk360

BizTalk360 is a web-based (Silverlight RIA) application primarily designed for supporting and monitoring Microsoft BizTalk Server environments. It addresses some of the common challenges organizations face in running the day-to-day operations of a BizTalk environment. Some of the key capabilities of BizTalk360 include:

  • Fine grained authorization
  • Governance/Audit
  • Proactive Monitoring/Notification capabilities
  • Graphical Message Flow Viewer for Tracking data
  • Various dashboards (Environment, Application, BizTalk Server, SQL Server, Host etc)
  • Advanced Event Viewer
  • Integrated BAM Portal
  • Dynamic topology diagram
  • Message Box Viewer (MBV) integration
  • Knowledge base repository

There are various other features in addition to the above that make BizTalk360 a must-have application for any Microsoft BizTalk Server environment.

Lex Hegt, BizTalk Architect, will also have a session:

Lex will talk about BizTalk monitoring in general and give an overview of existing tooling in the context of BizTalk administration. He will also demonstrate the BizTalk Processing Monitor, a tool that, among other things, does (near) real-time monitoring of message flows through BizTalk systems, enabling the administrator to identify issues quickly.

You can register for the event here. Lex’s and Wouter’s talks will be in Dutch and Saravana’s talk will be in English. Also joining us during this event will be Tord Grad Nordahl, a BizTalk administration expert from Bouvet ASA (Norway).

Cheers.

Mapping and Auto-Mapping Objects

[Source: http://geekswithblogs.net/EltonStoneman]

This is the first of a series of posts covering my generic anything-to-object mapping library on github: Sixeyed.Mapping.

1. Mapping and Auto-Mapping Objects

2. Mapping and Auto-Mapping Objects from IDataReader

3. Mapping and Auto-Mapping Objects from XML

4. Mapping and Auto-Mapping Objects from CSV

5. Comparing Sixeyed.Mapping to AutoMapper

Enterprise projects typically have entities of the same kind defined multiple times to encapsulate different representations. A User domain entity may be projected into a UserModel, containing a flattened subset of the User properties for display:
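As a rough sketch (the actual definitions aren’t shown here), the hypothetical entity shapes used in the examples below might look like this:

public class User
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
    public Address Address { get; set; }
}

public class UserModel
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
    public string AddressLine1 { get; set; } //flattened from Address.Line1
}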

With layers for domain entities, data contracts, service contract requests and responses, and presentation models, you may have five definitions of a related entity, all of which are under your control and all of which will (hopefully) have consistent naming conventions. Code to manually map between entity representations feels redundant, as the source and target are so similar:

User user = GetFullUser();

UserModel model = new UserModel
{
    DateOfBirth = user.DateOfBirth,
    FirstName = user.FirstName,
    Id = user.Id,
    LastName = user.LastName
    //Intentionally leave out AddressLine1 for now
};

This is time-consuming, error-prone, and can add a huge maintenance overhead when properties are added or removed. It is neater to use a generic auto-map, which matches properties between the source and target entities and populates the target object:

User user = GetFullUser();

var model = AutoMap<User, UserModel>.CreateTarget(user);

Sixeyed.Mapping on GitHub provides functionality for auto-mapping and for creating static maps. (As an alternative, Jimmy Bogard’s AutoMapper on CodePlex is well established, but it takes a different approach. I wanted a consistent interface for auto maps and manual maps, the ability to map from different sources, and a smaller performance hit – see Comparing Sixeyed.Mapping to AutoMapper.)

Auto-Mapping

Auto-mapping is done at runtime, so when the entity definitions change there are no upstream code changes. AutoMap uses reflection, but the performance hit is relatively small and the map can be cached if it’s going to be used repeatedly. The example above is the simplest, but for cases which aren’t covered by discoverable mappings, you can specify individual property mapping actions:

var map = new AutoMap<User, UserModel>()
    .Specify((s, t) => t.AddressLine1 = s.Address.Line1) //flatten
    .Specify((s, t) => t.FirstName = s.FirstName.ToUpper()) //convert
    .Specify((s, t) => t.Address = AutoMap<Address, PartialAddress>.CreateTarget(s.Address)); //nested map

var model = map.Create(user);

Any properties not explicitly specified are auto-mapped. Mapping degrades gracefully, so any properties which can’t be mapped (either because the names can’t be matched, or the source cannot be read from, or the target cannot be written to) are not populated. (Optionally you can force exceptions to be thrown on mismatches).

AutoMap uses a naming strategy to match properties. By default this uses a simple matching algorithm, ignoring case and stripping non-alphanumeric characters. You can override the default to use exact name matching, aggressive name matching (which acts like the simple match but additionally strips vowels and double-letters), or to supply your own strategy (implementing IMatchingStrategy):

var map = new AutoMap<User, UserModel>(); //matches "IsValid" and "IS_VALID"

var exactMap = new AutoMap<User, UserModel>()
    .Matching<ExactNameMatchingStrategy>(); //matches "IsValid" and "IsValid"

var aggressiveMap = new AutoMap<User, UserModel>()
    .Matching<AggressiveNameMatchingStrategy>(); //matches "IsValid" and "ISVLD"

var customMap = new AutoMap<User, UserModel>()
    .Matching<LegacyNameMatchingStrategy>(); //custom, matches "IsValid" and "bit_ISVALID"
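To see what the default match is doing, the simple algorithm can be sketched in a few lines. This is illustrative only, not the library’s actual implementation:

//uses System.Text.StringBuilder
private static string Normalise(string name)
{
    var builder = new StringBuilder();
    foreach (char c in name)
    {
        if (char.IsLetterOrDigit(c))
        {
            builder.Append(char.ToUpperInvariant(c));
        }
    }
    return builder.ToString();
}

private static bool IsSimpleMatch(string sourceName, string targetName)
{
    //"IsValid" and "IS_VALID" both normalise to "ISVALID"
    return Normalise(sourceName) == Normalise(targetName);
}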

Internally, AutoMap uses the naming strategy to generate a list of IPropertyMapping objects which represent maps between source and target properties. By default the list is only cached for the lifetime of the map, so the performance cost of reflecting over the types is incurred every time an AutoMap is instantiated and used. The justification is that a mapping cache could grow unpredictably large, so a simple dictionary cache could end up with a large memory footprint. Equally, the performance hit is small, and .NET caches reflected types internally, so the hit is reduced for subsequent instances of the same type of map.

AutoMap does provide a caching strategy if you do want the mappings cached. You can either use the internal cache (which is a simple dictionary and will never be flushed), the standard .NET runtime cache, or provide a wrapper over your own caching layer with an ICachingStrategy implementation:

var map = new AutoMap<User, UserModel>(); //mappings not cached

var dictionaryMap = new AutoMap<User, UserModel>()
    .Cache<DictionaryCachingStrategy>(); //mappings cached in dictionary

var cachedMap = new AutoMap<User, UserModel>()
    .Cache<MemoryCacheCachingStrategy>(); //mappings cached in .NET runtime cache

Static Mapping

For complex maps, or for scenarios where you don’t want the reflection performance hit at all, you can define a static map. The interface is the same as AutoMap, except that by default all properties have to be specified – there is no auto-mapping of unspecified targets, and the naming and caching strategies are ignored.

Static object maps are derived from ClassMap, with the specifications made in the constructor (FluentNHibernate-style):

public class UserToUserModelMap : ClassMap<User, UserModel>
{
    public UserToUserModelMap()
    {
        Specify(s => s.Id, t => t.Id);
        Specify(s => s.FirstName, t => t.FirstName);
        Specify(s => s.LastName, t => t.LastName);
        Specify((s, t) => t.AddressLine1 = s.Address.Line1);
        Specify((s, t) => t.Address.PostCode = s.Address.PostCode.Code);
    }
}

There are various Specify overloads, so you can specify mappings in an action, or specify source and target with funcs as you prefer. Execute the map in the same way by calling Create or Populate to map from the source instance to a target:

User user = GetFullUser();

var map = new UserToUserModelMap();

UserModel model = map.Create(user);
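The Populate alternative works against an existing target instance rather than creating one. The exact signature may differ from this sketch, which assumes it takes the source followed by the target:

User user = GetFullUser();
var model = new UserModel(); //an existing instance, e.g. one already tracked elsewhere
new UserToUserModelMap().Populate(user, model); //argument order is an assumption; check the library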

You can mix-and-match static and auto-mapping by setting AutoMapUnspecifiedTargets, meaning that the auto-map will be used for any target properties which have not been explicitly specified:

public class UserToUserModelMap : ClassMap<User, UserModel>
{
    public UserToUserModelMap()
    {
        AutoMapUnspecifiedTargets = true;
        Specify((s, t) => t.AddressLine1 = s.Address.Line1);
        Specify((s, t) => t.Address.PostCode = s.Address.PostCode.Code);
    }
}

This also allows your static map to leverage the naming and caching strategies of AutoMap.

Nested Maps

AutoMap doesn’t traverse object graphs; it will only populate properties on the first-level object (except where you have specified a mapping for a child object). To populate full graphs you can use nested auto-maps or static maps, with one of the Specify overloads supplying a conversion which invokes the map on the target property:

Specify((s, t) => t.User = new FullUserToPartialUserMap().Create(s.User));

//or:

Specify(s => s.User, t => t.User, c => new FullUserToPartialUserMap().Create(c));

//or:

Specify((s, t) => t.User = AutoMap<User, UserModel>.CreateTarget(s.User));

Performance

As always, the generic solution has a performance implication, although the mapping has been through a couple of rounds of optimisation to minimise the overhead. The highest-value AutoMap approach, which removes as much code and maintenance overhead as possible, has the highest impact. Populating 250,000 objects, the static AutoMap<>.CreateTarget() method takes 13 seconds, compared to 5 seconds for manually populating the targets. Caching the map reduces the time to 8 seconds, and generating the map once and reusing it reduces it again to 7 seconds. Using a static map takes 6 seconds.
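The reuse case from those timings is simply a matter of holding on to one map instance instead of calling the static CreateTarget per object. GetAllUsers below is assumed for illustration:

var map = new AutoMap<User, UserModel>();
var models = new List<UserModel>();
foreach (User user in GetAllUsers()) //hypothetical source of the 250,000 users
{
    models.Add(map.Create(user)); //reuses the mappings built for this map instance
}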

In a more representative sample, mapping a single object, the disparity is not so pronounced. Manual and AutoMap versions take approximately the same time; in different test runs, one will be quicker than the other. The static map is consistently faster than manually populating the target object (what? Yes, possibly due to the hard-core reflection optimisation technique from Jon Skeet).
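For reference, this is the kind of optimisation being referred to (a sketch, not the library’s code): the property setter is looked up once via reflection and converted into a typed delegate, so each subsequent assignment avoids the per-call cost of PropertyInfo.SetValue:

using System;
using System.Reflection;

public static class SetterFactory
{
    public static Action<TTarget, TValue> CreateSetter<TTarget, TValue>(string propertyName)
        where TTarget : class
    {
        PropertyInfo property = typeof(TTarget).GetProperty(propertyName);
        MethodInfo setMethod = property.GetSetMethod();
        //open instance delegate: bound to the setter method, the target is supplied per call
        return (Action<TTarget, TValue>)Delegate.CreateDelegate(
            typeof(Action<TTarget, TValue>), setMethod);
    }
}

//usage: create once, cache, then call like any delegate
//var setFirstName = SetterFactory.CreateSetter<UserModel, string>("FirstName");
//setFirstName(model, "Jane");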

Up to 1,000 objects, the performance hit in using the AutoMap is negligible.

Above 1,000 objects the cost is more pronounced.

Note that the effort in mapping is computational, not memory-bound, so in a higher-spec system the differences will be smaller.

In a production system, adding 0.0x seconds to a request involving a database lookup or a service call is likely to be acceptable, especially if the map is used for a single object, or the map can be reused – in which case the overhead will be 0.00x seconds. Likewise, if you’re populating a single model for a view, it’s likely to be justifiable for the reduction in the solution’s technical debt.

In other scenarios, the computation of the AutoMap may be an unacceptable performance hit, in which case a static map at least isolates the mapping logic and provides some of the benefits at a lower performance cost.

BizTalk Server MVP 2012

So another year starts with great news from Microsoft. I would like to thank Microsoft, my fellow MVPs, my MVP Lead Ruari Plint, the community members and my wife. Thank you all for your support; without you this would not have been possible. I hope this year brings me the same success as the previous years. […]
Blog Post by: Abdul Rafay

Why BizTalk Server 2010 R2 should be BizTalk Server 2013

That BizTalk Server 2010 will have a successor, and that the working title of that release is BizTalk Server 2010 R2, was announced on the BizTalk Server team blog in December 2011 and re-announced by folks like Kent Weare, Charles Young, Saravana Kumar and Steef-Jan Wiggers, among others.

What some of them hint at but don’t discuss further is that the version name “2010 R2” might not stick. There are good grounds for such guesses: historically, BizTalk Server 2009 was initially called BizTalk Server 2006 R3 before the renaming was announced, and BizTalk Server 2010 was first announced as BizTalk Server 2009 R2 before it too was renamed.

One might argue that R2 is a fitting suffix for this release, since it is a minor release without an abundance of new functionality. That’s true.

One might argue that there is only so much in a name; the important thing is that Microsoft is showing that it will continue to maintain and carefully evolve the product. Not so, I say.

There is one very important thing that goes into that name which should not be overlooked or underestimated: the support lifecycle. Microsoft’s support lifecycle policy says that products get 5+5 years of support (mainstream + extended). However, that applies to major versions. An excerpt:

“Minor releases follow the same Support Lifecycle as the major product release.

An example of this is Windows Server 2003 R2 which has the same Mainstream Support phase and Extended Support phase dates as the parent product, Windows Server 2003. Likewise, Windows Server 2008 R2 follows the same Support Lifecycle dates as the initial release of Windows Server 2008”

If the product ships as BizTalk Server 2010 R2 (and assuming it follows the general rule), it will not get a new, extended support lifecycle end date. Beyond what is stated in the general policy, you can see examples of this scheme throughout the product range. BizTalk Server 2006 and 2006 R2 are the closest examples, but Windows Server 2008 and 2008 R2, and SQL Server 2008 and 2008 R2, follow suit as well. In all these cases the R2 version does not start a new 5+5 year period. Whereas with BizTalk Server 2006 R2 to BizTalk Server 2009, and again to BizTalk Server 2010, we got new support lifecycle dates.

One of the major asks around BizTalk Server before the announcement was for Microsoft to clearly show its continued support for BizTalk Server as a product – they did that, but here’s hoping they will strengthen that statement further by giving us a BizTalk Server 2013 (or 2012).

HTH
/Johan

Blog Post by: Johan Hedberg

The rogue agent that brought BizTalk to its knees

To help others who might find themselves in a similar situation, I am posting this odd experience we had with a BizTalk environment during the fall of 2011.

We had a pretty standard setup with good hardware behind it all the way, configured according to best practices. We were using the BizTalk Benchmark Wizard (BBW) to benchmark our environment and were coming up short at around 70 msg/s.

We should have been seeing values of around 900 msg/s. Overall, from scrutinizing the performance logs using Performance Analysis of Logs (PAL), as well as our own best judgement, we at first couldn't find anything alarming: processor, memory, disk, network and so on all looked good. We also ran the BizTalk Best Practice Analyzer (BizTalk BPA), the MessageBoxViewer tool (MBV) and the Monitor BizTalk Server SQL Server Agent job, but it all came back looking good. The environment just seemed… slow.

As it turns out, the processor counters were especially interesting given what proved to be the final finding. Average processor utilization (two processors per server, each with 6 cores) was very low, but one process was consuming the equivalent of one full core (its Process % Processor Time counter was at 100). Since it didn't stay on one core, the problem was hard to spot. PAL doesn't have an alert for this, and finding the one process and performance counter among all of them is not easy.
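A rough sketch (not from the original post) of scanning the per-process counters for exactly this situation, where a value close to 100 means a process is consuming roughly one full core:

using System;
using System.Diagnostics;
using System.Threading;

class RogueProcessScan
{
    static void Main()
    {
        var category = new PerformanceCounterCategory("Process");
        foreach (string instance in category.GetInstanceNames())
        {
            using (var counter = new PerformanceCounter("Process", "% Processor Time", instance))
            {
                counter.NextValue(); //the first sample is always zero
                Thread.Sleep(500); //sampling interval
                float value = counter.NextValue();
                if (value > 90) //roughly one full core (can exceed 100 on multi-core boxes)
                {
                    Console.WriteLine("{0}: {1:F0}%", instance, value);
                }
            }
        }
    }
}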

The process was the "HP Insight Server Agents" (cqmgserv.exe). The theory goes that as it was failing, recovering and retrying, it was pumping the machine full of events and clogging up the underlying bus.

The closest we got to a match in the form of a support document from HP was this. Once the service was disabled, the tests ran as expected at around 1,000 msg/s. Later the service was updated to a newer version and started again without causing the same issues.


The purpose of this post is not to lay the blame at HP's door, but to make readers aware that similar situations can occur and to highlight the value of a tool like BBW: without it, this problem would likely never have been caught and the server would have gone into production delivering much less value on the investment than it should.

HTH
/Johan

Blog Post by: Johan Hedberg