I've been doing a little work with the Twitterize library and the Twitter API.
I just finished reading a new book on Microsoft BizTalk 2010, written by BizTalk colleagues Kent Weare, Richard Seroter, Thiago Almeida, Sergei Moukhnitski, and Carl Darski. There are many good books that explore core BizTalk features for beginner, intermediate, and advanced developers. This one stands out because it covers a relatively under-documented aspect of BizTalk development: integrating with different line-of-business applications. The subject is vast and diverse, so the attempt to fit it into one book is a challenge in itself.
The book opens with a chapter on the WCF LOB Adapter SDK, which is a good introduction to the world of WCF-based adapters. It explains the high-level design and the idea behind LOB adapters and WCF bindings. The chapter contains a simple application example that uses a custom WCF adapter. While working through this exercise, readers will get acquainted with adapter metadata, binding configuration, endpoint URIs, SOAP action mapping, and other fundamental concepts of WCF-enabled adapters.
The next chapter explores probably the most frequently used and best-known of the WCF adapters: the WCF-SQL adapter. All features of this adapter, including more advanced ones like typed polling, notification, and debatching, are covered in detail with examples. This makes the chapter a great practical WCF-SQL handbook for everyone from beginner to advanced user, and an excellent addition to the MSDN documentation.
Then follow a few chapters, each dedicated to integration with one specific LOB application: Microsoft Dynamics CRM, SAP (via the WCF SAP adapter), SharePoint, Dynamics AX, and Salesforce. Every chapter gives an overview of the line-of-business application and its role in business processes, and then goes into a detailed integration example with explanations, sample code, and screenshots.
The chapter on integration with Windows Azure Platform AppFabric deserves separate consideration. This is an exciting new feature of BizTalk 2010 that extends its capabilities into cloud-based service bus solutions. This chapter alone makes the book worth having.
Overall, the authors and editors have done a great job covering such a wide and important area of BizTalk functionality. Sure, it's impossible to go into very deep detail and explore all the intricacies of every LOB application in just one book. Yet this one is a very good starting point for any consultant in the field who has to deal with disparate systems integration on a daily basis.
Microsoft has today put Windows Azure Service Bus EAI and EDI Labs on its Windows Azure Platform. These labs provide integration capabilities for the Windows Azure Platform: they extend on-premises applications to the cloud, provide rich messaging endpoints in the cloud to process and transform messages, and help organizations integrate with disparate applications, both in the cloud and on-premises. In other words, Service Bus EAI and EDI Labs provide common integration capabilities (e.g. bridges, transforms, B2B messaging) on the Windows Azure Service Bus.
Below you will find a list of resources (taken from the Windows Azure Service Bus EAI and EDI Labs – December 2011 Release page):
Supplies details about what is required to properly install and run Service Bus EAI and EDI Labs.
Start learning the basics of developing Service Bus EAI and EDI Labs solutions using these short tutorials.
Learn how Service Bus EAI and EDI Labs enable business-to-business messaging on Windows Azure.
Learn about the basic concepts of rich messaging endpoints and how to use them in Service Bus EAI and EDI Labs.
Learn how to use and configure transforms with rich messaging endpoints.
Learn how to use Service Bus Connect in an EAI application to extend the reach of cloud-based applications to on-premises LOB applications.
Download the samples available for Service Bus EAI and EDI Labs.
This is great news for us BizTalk professionals, and for anyone interested in integration capabilities in the cloud.
I can see the labs through my management portal:
It’s always exciting when a new application you’ve worked on goes live. The last couple of weeks have seen the ‘soft’ launch of a new service offered by the UK government called ‘Tell Us Once’ (TUO). You can probably guess from the name what the service does. Currently, the service allows UK citizens to inform the government (as opposed to Register Officers, who must still be notified separately) just once of two types of ‘change-of-circumstance’ event; namely births and deaths. You can go, say, to your local authority contact centre, where an officer will work through a set of screens with you, collecting the information you wish to provide. Then, once the Submit button is clicked, that’s it! With your consent, the correct data sets are parcelled up and distributed to wherever they need to go – central and local government departments, public sector agencies such as the DVLA, Identity and Passport Service, etc. No need to write 50 letters!
With my colleagues at
, I’m really proud to have been involved with the team that designed and developed this new service. For the past few years, we worked originally on the prototypes and pilots (there was more than one!). Over the last eighteen months or so, we have been engaged in building the national system, and development work is ongoing. It’s been a journey! The idea is very simple, but as you can imagine, the realisation of that idea is rather more complex. Look for future enhancements to today’s service, with the ability to report events on-line from the comfort of your own home and the possible extension of the system to cover additional event types in future.
Interaction with government has just got a whole lot better for UK citizens, and we helped make that happen. It’s a pity that I don’t intend to have any more children (four is enough!), and I really hope I don’t have to report a death in the near future, but if I do, I’ll be beating a path to the door of my local council’s contact centre in order to ‘tell them once’.
I’ve been looking into a solution recently which has a series of remote nodes publishing events to a central node. The publishers have no logic, they just need to contact the central node and record that an event has happened. Off the back of that, the central node does some work inferring what the event means, and computing its relationship to a bunch of other events. Simple enough, but the solution needs to be scalable and make efficient use of its resources.
I started off building it around NServiceBus, but when I had a better understanding of the components and the physical architecture, decided on a different approach. The central node will most likely be an EC2 cluster; using NServiceBus, cluster nodes can communicate with each other’s queues, but the remote nodes would need a bridging mechanism to talk to the central node, and I’d rather have a consistent communication model. So I started rebuilding the service layer in WCF, but I wanted flexibility on how the central cluster nodes distributed work amongst themselves.
Briefly, I wanted a nice solution for this scenario: remote nodes will always talk to the central cluster over HTTP with a REST service; central nodes will distribute load amongst themselves as efficiently as they can. In a low-load situation, the node receiving the originating message should do all the work in the workflow. In a high-load situation, the node receiving the originating message should farm out work to the other nodes. EC2 will let you set up a nice load balancer, but the option to balance load between nodes OR within a node needs a bit more thought – i.e. when a cluster node receives an event published message, it can call the next service in the workflow either by remotely sending a WCF service request to the load balanced URL, or by executing the code locally.
This is actually very simple, and I’ve published a working example on my github code gallery: DistributedServiceSample. In the sample, all service calls in the central node are run through a generic service invoker:
Invoker.Invoke<IComputeService>(svc => svc.Compute(jobId)); // "Invoker.Invoke" is an assumed name; only the generic call survives in the original post
The invoker either builds a WCF client proxy, or instantiates the service implementation locally, and makes the call. In the sample the logic for deciding whether to go remote or local is done in config, but this could be worked into something more complex based on current capacity etc. The sample also allows the invoker to decide whether to make a synchronous call, or farm the work out to an async task (again, through config).
Dynamically building a WCF client proxy is very simple: there are no dependencies outside .NET, and you can leave all the endpoint configuration to the normal <system.serviceModel> config section. When the central node is in remote mode, it gets the proxy like this:
var factory = new ChannelFactory<TService>("*");
service = factory.CreateChannel();
(The asterisk tells WCF to pull the client endpoint and binding from config based on the contract name; it assumes there is only one client entry per contract.)
When in local mode, it’s a little bit more involved to do it dynamically. In the sample I have a marker interface (IService) to denote a service contract. In the service application startup I register all service implementations in an IoC container (a wrapper around Unity), and then the central node gets the service like this:
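The snippet for this step didn't survive into this post, so here is a minimal sketch of what obtaining the service in either mode might look like; the `ServiceFactory` and `Container` names are my own assumptions for illustration, not the sample's actual identifiers:

```csharp
using System.ServiceModel;
using Microsoft.Practices.Unity; // the sample's IoC wrapper is built on Unity

// Sketch of the invoker's two ways of obtaining a service.
// "ServiceFactory" and "Container" are assumed names.
public static class ServiceFactory
{
    // Populated at application startup, when every IService
    // implementation is registered against its contract interface.
    public static IUnityContainer Container { get; set; }

    public static TService GetService<TService>(bool local)
        where TService : class
    {
        if (local)
        {
            // Local mode: resolve the implementation in-process.
            return Container.Resolve<TService>();
        }

        // Remote mode: "*" tells WCF to pull the single client endpoint
        // for this contract from the <system.serviceModel> config section.
        return new ChannelFactory<TService>("*").CreateChannel();
    }
}
```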
The actual method call on the service is done through functions or actions (depending on whether the service returns a response), so it’s all typesafe.
Running the sample
Build the sample locally, and open http://localhost/DistributedServiceSample.Services/JobService.svc in WCFStorm or soapUI. Call CreateJob with whatever parameter you like, and you will see output similar to this in DebugView:
JobService.CreateJob called with Name: fhwfiy
Service.GetExecutionMode using *Synchronous* for service: DistributedServiceSample.Contracts.Services.IJobService
Service.GetServiceLocation using *Remote* for service: DistributedServiceSample.Contracts.Services.IJobService
JobService.SaveJob called with jobId: fhwfiy
Service.GetExecutionMode using *AsynchronousIgnoreResponse* for service: DistributedServiceSample.Contracts.Services.IComputeService
Service.GetServiceLocation using *Local* for service: DistributedServiceSample.Contracts.Services.IComputeService
ComputeService.Compute called with jobId: 621849676
The workflow is that CreateJob triggers a SaveJob call, which in turn triggers a Compute call. The service decides whether each downstream call will be made locally or remotely, synchronously or asynchronously based on the contents of the <distributedservicesample.invoker> section in Web.config.
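Judging from the DebugView output above, that config section maps each contract to a location and an execution mode; a hypothetical sketch of its shape (the actual element and attribute names may differ, so check the sample on GitHub):

```xml
<!-- Hypothetical shape; see the DistributedServiceSample source for the exact schema -->
<distributedservicesample.invoker>
  <services>
    <add contract="DistributedServiceSample.Contracts.Services.IJobService"
         location="Remote" executionMode="Synchronous" />
    <add contract="DistributedServiceSample.Contracts.Services.IComputeService"
         location="Local" executionMode="AsynchronousIgnoreResponse" />
  </services>
</distributedservicesample.invoker>
```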
The obvious extension is to add an operation name to the config settings, so different operations within the same service can be executed in different ways, which is pretty straightforward. More complex is the idea of dynamically deciding whether to make a local or a remote call. The logic for the decision is all isolated, so it would be a case of swapping out the config stuff with some environment checks, so that calls were made locally unless CPU, private bytes, or current connections were above a threshold.
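An environment check like that could be sketched as follows, using a standard .NET performance counter for CPU; the type names and the 80% threshold are made up for illustration:

```csharp
using System.Diagnostics;

public enum ServiceLocation { Local, Remote }

// Hypothetical replacement for the config-based lookup: keep work
// in-process unless this node is already busy.
public static class ServiceLocationPolicy
{
    private static readonly PerformanceCounter Cpu =
        new PerformanceCounter("Processor", "% Processor Time", "_Total");

    public static ServiceLocation GetServiceLocation()
    {
        // Note: the first NextValue() call on a fresh counter returns 0,
        // so real code should keep the counter warm or sample twice.
        float cpuPercent = Cpu.NextValue();

        // Above the threshold, farm the call out to the load-balanced
        // URL; otherwise execute the service implementation locally.
        return cpuPercent > 80f ? ServiceLocation.Remote
                                : ServiceLocation.Local;
    }
}
```

Similar checks against the "Process \ Private Bytes" or connection counters could be combined into the same policy.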
The WCF-SQL adapter provides support for multiple inserts through the Consume Adapter Service feature:
However, sometimes you might want to validate the data on the SQL side before making the insert; for instance, if you have a collection of Customers where some of them might already exist in the database and should only be updated. In that case, you'd have to first make a database lookup to determine the state of the Customer, and then make either an insert or an update.
Here, user-defined table types might be your solution. User-defined table types are similar to ordinary tables, but can be passed in as parameters.
In my sample, I have a Contacts table, and I’m receiving a collection of Persons where some entities are new and some are to be updated.
The user-defined table type will serve as our contract:
CREATE TYPE [dbo].[InsertContactRequest] AS TABLE
(
    [PersonNo]  [varchar](50) NOT NULL,
    [FirstName] [varchar](50) NOT NULL,
    [LastName]  [varchar](50) NOT NULL,
    [Phone]     [varchar](50) NOT NULL,
    PRIMARY KEY CLUSTERED ([PersonNo] ASC) WITH (IGNORE_DUP_KEY = OFF)
)
The stored procedure takes the user-defined table type as a parameter (@insertContactRequest), then updates all existing rows and inserts all new ones.
CREATE PROCEDURE [dbo].[sp_InsertContacts]
    @insertContactRequest InsertContactRequest READONLY
AS
BEGIN
    UPDATE dbo.Contacts
    SET Phone = r.Phone
    FROM dbo.Contacts c
    JOIN @insertContactRequest r ON r.PersonNo = c.PersonNo

    INSERT INTO dbo.Contacts (PersonNo, FirstName, LastName, Phone)
    SELECT r.PersonNo, r.FirstName, r.LastName, r.Phone
    FROM @insertContactRequest r
    WHERE r.PersonNo NOT IN (SELECT PersonNo FROM dbo.Contacts)
END
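As an aside (not from the original post): on SQL Server 2008 and later, the same update-or-insert logic can also be expressed as a single MERGE statement against the table type, which keeps both branches in one statement:

```sql
-- Alternative sketch using MERGE (SQL Server 2008+), same contract
CREATE PROCEDURE [dbo].[sp_MergeContacts]
    @insertContactRequest InsertContactRequest READONLY
AS
BEGIN
    MERGE dbo.Contacts AS c
    USING @insertContactRequest AS r
        ON c.PersonNo = r.PersonNo
    WHEN MATCHED THEN
        UPDATE SET Phone = r.Phone
    WHEN NOT MATCHED THEN
        INSERT (PersonNo, FirstName, LastName, Phone)
        VALUES (r.PersonNo, r.FirstName, r.LastName, r.Phone);
END
```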
1. In Visual Studio, right-click the BizTalk project and select Add->Add Generated Items. Select Consume Adapter Service.
2. In the Consume Adapter Service dialog, click the Configure button to set the credentials. Click OK, and then Connect.
3. In the tree-view, select Strongly Typed Procedures, and select your stored procedure in the right pane. Click Add and Ok to generate the schemas.
4. Make your transformation, and complete your solution.
(Kudos Daniel Östberg)
Blog Post by: wmmihaa
”There is no Hello World in BizTalk”
– Dan Rosanova (author of the book)
I really want to like this book. Right off the bat, the title is awesome with a capital A. Right up my alley, as it were. For a long time there has been a lack of literature about the more practical details around patterns and anti-patterns and how they are implemented in BizTalk. I am not saying there has been nothing about patterns (there are tons of them), but nothing really about how to implement them in a BizTalk solution, and how to do that in practice. Enter this book.
Oh, how I would like to like this book. I am, however, somewhat disappointed, but you should definitely buy it.
There are chapters and headings that underline how very insightful the author is about BizTalk and its implementation. Headers that are very helpful in summarizing all the disparate definitions that you might already have about BizTalk. One particular header is “When to use BizTalk”, which does the best job ever to answer that question. Another is “Visual Studio solution structure” that explains in very practical and well-grounded points how the author feels the solution has to be structured. In my opinion; if you do not already have a solution structure document, use this one.
The text oozes with practical knowledge, and a lot of humor as well. There is no doubt in my mind whatsoever that the author not only knows what he is talking about, but also that quite a lot has been left out of the book in order to shorten the text. He does a very good job of taking a practical approach to development and architecture. He makes use of the same example solution throughout the book. This makes you feel for the solution as you might for a real-life solution: you started it and have seen it grow from a mere file-copy solution to being a core process handler in the enterprise. Another good point of doing this is that he shows the importance of getting the architecture right from the start, of really thinking ahead and figuring out what might come next in order not to "paint yourself into a corner". I think there is too little of this in the book. I would have liked more of a discussion about what might be the best solution within the current context. As all of you know, the problem usually is not to find a solution, but to find the best one within a certain context. That is the beauty of BizTalk, in my opinion.
Also, for once I would like to read a BizTalk book that assumes I know BizTalk, can tell the difference between a pipeline and a map, and know how to use custom XSLT. This book does not, and spends a few too many pages on the basics. I think this is due to the editor or publisher; they think you need that in order to make the book complete. I would say you do not. Anyone who would pick up this book feels confident about the basics. The title says so.
The author takes a very practical approach, and together with the supplied code you can easily follow along and learn hands-on as well. Once again I would like to point out that some parts might focus too much on the practical, but some people really like that, so it is just an opinion.
Then there is another thing about the book as a whole. Because the author wants you to see the solution grow as it might in a real-life scenario, the disposition of the book is partly off the wall. The same chapter covers unit testing and BAM, and another mixes configuring WCF receive locations with the BRE.
So, bottom line: Should you buy this book? Of course you should! The technical aspect of it, together with the experience of the author, is an opportunity you should not miss. Another point I would like to reiterate is that it fills a gap in the integration literature, and I would especially recommend it to people who have worked with BizTalk for two years and changed projects during that time. It might be time to move ahead and have an option during the next ICC meeting.
Though I would like to discuss certain parts about solution structure with the author, there are a lot of tips and tricks I will use in the future.
Blog Post by: Mikael Sand
As you may know, there is an MCTS exam called 70-595 for those of us who work with BizTalk. The formal name is “Developing Business Process and Integration Solutions by Using Microsoft BizTalk Server 2010”. Many seem to think that passing this test is hard, but if you do pass, it gives you a sense of pride and accomplishment. You will also have received a sign of approval from Microsoft about your knowledge and abilities. If you are lucky, and work for the right employer, you might even receive a bonus for passing.
How do I book a test date?
Firstly you need to find a Prometric-licensed test center. Microsoft works in collaboration with Prometric worldwide and they are responsible for making sure that everyone gets to take the test under the same conditions. They, in turn, certify locations (or test centers) all over the world. You need to find one of those. If I were you I would check the local education center that your company usually uses.
If you live in Sweden I can recommend AddSkills. You can book a date right from their site, and even send the invoice directly to your employer. The cost for taking the test varies, but at AddSkills the current rate is 1 950 SEK for the Gothenburg and Stockholm test centers. Others might be more expensive.
Is the test hard?
Yes, it is hard. The point of the test is to show that you are better than a beginner and that you also have some working experience with the product. I have heard different rumors about different Microsoft exams: some seem to be easy-peasy, and others are hard as nails. That may be, but I can tell you that the BizTalk exam is in the latter category rather than the former.
What is the process of the exam itself?
There used to be practice tests available that closely mimicked the actual test’s process. I have not been able to find any for BizTalk 2010, but I will assume that, since they were once available, it is OK for me to write a bit about the process.
Most importantly: No cheating! If you are caught cheating, you will be banned from taking further tests and the community will shun you. You will be forced to live out in the forest with only the birds and the voices in your head to keep you company. So please don’t cheat.
You will be placed in front of a computer configured to conform to some sort of minimum standard. You are only allowed to use the tools that the computer supplies and also a scratch-pad, provided by the test center. You are not allowed to bring anything but yourself into the test room. To further reduce the risk of cheating the testing room will be monitored via CCTV.
The test is entirely based on multiple choice. Some of the questions have answers in the form of RadioButtons (one correct answer), and others CheckBoxes (more than one might be correct).
There is a “mark for review” system. This means that you can mark a question for review if you feel you might want to rethink it later. After you have answered all the questions, you have an opportunity to view them all again. The ones you have marked for review are very clearly indicated.
The total time for the test is two and a half hours, which is ample time in my view. If you are worried that you might not be able to finish on time, you can use the review system to mark questions you are not so sure about, or which you feel might take a long time to answer. Then run through the test once and answer all the “easy” ones. This way you know that you will at least get those.
My method has been to take one question at a time and really focus on just that question. Read it, analyze it, and try to get to the bare bones of the question. Give it the answer you feel is the best one and, if you are unsure, mark it for review. Then drop that question and focus hard on the next one. I run through the test in this manner until there are no more questions. By then, I am ready for a short break. After the break I review all the questions marked for review, this time focusing even harder on the text and the different answers, as there might be pitfalls. Personally I don’t believe in reviewing all the questions; it’s just a waste of time.
When you are done, you submit the test and after 10 agonizing seconds you will receive your score: pass or fail. If you pass, don’t forget to pick up your test result sheet from the test center. This paper will tell you your score and how well you performed in the different categories (see below).
In case of success, don’t forget to celebrate and treat yourself to something as a reward for all that hard work.
What do I need to study?
That actually has an official answer from Microsoft located here. I heartily recommend reading that article thoroughly and several times. Note that they have used the wording “including but not limited to”, so even if the list of things to know might be long, it might not be complete.
Here it is, together with the relative percentage of that subject compared to the whole exam.
Something worth mentioning is that there is a lot of focus on web services and WCF. It is about as important as developing BizTalk artifacts, which is pretty darn “core BizTalk”.
Something else worth noting is the section called “Implementing extended capabilities”. It is worth 13% but includes RFID, EDI, BRE, and BAM. It is safe to say that you might not have to know all the ins and outs of BAM or RFID to pass, but to dismiss them altogether is naïve.
The best advice I can give you about what to focus on when studying for the exam is this:
How do I study for the exam?
There is really no simple solution for this as there is no self-paced learning kit or anything like that. There might be a book on this subject in the making, according to rumors. I have no idea about release date or anything though.
I would also recommend the book Microsoft BizTalk Server 2010 Unleashed, partly written by a colleague of mine, Jan Eliasen. You should, of course, read the book from cover to cover, but if you are more target-oriented you can obviously skip the more extensive parts about the BRE, everything about Windows Azure, and the short part about the ESB Toolkit.
Also read the blog post about the exam written by fellow blogical-blogger and MCT Johan Hedberg.
I passed my test. Will you prove yourself? Just kidding! Have fun studying; you will learn a lot of useful things about our favorite product.
Also, please provide feedback if you disagree or feel I have said too much about anything. I am only trying to help, not violate the NDA.
Blog Post by: Mikael Sand
Last week, Tellago Studios’ technology evangelist Uri Katsir delivered a super interesting webinar about managing BizTalk Server from your mobile device using Moesion. I am still receiving emails about some of the capabilities available in the current…
Blog Post by: gsusx
I was sending an email dynamically, from an orchestration.
I set up a nice payload message with nicely distinguished fields containing the to/from, subject, and body of the email, so that other parts of my system could send an email just by generating this message.
My message assignment shape had:
EmailSendMessage = EmailReceiveMessage;
EmailSendMessage(SMTP.EmailBodyText) = EmailReceiveMessage.Body;
EmailSendMessage(SMTP.CC) = EmailReceiveMessage.CopyTo;
EmailSendMessage(SMTP.From) = EmailReceiveMessage.From;
EmailSendMessage(SMTP.Subject) = EmailReceiveMessage.Subject;
EmailSendMessage(SMTP.MessagePartsAttachments) = 0;
EmailSendPort(Microsoft.XLANGs.BaseTypes.Address) = “mailto:” + EmailReceiveMessage.SendTo;
The content of the constructed message didn’t matter, only its properties did, so I made the send message the same type as the EmailReceiveMessage. It compiled, and I deployed it.
Event Type: Error
Event Source: BizTalk Server 2009
Event Category: (1)
Event ID: 5754
Time: 1:16:31 PM
A message sent to adapter “SMTP” on send port “XXX.Email.Orchestrations_126.96.36.199_XXX.Email.Orchestrations.SendEmail_EmailSendPort_43e93d0db20c465a” with URI “mailto:firstname.lastname@example.org” is suspended.
Error details: Unknown Error Description
For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
I then checked, and found that you need to set the body text charset:
EmailSendMessage(SMTP.EmailBodyTextCharset) = “UTF-8”;
Second try, same error.
Several modifications later, I still got the message: Unknown Error Description.
Which I must say is not a great deal of use when you are trying to figure out what’s wrong.
I then decided I’d make a different message type for sending, and use a transform. I copied and pasted the schema from my send message, changing the target namespace of course.
I was annoyed, and tried a bunch of things, then EUREKA!
The send schema which I copied and pasted had the same properties promoted as distinguished fields. This, it seems, was BAD: the instant I removed the promotions on these properties, changing nothing else, everything worked.
BAD BAD BAD… schema properties, who would have thought
DO NOT HAVE PROMOTED PROPERTIES ON YOUR SCHEMA WHEN SENDING TO THE SMTP ADAPTER.