by community-syndication | May 3, 2008 | BizTalk Community Blogs via Syndication
I’ll be doing a webcast next Friday on Microsoft’s ESB Guidance. It will be recorded if you can’t make it live. This will be the first in a series, but I am not going to commit to future dates yet, as I have a ton of things going on and am looking at 80%+ travel for an extended period. I will, however, be doing more webcasts. Future topics will be more advanced, including themes such as a deeper drill-down into SOA governance, extending the ESB Guidance, etc.
Details for this first webcast are:
Title
Introduction to Microsoft’s ESB Guidance
Description
The Microsoft ESB Guidance uses Microsoft BizTalk Server 2006 R2 to support a loosely coupled messaging architecture, extending BizTalk Server with a range of new capabilities for building robust, connected, service-oriented applications. These capabilities include itinerary-based service invocation for lightweight service composition, dynamic resolution of endpoints and maps, Web service and WS-* integration, fault management and reporting, and integration with third-party SOA governance solutions.
This webcast will be an introduction to Microsoft’s ESB Guidance. The goal of this session is to explore the architecture of Microsoft’s ESB Guidance 1.0, and explain how it can be applied to create ESB-based business solutions. We will examine the core components of the ESB Guidance, as well as the built-in extensibility points.
Agenda
· Business drivers behind an ESB (Why do I care? Why do I want to do this?)
· Architectural overview (What’s in the box? How does it work?)
· Demos
Meeting Details
Friday May 9, 2008, 1:00-2:15pm Pacific GMT -8 (75 minutes)
Call for audio: 866-500-6738 or +1-203-480-8000, passcode: 221223#
Live Meeting: https://www.livemeeting.com/cc/microsoft/join?id=BTSBAG&role=attend&pw=35DKTQ
Technorati Tags: BizTalk,ESB Guidance,ESB,SOA
by community-syndication | May 2, 2008 | BizTalk Community Blogs via Syndication
I’m late blogging about this (and others!) as I’ve been traveling (more on what I’ve been doing in a future post). Earlier this week, Neuron-ESB 2.0 shipped.
Neuron-ESB 2.0 is the culmination of a lot of hard work by a lot of smart people. It is an ESB product that is built from the ground up on WCF. The fact that this exists at all is a true testament to the brain trust at Neudesic, and shows just how deeply SOA/ESB thinking runs throughout the entire company. It’s an honor and a pleasure to work alongside these folks.
The best place to go if you want more information is http://NeuronEsb.com. My colleague David Pallman (“father” of Neuron-ESB, ex-Indigo team member, author, and my co-author on a future ESB book) posted a great architectural overview. He also did another blog post that talks about the management experience. These are likely the first two posts of many (now that he has a little time to breathe :).
To ward off the inevitable question about “does this replace BizTalk?”, the answer is “no, not at all, in fact they work really well together”. I intend to elaborate on this point in a future post as I keep encountering (understandable) confusion on this question.
So, David, Marty, Curt, Brandon and everyone else that has been involved throughout Neuron-ESB’s development cycle: congratulations on a job well done!
Technorati Tags: Neuron-ESB,SOA,Neuron,ESB
———————————————————————————
NEUDESIC RELEASES NEURON-ESB 2.0
New version of Enterprise Service Bus software extends the Microsoft .NET Platform
IRVINE, CALIF. – April 29, 2008 – Neudesic, a leading provider of business solutions that leverage the capabilities of the Microsoft product line, announced today the release of version 2.0 of Neuron-ESB. Neuron-ESB is an Enterprise Service Bus that extends the Microsoft Platform by providing real-time messaging, integration and web service management. Neuron-ESB accelerates SOA adoption by helping companies successfully implement real-time integration across their enterprise, allowing timely response to changing events within their business.
Neuron-ESB is built on Microsoft Windows Communication Foundation (WCF) technology to provide real-time, reliable messaging options for companies adopting SOA. Neuron-ESB manages all communication over the bus by sending messages over “Topics” using a publish-subscribe pattern, and supports federated, geographic deployments. Neuron-ESB helps companies administer and automate complex tasks and is proven to significantly reduce the infrastructure, development, training and long-term support costs for businesses developing SOA solutions.
“Neuron-ESB provides the messaging backbone for all of our critical applications,” said Jeffrey Sullivan, Chief Information Officer of ThinkCash. “Neuron-ESB allowed us to leverage our developers much more effectively while providing us the ability to go to market quickly with new solutions. We were able to shift our service development from the architect role to the more ubiquitous developer role while decreasing our deployment time of new services by 50%. We started with just 1 developer who received 4 days of Neuron-ESB training. Within 6 months and no additional training, we had a 15X increase in the number of our internal developers who were able to use Neuron-ESB.”
Neuron-ESB 2.0 delivers a unique set of capabilities that extend and combine key strategic Microsoft technologies such as Microsoft BizTalk Server 2006 R2 & RFID, Microsoft Office SharePoint Server 3.0, Microsoft SQL Server, Microsoft Dynamics, Microsoft Office, .NET 3.0/3.5, Windows Communication Foundation (WCF), Windows Workflow Foundation (WF), WCF Line of Business Adapters and MSMQ. The synergy between Neuron-ESB and these products empowers companies to develop more robust and business-aware applications with far less effort and complexity.
“Neuron-ESB 2.0 represents a significant evolution for the Microsoft Platform while addressing the Enterprise Service Bus needs of every customer running Microsoft Windows. Our technology allows businesses to effectively leverage their Microsoft investments to deliver real-time solutions,” stated Marty Wasznicky, Vice President of Product Development. “Our product provides a new level of flexibility and ease of use that will help companies increase their productivity while reducing their development and operational costs. Moreover we’ve formed a strategic partnership with SOA Software and achieved certification as a Governed Service Platform through the Open Governance Initiative. Our customers can be confident that Neuron-ESB will enhance the fidelity of their Governance initiatives.”
“The Open Governance Initiative is rapidly gaining momentum amongst platform vendors, Governance solution providers, and end-user customers,” said Frank Martinez, Executive Vice President of SOA Software. “The addition of Neuron-ESB, as a Microsoft .NET and WCF based ESB to the list of Governed Service Platforms highlights the importance of this certification for platform vendors.”
by community-syndication | May 2, 2008 | BizTalk Community Blogs via Syndication
Are you one of "those" ASP.NET web developers who care passionately not only about writing "good" code, but about writing "easily" understood and "readable" code? Are you looked upon as perhaps a little bit "obsessive" about your code? Do you understand what "semantic" really means?
If the answer is yes, have you ever looked closely at the HTML markup your ASP.NET code generates? I mean taken a really, REALLY close-up look?
If you have and you're anything like me, it bugs the hell out of you when adding something as simple as <head runat="server"> produces this mess in the <head> of your otherwise beautiful HTML markup.
Of course, there are ways to fix this mess. You could always forget using Master Pages and code each .aspx page by hand or even write your own base-page class like I've seen done. You could even revert to using "static" .html pages with JavaScript and forget about all the great features .NET brings to web development. Or you could just forget about ever creating truly "semantic" HTML markup using ASP.NET!
However, if you Google (or Live Search) long enough, you'll find a few posts about something called Adaptive Control Behavior in the MSDN Library and three very well hidden posts by Anatoly Lubarsky with some great sample code!
http://blogs.x2line.com/al/archive/2007/01/10/2773.aspx
http://blogs.x2line.com/al/archive/2007/01/31/2814.aspx
http://blogs.x2line.com/al/archive/2007/01/31/2816.aspx
These three posts and the sample code you can download here, turn this code …
into this markup …
which is exactly what the <head> element of any respectable HTML markup should look like! And yes, I know it doesn't matter one hoot to the browser (even IE6) which will faithfully render the web page correctly, but IT MATTERS TO ME.
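I can’t reproduce Anatoly’s actual sample code here, but the general shape of the control-adapter technique is roughly this sketch (the class name is my own invention, and the real posts are far more complete):

```csharp
// A minimal sketch of the control-adapter idea: intercept rendering of the
// server-side <head> control and emit a plain <head> element ourselves,
// letting the child controls (title, links, etc.) render normally.
using System.Web.UI;
using System.Web.UI.Adapters;

public class MinimalHeadAdapter : ControlAdapter
{
    protected override void Render(HtmlTextWriter writer)
    {
        // Emit a clean <head> tag instead of the framework-generated one.
        writer.WriteFullBeginTag("head");
        foreach (Control child in Control.Controls)
        {
            child.RenderControl(writer);
        }
        writer.WriteEndTag("head");
    }
}
```

An adapter like this would be mapped to `System.Web.UI.HtmlControls.HtmlHead` via a `.browser` file in App_Browsers; see Anatoly’s posts and the MSDN Adaptive Control Behavior docs for the real, tested version.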
I use "View Source" and Firebug almost every day to look at my own markup as well as the markup of sites whose authors I respect. I want my markup to look every bit as professional as the markup of a professional web "designer" such as Dan Cederholm, John Gruber or Andy Clarke.
Don't you?
Currently listening to: "Caravan of Dreams" by Peter White
by community-syndication | May 2, 2008 | BizTalk Community Blogs via Syndication
Thought I’d post this in case anyone else runs into this problem. I worked through the issue a while ago on one computer with the help of an MSDN post, and while upgrading another machine I ran into the error again today; it took a while to remember the solution.
Here’s the problem. If […]
by community-syndication | May 2, 2008 | BizTalk Community Blogs via Syndication
We spend a bit of our time interacting with customers, partners and other concerned citizens who take the time to write out some feedback. Microsoft is very customer focused and spends time working through these “verbatims” at different levels. The Connected…(read more)
by community-syndication | May 2, 2008 | BizTalk Community Blogs via Syndication
If you are building Windows Workflow Foundation (WF) applications today (and you aren’t building SharePoint workflows), you should be using WorkflowServiceHost for your hosting environment. Period, end of discussion (well, we could have more discussion about it, but it’s a fait accompli at this point).
Especially if you are using WF to implement a service, using .NET 3.5 is a total no-brainer.
I was doing so the other day using a StateMachineWorkflow hosted in IIS, and I was getting an exception. The exception (which I found after I attached the debugger to IIS) was the dreaded (and pretty common) “QueueNotFound for queue X”.
Now, if I were writing the hosting layer myself, I would handle the WorkflowRuntime.WorkflowIdled event and introspect the WorkflowInstance using the WorkflowInstance.GetWorkflowQueueData method to see what queues were available at different points during the workflow, to help figure out what the problem was. The problem was that I expected the queue to be there at that particular point of execution, and I wanted to verify that fact using WorkflowInstance.GetWorkflowQueueData like this:
void WorkflowRuntime_WorkflowIdled(object sender, System.Workflow.Runtime.WorkflowEventArgs e)
{
    ReadOnlyCollection<WorkflowQueueInfo> queues;
    queues = e.WorkflowInstance.GetWorkflowQueueData();
    foreach (WorkflowQueueInfo qi in queues)
    {
        Debug.WriteLine("QueueName : " + qi.QueueName);
        foreach (string actName in qi.SubscribedActivityNames)
        {
            Debug.WriteLine("Activity subscribed: " + actName);
        }
    }
}
This is code I end up writing in just about every workflow application I build, because it is just super useful to know what queues a workflow is listening for at a particular time.
So here was my problem: since I was using WorkflowServiceHost implicitly through an .svc file:
<%@ ServiceHost Service="WorkflowArtifacts.CalcWorkflow" Factory="System.ServiceModel.Activation.WorkflowServiceHostFactory" %>
using WorkflowServiceHostFactory, I had no place to get the WorkflowRuntime from the WorkflowServiceHost. If I were creating the WorkflowServiceHost myself in code, I could use the following code to get the WorkflowRuntime:
// sh is the WorkflowServiceHost
WorkflowRuntimeBehavior wrb = sh.Description.Behaviors.Find<WorkflowRuntimeBehavior>();
wrb.WorkflowRuntime.WorkflowIdled += new EventHandler<System.Workflow.Runtime.WorkflowEventArgs>(WorkflowRuntime_WorkflowIdled);
Using the WorkflowRuntimeBehavior, I can get the WorkflowRuntime, subscribe to the WorkflowIdled event, and thus get the data I wanted for debugging my “QueueNotFound for queue X” exception. But alas, using the .svc file I never get access to the WorkflowServiceHost.
There is, however, a solution (at least one): ServiceHostFactoryBase. One of the cool extensibility mechanisms in WCF when hosting inside IIS/WAS is creating your own ServiceHostFactory. By default, for code-based services, WCF uses a ServiceHostFactory (a class derived from ServiceHostFactoryBase) to create the WCF ServiceHost (which derives from ServiceHostBase). Workflow Services use a WorkflowServiceHostFactory (as you can see from my .svc file) to create the WorkflowServiceHost.
Luckily (and I’m sure intentionally on the framework team’s part), WorkflowServiceHostFactory isn’t sealed. This means I can create my own WorkflowServiceHostFactory class, change the Factory attribute in my .svc file to point to my factory, and then insert the code I want to work with the WorkflowRuntime object.
public class MyWorkflowServiceHostFactory : WorkflowServiceHostFactory
{
    public override System.ServiceModel.ServiceHostBase CreateServiceHost(string constructorString, Uri[] baseAddresses)
    {
        ServiceHostBase sh = base.CreateServiceHost(constructorString, baseAddresses);
        // sh is the WorkflowServiceHost
        WorkflowRuntimeBehavior wrb = sh.Description.Behaviors.Find<WorkflowRuntimeBehavior>();
        wrb.WorkflowRuntime.WorkflowIdled += new EventHandler<System.Workflow.Runtime.WorkflowEventArgs>(WorkflowRuntime_WorkflowIdled);
        return sh;
    }

    void WorkflowRuntime_WorkflowIdled(object sender, System.Workflow.Runtime.WorkflowEventArgs e)
    {
        ReadOnlyCollection<WorkflowQueueInfo> queues;
        queues = e.WorkflowInstance.GetWorkflowQueueData();
        foreach (WorkflowQueueInfo qi in queues)
        {
            Debug.WriteLine("QueueName : " + qi.QueueName);
            foreach (string actName in qi.SubscribedActivityNames)
            {
                Debug.WriteLine("Activity subscribed: " + actName);
            }
        }
    }
}
And my svc file:
<%@ ServiceHost Service="WorkflowArtifacts.AccumWorkflow" Factory="MyWorkflowServiceHostFactory" %>
And magically my service still works, and I am able to handle the WorkflowRuntime.WorkflowIdled event, or any other WorkflowRuntime event I want.

Check out my BizTalk R2 Training.
by community-syndication | May 2, 2008 | BizTalk Community Blogs via Syndication
Introduction
Recently, I was experimenting with Team Foundation Server 2008, setting it up with SSL and running it within a test domain. I couldn’t use the host installation for Team Foundation Server, as the host runs Windows Server 2003 R2 x64 edition, and TFS doesn’t support this setup in a single-server scenario (see the TFS installation guide, under ‘Overview of Team Foundation Architecture’, ’64-bit Support in Team Foundation’). So I installed Virtual Server 2005 R2 SP1, installed TFS on top, configured SSL and voila, a working TFS setup.
When I shut down the host computer, I want the TFS virtual machine to save its state and come back up again when the host is turned on. This can be done in Virtual Server by using an alternative user account to run the actual Virtual Machine instance. Setting this up in a least-privilege way proved not to be obvious from the documentation, so this blog entry documents what I did for posterity (and myself ;-)).
Outline
Here’s what needs to be done in order to circumvent obscure error messages:
1. We need a new user group so the account we’ll use doesn’t belong to default domain users and inherits no permissions.
2. We need a new user to run the Virtual Machine instance, the user should belong to this group only.
3. The user needs to be given ‘Log on locally’ rights.
4. Permissions need to be set on the folder containing network configuration information for Virtual Server.
5. Permissions need to be set on the folder containing the Virtual Machine and the actual files (*.vhd, *.vmc) making up the Virtual Machine.
6. The Virtual Machine needs to be configured to use the new user.
Instructions
Let’s configure the necessary elements.
1. Create a new group
Create a new group within the Active Directory Users and Computers MMC snapin (found under Administration Tools):
2. Create a new user
Create the user which is to run the specific Virtual Machine (done from the same MMC snap-in), add it to the ’empty’ group, set the ’empty’ group as its Primary Group and remove the ‘Domain Users’ group from the list. After this, your user overview should resemble this image:
3. Assign ‘Log on locally’ rights to the user
This step is critical in getting the Virtual Machine running under the new user context. Steps to achieve the appropriate right setting are described here. If the user doesn’t receive the ‘Log on locally’ right, Virtual Server will display an error: ‘The account name and password could not be set. The virtual machine account could not be set. The account has not been granted the requested logon type.‘. Make sure the security policy is updated before using the account.
4. Set permissions for the used virtual network interface
Now that we have the user and its group configured, let’s set the appropriate permissions for the user to make use of the configured network. Mind you, these instructions will only allow the user to use the network it’s given access to in these instructions; the ‘local network only’ et al. will not work, as the user has no rights on the files used for those configurations.
The virtual network configuration files for Virtual Server are stored in %SystemDrive%\Documents and Settings\All Users\Application Data\Microsoft\Virtual Server\Virtual Networks. The user needs permissions as specified in order to use the network. If permissions are set incorrectly, the Virtual Machine will not have network access.
5. Set permissions on Virtual Machine folders and files
In order to start up the Virtual Machine, save state, etc, the new user needs access rights on the folder storing the actual files making up the Virtual Machine as well as specific rights on the Virtual Machine files. The folder structure containing my Virtual Machine files is:
%SystemDrive%\vms\<Virtual Machine>
First, let’s set the appropriate rights on the folder hosting all Virtual Machines:
Now, let’s set the permissions for the appropriate Virtual Machine (‘tfs‘) folder:
Lastly, set up permissions for the Virtual Machine files (my TFS has 3: tfs.vmc, tfs.vhd and sql.vhd):
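If you prefer to script these grants rather than click through the Explorer security dialogs, the same idea can be sketched in C# with System.Security.AccessControl (the account name and path below are examples of mine, not the actual values from this setup):

```csharp
// Sketch: grant the dedicated VM account Modify rights on the VM folder,
// with inheritance so the .vmc/.vhd files inside pick up the same rights.
// "TESTDOMAIN\vm_tfs" and C:\vms\tfs are placeholder names.
using System.IO;
using System.Security.AccessControl;

class GrantVmRights
{
    static void Main()
    {
        string vmFolder = @"C:\vms\tfs";
        string account = @"TESTDOMAIN\vm_tfs";

        DirectorySecurity security = Directory.GetAccessControl(vmFolder);
        security.AddAccessRule(new FileSystemAccessRule(
            account,
            FileSystemRights.Modify,
            InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
            PropagationFlags.None,
            AccessControlType.Allow));
        Directory.SetAccessControl(vmFolder, security);
    }
}
```

This mirrors steps 5 above; the exact rights you need may differ per file, so treat the GUI screenshots as authoritative.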
6. Configure the Virtual Machine
All permissions are set, we’re ready to configure Virtual Server to run the Virtual Machine under the new user context:
Wrap up
That’s it! We’ve configured the Virtual Machine to run under a user context which has the least amount of privileges it needs to function correctly. The Virtual Machine will save its state when the host is shut down and will automatically turn back on when the host comes back online.
HTH
by community-syndication | May 1, 2008 | BizTalk Community Blogs via Syndication
WCF: values disappeared in response: derived classes and serialization/deserialization order error
This is the second article about errors in getting data from web services. The first was “WCF: Deserialization error, the response elements disappeared” (http://geekswithblogs.net/LeonidGaneline/archive/2008/04/01/wcf-deserialization-error-the-response-elements-disappeared.aspx).
That time, whole elements disappeared from the response messages without any errors. Now only the values of the elements disappear.
We got a very strange error: our client code for one of the third-party web services could not get the values from the response message.
Investigation gave us this picture. The response class has a base class, and the disappearing values come exactly from the properties of that base class. See the client proxy code generated by Visual Studio (Pict. 1); nothing wrong with it.
If we look at the WSDL that was the source of this proxy code (Pict. 2), there is nothing wrong there either.
The strangest thing is that we have the right values in the response message (Pict. 3). The errorCode and errorMessage elements have values, but then we lose them. We can see them in the code (Pict. 4); the output is in Pict. 5.
How? Why?
The base class properties are errorCode and errorMessage; the derived class property is crn.
The question is: in what order do these two kinds of properties serialize into the elements of the XML message, and then deserialize back into the class properties?
If the service and the client serialize and deserialize in the same order, everything is OK. But the service serializes the derived class properties first and the base class properties last (see Pict. 3.1), while our client expects the message with a different order of elements (see Pict. 3.2).
And this is the source of the problem!
In the “Data Member Order” article in the MSDN Library (http://msdn.microsoft.com/en-us/library/ms729813.aspx) we read:
“…The basic rules for data ordering include: If a data contract type is a part of an inheritance hierarchy, data members of its base types are always first in the order…”
And there is no way to change this behaviour.
That’s exactly how my client proxy works. It expects the base class properties first; when it doesn’t find them, and because those properties have minOccurs=”0″ and nillable=”true” attributes, the proxy just decides the XML message doesn’t contain them at all and silently moves on to the first derived class property.
Unfortunately, the third-party service (a Java Axis2 service) uses the opposite order: data members of its base types are always last in the serialized XML messages.
Jeff W. Barnes (http://jeffbarnes.net/portal/blogs/jeff_barnes/archive/2007/05/08/wcf-serialization-order-in-data-contracts.aspx) mentions that there is a solution, but doesn’t give it.
Is it possible to change the proxy code on my side to follow the service’s order? I don’t know of such an option. It seems the WCF classes where the deserialization occurs cannot be tuned to change this order. Look one more time at the WSDL code (Pict. 2): there is nothing about this order, only the complexType AddNewCreditCardResponse with the extension base=”ax21:ResponseData”.
It seems the only way to fix the problem is to change the auto-generated code for the response message classes and force the right order (Pict. 6). From the point of view of WCF it is wrong code, but what else could we do in this situation?
After these changes we got the right result (see Pict. 7).
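The order sensitivity can be reproduced outside WCF with the plain XmlSerializer. This is my own minimal repro, with types simplified from the Pict. 1 proxy classes and a hand-written XML literal; the names and values are illustrative, not the actual service data:

```csharp
// Minimal repro: classes carry explicit Order attributes (as in the
// generated proxy), but the XML arrives with the derived-class element
// (crn) first, the way the Axis2 service sends it (Pict. 3.1).
using System;
using System.IO;
using System.Xml.Serialization;

[XmlInclude(typeof(AddNewCreditCardResponse))]
public class ResponseData
{
    [XmlElement(IsNullable = true, Order = 0)]
    public string errorCode;

    [XmlElement(IsNullable = true, Order = 1)]
    public string errorMessage;
}

public class AddNewCreditCardResponse : ResponseData
{
    [XmlElement(IsNullable = true, Order = 0)]
    public string crn;
}

class Repro
{
    static void Main()
    {
        string serviceOrderXml =
            "<AddNewCreditCardResponse>" +
            "<crn />" +
            "<errorCode>TXN056</errorCode>" +
            "<errorMessage>some error</errorMessage>" +
            "</AddNewCreditCardResponse>";

        var serializer = new XmlSerializer(typeof(AddNewCreditCardResponse));
        var response = (AddNewCreditCardResponse)serializer.Deserialize(
            new StringReader(serviceOrderXml));

        // The deserializer expects errorCode/errorMessage *before* crn,
        // so the base-class values are skipped silently.
        Console.WriteLine("errorCode: [{0}]", response.errorCode ?? "null");
        Console.WriteLine("errorMessage: [{0}]", response.errorMessage ?? "null");
    }
}
```

In my understanding, this reproduces the null results shown in Pict. 5; feeding the same serializer XML in base-first order fills all three properties.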
=======================================================================
[Pict.1: autogenerated proxy code]
[System.SerializableAttribute()]
[System.Xml.Serialization.XmlTypeAttribute(Namespace="http://data.transaction.com/xsd")]
public partial class AddNewCreditCardResponse : ResponseData {
    private string crnField;

    [System.Xml.Serialization.XmlElementAttribute(IsNullable=true, Order=0)]
    public string crn {
        get { return this.crnField; }
        set { this.crnField = value; }
    }
}
...
[System.Xml.Serialization.XmlIncludeAttribute(typeof(AddNewCreditCardResponse))]
[System.Xml.Serialization.XmlTypeAttribute(Namespace="http://data.transaction.com/xsd")]
public partial class ResponseData {
    private string errorCodeField;

    [System.Xml.Serialization.XmlElementAttribute(IsNullable=true, Order=0)]
    public string errorCode {
        get { return this.errorCodeField; }
        set { this.errorCodeField = value; }
    }

    private string errorMessageField;

    [System.Xml.Serialization.XmlElementAttribute(IsNullable=true, Order=1)]
    public string errorMessage {
        get { return this.errorMessageField; }
        set { this.errorMessageField = value; }
    }
}
[Pict.2: wsdl code]
<xs:complexType name="AddNewCreditCardResponse">
  <xs:complexContent mixed="false">
    <xs:extension base="ax21:ResponseData">
      <xs:sequence>
        <xs:element minOccurs="0" name="crn" nillable="true" type="xs:string" />
      </xs:sequence>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
...
<xs:complexType name="ResponseData">
  <xs:sequence>
    <xs:element minOccurs="0" name="errorCode" nillable="true" type="xs:string" />
    <xs:element minOccurs="0" name="errorMessage" nillable="true" type="xs:string" />
  </xs:sequence>
</xs:complexType>
...
[Pict.3.1: original response message]
…
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <ns:addNewCreditCardResponse xmlns:ns="http://service.transaction.com">
      <ns:return type="com.transaction.data.AddNewCreditCardResponseData" xmlns:ax21="http://data.transaction.com/xsd">
        <ax21:crn/>
        <ax21:errorCode>TXN056</ax21:errorCode>
        <ax21:errorMessage>Process auth with create profile error</ax21:errorMessage>
      </ns:return>
    </ns:addNewCreditCardResponse>
  </soapenv:Body>
</soapenv:Envelope>
[Pict.3.2: response sample with “WCF” order]
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <ns:addNewCreditCardResponse xmlns:ns="http://service.transaction.com">
      <ns:return type="com.transaction.data.AddNewCreditCardResponse" xmlns:ax21="http://data.transaction.com/xsd">
        <ax21:errorCode>TXN0544</ax21:errorCode>
        <ax21:errorMessage>Process authentification with profile error</ax21:errorMessage>
        <ax21:crn/>
      </ns:return>
    </ns:addNewCreditCardResponse>
  </soapenv:Body>
</soapenv:Envelope>
[Pict.4: test code]
…
Transaction_Ref.TransactionSvcClient client = new TestTxsServiceConsoleApplication.Transaction_Ref.TransactionSvcClient("TransactionSvcSOAP11port_http");
Transaction_Ref.addNewCreditCardResponse addNewCreditCardResponse = client.addNewCreditCard(…);
Console.WriteLine("crn: [{0}]", (addNewCreditCardResponse.crn == null ? "null" : addNewCreditCardResponse.crn));
Console.WriteLine("errorCode: [{0}]", (addNewCreditCardResponse.errorCode == null ? "null" : addNewCreditCardResponse.errorCode));
Console.WriteLine("errorMessage: [{0}]", (addNewCreditCardResponse.errorMessage == null ? "null" : addNewCreditCardResponse.errorMessage));
…
[Pict.5: output with original proxy code]
crn: []
errorCode: [null]
errorMessage: [null]
[Pict.6: changed proxy code]
[System.SerializableAttribute()]
[System.Xml.Serialization.XmlTypeAttribute(Namespace="http://data.transaction.com/xsd")]
public partial class AddNewCreditCardResponse {
    private string crnField;

    [System.Xml.Serialization.XmlElementAttribute(IsNullable=true, Order=0)]
    public string crn {
        get { return this.crnField; }
        set { this.crnField = value; }
    }

    private string errorCodeField;

    [System.Xml.Serialization.XmlElementAttribute(IsNullable=true, Order=1)]
    public string errorCode {
        get { return this.errorCodeField; }
        set { this.errorCodeField = value; }
    }

    private string errorMessageField;

    [System.Xml.Serialization.XmlElementAttribute(IsNullable=true, Order=2)]
    public string errorMessage {
        get { return this.errorMessageField; }
        set { this.errorMessageField = value; }
    }
    ...
[Pict.7: output with changed proxy code]
crn: []
errorCode: [TXN0544]
errorMessage: [Process authentification with profile error]
=========================================
Please give me feedback!
by community-syndication | May 1, 2008 | BizTalk Community Blogs via Syndication
I have two things to be excited about today. First, we’ve finally had some sunny spring days in Seattle (which means I had plenty of dry time outside this week). Second, there’s more great news about our B2B technology. Today, we’re announcing another important update for our customers: we’ve come to an agreement with our partner Covast to acquire advanced B2B capabilities. This will be incorporated into a new feature pack which will be available as part of Software Assurance benefits to customers who license BizTalk Server (for context on BizTalk Server 2006 R3, see my previous post).
Why is this important? It builds on the most recent B2B improvements in BizTalk Server. Some highlights include:
· New standards for specific retail segments such as warehousing, grocery, energy, automotive and air freight
· B2B metadata management for EDI ’super’ interchanges, deeper integration with SQL Server repository/Visual Studio (EDI Explorer) and new reporting capabilities
· Advanced B2B transports, including new file adapters and transports and VAN connectivity
· B2B operations monitor that enables role-based viewing, end-to-end tracking/tracing and automatic archiving
For our ISV partners, this increases the breadth of B2B solutions they can build on top of BizTalk Server. They can now take advantage of industry-specific protocols in their own vertical solutions as well as advanced B2B scenarios for their customers. Over the past year, this has been a specific customer request, and it adds to the growing set of capabilities BizTalk Server brings to businesses. It will make it easier to address the individual needs of a broad set of customers. As an example, auto component makers need information about parts, sales, customers, employees and branch offices that they can store, track and use to execute transactions; meanwhile, the grocery store down the street needs unique templates, workflows, rules, etc. that save development time and work automatically. These new capabilities will help both of them.
We have had a lot of great news about the BizTalk family recently (BizTalk RFID Mobile, BizTalk Server 2006 R3, etc.). I can’t promise that we will have big updates like this on a weekly basis but it gives you a sense for the investments we are making to continue to improve technologies on which customers heavily rely.
by community-syndication | May 1, 2008 | BizTalk Community Blogs via Syndication
Check out this interesting article from IBM about the principles of multitenant applications….(read more)