by community-syndication | Oct 2, 2008 | BizTalk Community Blogs via Syndication
Here is the latest in my link-listing series. Also check out my ASP.NET Tips, Tricks and Tutorials page and Silverlight Tutorials page for links to popular articles I’ve done myself in the past.
ASP.NET
Using ASP.NET WebForms, MVC and Dynamic Data in a Single Application: Scott Hanselman has a nice post that demonstrates how you can have a single ASP.NET application that uses ASP.NET WebForms, MVC, WebServices and Dynamic Data. You have the flexibility to mix and match them however you want, which allows you to always use the right tool depending on the specific job.
ASP.NET MVC
ASP.NET Dynamic Data
Hope this helps,
Scott
by community-syndication | Oct 2, 2008 | BizTalk Community Blogs via Syndication
“Within a month, Microsoft will unveil what Ballmer called ‘Windows Cloud.’ The operating system, which will likely have a different name, is intended for developers writing cloud-computing applications, said Ballmer, who spoke to an auditorium of IT managers at a Microsoft-sponsored conference in London.”
by community-syndication | Oct 1, 2008 | BizTalk Community Blogs via Syndication
As we head towards the PDC later this month, Microsoft today pre-announced some of the things you can expect to see there.
To meet the evolving needs of service-oriented applications, Microsoft is extending the capabilities of Windows Server by adding a set of capabilities, “Dublin”, aimed at making it easier to deploy, manage and monitor WF/WCF applications. For developers creating WF-based solutions, this is great news, because it means you will get an enterprise-grade runtime environment to host your WCF/WF applications. Prior to this, the only WF host from Microsoft was MOSS; otherwise you would have had to write your own host, which is a non-trivial task. Now, Microsoft has solved all those hard problems for you. If you’re a BizTalk developer, rest assured that you will get a host too that will run on this platform, and that all your investments are protected: BizTalk Server 2009 and the roadmap were announced recently, and I blogged about it here.
I’m also excited about the enhancements to WF: workflows become declarative, and are XAML-based. You get a new flowchart workflow style. This is all building towards the future, and gets even more interesting with the Oslo modeling platform. I’ll have a lot more to say about that later, starting after PDC.
I’d also like to clarify something that may be, or could become, a source of confusion for you. There has been a subtle morphing recently of what the code name “Oslo” means. When there was just the vision, “Oslo” was used to refer to the entire spectrum of technologies that needed to be built to support the vision. Now that we are further along in the lifecycle, and bits are becoming real, those bits are naturally migrating towards what will ultimately be their ship vehicles. You can see some of that now, with the enhancements to WF/WCF that will be in .NET 4.0, and the process server capabilities that will be in the OS: these are things that used to be part of what “Oslo” was. So to be clear, when we say “Oslo” today, we are now referring ONLY to Microsoft’s modeling platform. I like this shift, and it makes a lot of sense, although I find myself saying “Oslo and related technologies” a lot now when I refer to the whole vision.
Some early bits will be made available to PDC attendees, with betas to follow some time in the future. You can get an overview here. Steve Martin, Senior Director of Product Management in Microsoft’s Connected Systems Division (CSD), wrote about it this morning here.
We’re in for an exciting ride, folks; this is just the start…
Technorati Tags: SOA,ESB,Oslo,BizTalk,Dublin
by community-syndication | Oct 1, 2008 | BizTalk Community Blogs via Syndication
Microsoft’s Anders Hejlsberg reveals the history behind one of the most common programming languages, C#, and what the future holds for C# 4.0: http://www.computerworld.com.au/index.php/id;1149786074;pp;1
by community-syndication | Oct 1, 2008 | BizTalk Community Blogs via Syndication
Post on Slashdot about Microsoft’s plan to release an OS intended for developers writing cloud-computing applications. Read more here: http://tech.slashdot.org/tech/08/10/01/1816208.shtml
by community-syndication | Oct 1, 2008 | BizTalk Community Blogs via Syndication
For me, this year’s PDC in LA will totally be about the “Cloud”. Topics that certainly interest me are Mesh and SQL Server Data Services (SSDS), but I’m sure there’s more to come, about things like Oslo and other European cities, perhaps BizTalk Services, and a curiously colored and mysterious canine, RedDog.
There are two things that interest me, personally, in these kinds of cloud paradigms. First, that there are new application models, new architectures, new colors in the palette, new tools (modeling is one of them). Just look at all the technologies I mentioned. Most of them are usable to develop enterprise applications; they are not customer-facing new things (Mesh is the partial exception here). The second thing that interests me is precisely the engineering challenge: the new problems we will have to solve in a world where almost nothing can be taken for granted. (Can we communicate at all, if everything is extremely loosely coupled?)
Truth is, however, that I don’t think this will be an easy or widespread shift (regardless of what Nicholas Carr thinks). If you talk to most people working in IT today about “moving to the cloud”, you’ll hear jokes about “fog”, and (legitimate) questions about data ownership, security, trust, cost, SLAs and QoS, etc. These issues, or at least enough of them, will have to be tackled.
Data and business logic have been kept near (“it’s mine, all mine!”) almost since the first days of IT, after all.
So if you are in Portugal or nearby and want a partner company to explore some new ground using these technologies (or just have interesting discussions), get in touch. 🙂

by community-syndication | Oct 1, 2008 | BizTalk Community Blogs via Syndication
A question which has popped up quite a few times over the last couple of weeks – “For an outbound contract, I see RFC, TRFC, IDOC and BAPI nodes in the UI. However, for an inbound contract, the BAPI node is missing. Why?”
Firstly, I’m going to explain what each of the four nodes you see really translates to within the adapter, and how they are “transported” to SAP.
The adapter communicates with SAP using the RFC protocol (it uses the RFC SDK for this purpose, which is written in C). RFC stands for “Remote Function Call”: a mechanism for calling Function Modules (functions) on SAP from external systems (you can also have functions defined on SAP which are not enabled for remote access). Using the RFC SDK, an external client can either invoke an RFC on SAP (outbound calls) or listen for incoming function calls (inbound).
The RFC and TRFC nodes you see in the metadata UI are the only *pure* manifestations of this in the adapter.
Under the RFC node, we display all the function modules which have been configured on SAP to allow them to be called from an external client. When you invoke an operation under this node, the adapter just turns around and invokes that same operation on SAP.
The same functions also appear under the TRFC node. TRFC stands for “Transactional Remote Function Call”, though it is not really transactional. When you invoke an operation which appeared under the TRFC node (the Action is different for the same function under the RFC and TRFC nodes), the adapter, when invoking the function on SAP, associates an identifier with the call. The SAP server makes a note of this identifier, and maps it to the execution status of that function. If, say, the SAP server went down before completion of the function execution, the client can, at a later point, re-execute that call with the same identifier. SAP will realize that this call never completed, and will attempt to re-execute the function. On the other hand, if SAP had successfully completed the call earlier, but the client went down before it could process the response (or a network failure occurred before it could receive the response), the client can, at a later point, re-execute that call with the same identifier. SAP will know that an earlier call with the same identifier already completed successfully, and will not execute it again, but will just send a response back to the client. The client, when it gets back a response (either on the first call or on a later retry), should then clean up its own state with respect to the identifier, and also instruct SAP to do the same (via an RFC SDK API call, which the adapter exposes as an operation named “RfcConfirmTransID”). Once this cleanup/confirmation happens, the identifier is “forgotten”, and the next time it is seen, it is assumed to be a brand new identifier / operation call. NOTE that in a TRFC call, SAP does not return any output values – hence the difference in the operation signatures when you compare the same RFC under the RFC node and the TRFC node.
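The TID handshake described above is essentially an idempotent-receiver pattern. As a rough illustration only (this is not the adapter’s actual code, and all class and method names here are invented), the server side of the protocol could be sketched like this:

```python
# Hypothetical sketch of TRFC-style retry semantics using a transaction
# identifier (TID). Illustrative only - not the SAP adapter's real code.

class TrfcServer:
    def __init__(self):
        # Maps TID -> recorded outcome of a completed call.
        self._completed = {}

    def invoke(self, tid, func):
        # If this TID already completed, do NOT re-execute the function;
        # just return the recorded outcome (real TRFC returns no output values).
        if tid in self._completed:
            return self._completed[tid]
        result = func()                # execute the function exactly once
        self._completed[tid] = result  # remember the outcome for retries
        return result

    def confirm_tid(self, tid):
        # The cleanup step the adapter exposes as "RfcConfirmTransID":
        # forget the TID, so a later call with it counts as brand new.
        self._completed.pop(tid, None)
```

A retried call with the same TID is absorbed, and only after confirmation does the identifier become usable again; that is the whole point of the confirm step.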
What about the operations under the IDOC node? Sending IDOCs to SAP requires the client (the adapter) to execute special RFCs named IDOC_INBOUND_ASYNCHRONOUS / INBOUND_IDOC_PROCESS (depending on the version) on SAP. Receiving IDOCs from SAP requires the external application (now acting as the server) to be able to handle incoming calls to the same two functions. Operations under the IDOC node are hence “dummy” operations which the adapter exposes – the adapter gets metadata for all available IDOCs on the SAP system, and exposes “Send” and “Receive” calls with different complex parameters based on the IDOC you want to work with. At runtime, for outbound calls, the adapter takes the individual pieces of data and converts them to the format which the above-mentioned two RFCs require. For inbound calls, it splits/parses the data which it received from SAP as incoming parameter values to those two RFCs, looks at the data to figure out which IDOC is being transmitted, gets metadata for that IDOC, and then re-formats the data to fit that metadata.
Note – for inbound IDOC calls, the adapter exposes a property named ReceiveIdocFormat (which is an enumeration of type IdocReceiveFormat). One of the values in this enumeration is “Rfc” – if you choose this, then, for incoming IDOCs, the adapter won’t peek at the data to figure out which IDOC it is and re-format it to fit the “Receive” operation – instead, it will just expose it to the ServiceHost as an RFC call, i.e. as a call to IDOC_INBOUND_ASYNCHRONOUS / INBOUND_IDOC_PROCESS. Well, actually, it will expose it as a TRFC call (i.e. the WCF message will have the TRFC action), since when SAP sends IDOCs to an external application, it usually does so using transactional semantics – at least SAP’s mechanism of transactions. (Note – the default value of the property/enumeration is “Typed”.)
Lastly, BAPIs. SAP allows you to create Business Objects, and with each object, associate methods, events, attributes, etc. – its version of OOP. In the current SAP architecture, methods on business objects are just RFCs. Hence, at design time, the adapter looks up the Business Object Repository (BOR) and determines the objects present on SAP. For each object, it figures out what methods were defined for it, and what the actual implementation is – that is, the actual RFC to which it maps. The adapter then shows the friendly names of the operations in the UI (since it has that information from looking up the BOR), but in the Action, it actually uses the RFC name – since at runtime, what it really needs to do is just execute the corresponding RFC.
And now, coming to the question which this post was meant to answer – why is there no BAPI node in the UI for an inbound contract? When the adapter receives an incoming call from SAP, all it has is the function name – since after all, as you can see above, everything just involves execution of an RFC – for both outbound and inbound calls. If SAP sent along a “transactional” identifier with the function invocation, the adapter formats the incoming message (which it gives to the WCF service / BizTalk Receive Location) as a TRFC call. If the function which SAP invoked was named IDOC_INBOUND_ASYNCHRONOUS / INBOUND_IDOC_PROCESS, the adapter recognizes these functions as “special”, and peeks at the data to determine the IDOC being transmitted, and formats the WCF message as a call to “Receive” with the appropriate action (which contains the IDOC type, among other things). Else, the incoming WCF message is now just formatted as a call to an RFC (with the Action containing the RFC name).
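The inbound decision tree just described (special IDOC carrier functions first, then the presence of a transactional identifier, then plain RFC) can be sketched roughly as follows. This is an illustration only, with invented function and label names, not the adapter’s real API:

```python
# Rough sketch of how an incoming call from SAP might be classified.
# All names here are illustrative, not the adapter's actual code.

# The two "special" RFCs that carry IDOCs (version-dependent).
IDOC_FUNCTIONS = {"IDOC_INBOUND_ASYNCHRONOUS", "INBOUND_IDOC_PROCESS"}

def classify_inbound(function_name, tid=None):
    """Return the kind of WCF message the adapter would surface."""
    if function_name in IDOC_FUNCTIONS:
        # Special IDOC carrier function: the adapter peeks at the data,
        # determines the IDOC type, and surfaces a typed "Receive" call
        # (unless ReceiveIdocFormat is set to Rfc).
        return "IDOC_RECEIVE"
    if tid is not None:
        # SAP sent a transactional identifier along with the invocation:
        # surface the message as a TRFC call.
        return "TRFC"
    # Otherwise it is just a plain function invocation: an ordinary RFC call.
    return "RFC"
```

Note that in this sketch, as in the text, the function name alone drives everything: there is no Business Object information in the incoming call to dispatch on.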
The adapter can’t format the WCF message using an Action corresponding to a specific Business Object (i.e., corresponding to an operation that would live under the BAPI node), since, strictly speaking, there is no mapping from the RFC name to the Business Object type which implements it. The BAPI node in the UI for outbound contracts was really more of a convenience mechanism, so that a user can navigate the business object hierarchy to find the function of interest. All functions under the BAPI node also appear under the RFC and TRFC nodes. For inbound calls, if you’re interested in listening for an incoming function like BAPI_SALESORDER_CREATE (which is most probably a function with the friendly name “Create” defined on the “SalesOrder” business object), just search for the RFC named BAPI_SALESORDER_CREATE under the RFC node (or under the TRFC node if you know that SAP is going to execute this “transactionally”). NOTE that in the most common case, only the “special” IDOC RFCs are invoked transactionally by SAP on an external application, so for all other incoming RFC calls, you should just add the operation under the RFC node to your Service Contract.
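In other words, the BAPI node is a one-way convenience lookup: friendly object/method names resolve forward to RFC names, but an incoming RFC name cannot, in general, be resolved back to the object that owns it. A toy illustration (the dictionary contents and function names are invented for this example):

```python
# Hypothetical BOR-style lookup. Business Object methods resolve to RFC
# names, but the reverse direction is not reliable in general, which is
# why the inbound contract UI has no BAPI node.

BOR = {
    ("SalesOrder", "Create"): "BAPI_SALESORDER_CREATE",
}

def rfc_for_method(obj, method):
    # Outbound/design time: friendly name -> RFC name (used in the Action).
    return BOR[(obj, method)]

def methods_for_rfc(rfc_name):
    # Inbound direction: a brute-force scan only works here because we
    # own this toy dictionary; the real BOR offers no such guarantee.
    return [key for key, rfc in BOR.items() if rfc == rfc_name]
```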
by community-syndication | Oct 1, 2008 | BizTalk Community Blogs via Syndication
I’ve been re-awarded my BizTalk MVP – so a big thanks for allowing me to be part of the program for another year (at least 🙂)
A focus of mine is the community – sharing and bettering information sources around the technologies we work and play with. So thanks guys, hope you’re getting value out of my efforts, and thank you for being part of our growing community.
This year should be a fantastic year in the SOA/ESB/BizTalk/Oslo/WCF/WF/MOSS/BDC/RFID space (did I leave any off?), as we’re going to see several of these technologies play beautifully together.
(We saw this in the last .NET 3.5 Framework – with WCF/WF Services… stay tuned for one of my favourite pieces – Windows Workflow.)
So for me there are lots of things to focus on, but one main area is doing more information integration – MOSS/SharePoint with BizTalk/InfoPath/RFID… and of course workflows…
🙂
Stay tuned……
Thank you linesmen and thank you ball boys for your hard efforts and major participation!
Life is short!
Mick.
by community-syndication | Oct 1, 2008 | BizTalk Community Blogs via Syndication
I mostly stay clear of platform-choice debates, rarely wading into “Open Source vs Microsoft” arguments, but this one is too hilarious to miss.
There’s a (great) platform game for the Xbox 360 called “Braid”, which is the only Xbox Live Arcade game in the Metacritic Top 10. The game was created by a single developer, Jonathan Blow, who recently posted on his blog some technical questions related to problems he was having with the Linux port of the game. Amidst several problems and the inability to do things with the quality he wants, he eventually drops the idea of a Linux version altogether.
It’s an interesting and hilarious discussion (at least the first half of it) between an obviously very frustrated game developer and people telling him how wrong he is.
Two samples:
«What is it that you find good about the tools? It appears to me that they are about 12 years behind what I can use on Windows.»
«My posting here was not even about Braid. I may have Braid ported to Linux, but I will pay someone else to do it so that I can spend my effort working on my next game. This was about adopting Linux as my primary development platform for all future projects. I wanted to do this because I find Vista to be frankly sickening. However, as bad as it is, Vista is still my best option. I can’t get work done efficiently enough on Linux.»
Read it here.

by community-syndication | Oct 1, 2008 | BizTalk Community Blogs via Syndication
This announcement came sooner than I expected – I’d assumed PDC was going to be the first time Dublin was mentioned in public! The most detailed information before the PDC can be found here.
A number of internal people in the field (myself included) have been working with the product team for a number of years now to help shape some of these new technologies and ensure they will address real-world customer scenarios based on our customer experience. It’s great to be able to start discussing what’s coming down the line, although the real detail won’t be available until the PDC.
So, Dublin! Windows Server is our application server today, and we’re now expanding its capabilities to deploy and manage .NET-based applications (using WCF and WF) – no more rolling your own!
If you appreciate the way BizTalk Server provides enterprise capabilities to host your Integration solutions today, you’ll see a lot of similarities in how these hosting capabilities are being introduced in “Dublin” for WCF and WF. For those who might have shied away from WF because of having to roll their own infrastructure, this is great news!
Note that in the press-releases that BizTalk is formally being referred to as an integration server, e.g:
Q: Will “Dublin” work with BizTalk Server’s enterprise connectivity services?
A: Yes. The integration server and application server workloads are distinct but complementary; customers want to be able to deploy them separately as needed to support their distinct requirements. For example, customers that don’t need the rich LOB or B2B connectivity provided by an integration server will deploy the Windows Server application server to host and manage middle-tier applications. Likewise, customers that need to connect heterogeneous systems across an enterprise, but don’t need to develop and run custom application logic, will deploy BizTalk Server. When customers need both capabilities, “Dublin” and BizTalk Server will work together nicely.
BizTalk is by no means “dead”; in fact, Microsoft recently committed to future versions, including BizTalk Server 2009, for the integration server workload. BizTalk = Integration, Dublin = Application.
So if you want to expose [WF] workflows via [WCF] services while ensuring performance and scalability (up to enterprise scale), you can now do this without having to write the code required to host these apps on Windows Server. Ensuring performance and scale of WCF services and WF is hard to do today; hence it’s not done very often, at least in my experience, and there is sometimes a tendency to twist BizTalk into doing something it wasn’t necessarily designed to do, which causes problems of its own (coupling web sites/UIs directly to BizTalk for synchronous processing springs to mind).
We don’t want customers in this situation to be forced into writing huge amounts of hard plumbing code; we need a server product to do this for you, which is where Dublin comes in. Note some of the server features announced, which will be familiar to BizTalk developers (content-based routing, compensation, etc.).
If, however, you need the extensive adapter support, B2B, EDI, RFID, BAM, or BizTalk Mapper style features, then you’ll still want to use BizTalk. Both products will work together seamlessly through the WCF communication options, so you can combine them as appropriate.
Remember, though, that a number of BizTalk adapters have been re-written as WCF bindings and are available through the BizTalk Adapter Pack, which offers some key LOB adapters. A new SQL adapter is in the works, along with new Oracle adapters, which you can find information on here. As these new adapters are exposed as WCF bindings, you can leverage them with WCF and Dublin.
Dublin will also be the first and best consumer of the Oslo modeling platform; there’ll be more detail on this at the PDC, but trust me – this is going to blow your minds!
WCF and WF get a big makeover as already announced: we get new workflow types in WF and an extension library of Activities out of the box. If you want to call a SQL stored procedure, why write half a page of code when you can just configure a database activity? Combine that with the 10x improvement in performance and things are looking good! Imagine a world where a typical software solution can be implemented (modeled) exclusively through a workflow and out-of-the-box activities. 😉
Notice also the subtle new feature in WF: “persistence control”. Low-latency scenarios with BizTalk are achievable but have to be carefully designed; we don’t currently have the ability in BizTalk to control when an Orchestration persists, but imagine if we had this feature in WF – low latency potentially becomes easier to achieve 😉
That’s enough for now; once more detail starts to emerge, I’ll post more information, and I’ll also work internally on locking down some clear “where to use what” style guidance for BizTalk and Dublin using some real-world customer scenarios.
Exciting times!