A few days ago, Daniel Selman of ILOG (now owned by IBM) published a solution to the Einstein puzzle. See http://blogs.ilog.com/brms/2009/02/16/einsteins-puzzle/. He did this in response to a challenge from James Owen, one of the organisers of the October Rules Fest conference. James invited the various vendors involved in the conference to provide a solution using whatever approach they deemed best. See http://orf2009.blogspot.com/2009/02/puzzle-1-for-conference.html.
Daniel suggests that, if ILOG had used a Rete rules engine, they would have had to implement ‘convoluted rules’ to solve the problem. Daniel is correct, but I thought it would be interesting to look a little deeper at why this is the case.
Read more at http://geekswithblogs.net/cyoung/archive/2009/02/24/129639.aspx.
With Microsoft's announcement at PDC this fall, and with the continued growth of Amazon's EC2 service and Google's AppEngine service, the industry seems to have people's heads up in the clouds. With this shift of focus, though, comes a myriad of questions about reliability, security, and portability. Potential customers of the cloud want to know that it can indeed be depended on. Executives want to know that the security of data in the cloud will not be compromised. Software engineers want to know that if a certain provider evaporates into thin air, minimal effort will be required to move deployed assets and keep mission-critical apps running.
With so many questions about elastic hosted services, and an as-yet-unclear track record for them, I cannot help but wonder if the cloud computing model will really take hold, or if it will just be a bridge to an even more impressive generation of computing architectures to follow. Maybe it will be both. That raises the question of what the generation that follows will look like.
Nearly 10 years ago, a program was created that would compel sci-fi geeks, amateur astronomers, scientists, programmers, and scholars to change their screensaver. SETI@home launched in 1999 and over the next 9 years would bring grid computing into the living rooms and dorm rooms of over 5 million people. The original software was an app and screen saver that would use idle computer time to drive the search for extraterrestrial intelligence. It harnessed the untapped power of millions of computers with unrealized potential. It was built as an experiment, to break free of the constraints imposed by a supercomputer. Even hosted clusters have their limits, and some problems go beyond those limits.
With cloud computing the sky is the limit, but what if this world is not enough? What if a single company's data centers won't cut it? What if you want to maintain your own data center, while still being able to tap additional resources on demand? What if you wanted to maximize and monetize under-utilized computational resources, instead of just writing them off as depreciating assets each year?
That seemed to be the aim of the now-defunct CPUShare. It offered users the opportunity to sell their idle CPU time to people who needed computational resources. What if the spirit of this project were matched with the vision of Windows Azure, or the ease of entry of Amazon's EC2? What if storage, RAM, and even bandwidth were added into the mix? What if each of these were currency in a new economy? This new economy would not be just one company's slice of the cloud; it would be the whole thing.
Crowdsourcing CPU hours might very well be the future, or it may be a pipe dream that will never be possible. It raises the same questions of reliability, security, and portability, and it brings with it the question of control. The way the industry deals with the questions about cloud computing today could very well pave the way for crowd computing to be the driving force behind Web 4.0 and beyond.
I couldn’t resist commenting on this issue. I was just doing some final prep for my VBUG talk tomorrow and came across Richard Hallgren’s oddly titled post – Does BizTalk have man-boobs? Richard discusses a QCon webcast of a session by Martin Fowler and Jim Webber and writes
“Their main point is that we […]
I’m thrilled to announce that Dallas TechFest 2009 is a go for Friday, April 17th, 2009. As such, we are on the hunt for speakers willing to come out and share their knowledge. If you’re interested in speaking at Dallas TechFest, here is what we would like:
- Full Name
- Email Address
- Blog and/or Twitter if you have one
- Short Bio – Tell us about yourself, and make it something we can share with the community if you’re accepted. Please include any qualifications you might have regarding your topics in this.
Then for each session you’d like to present please send:
- Session Title
- Abstract – A simple paragraph explaining your topic in more detail than the title gives.
- Session Length – Our standard length is 75 minutes, but we will accept a limited number of “double length” sessions, which would be 3 hours.
Once you’ve collected all that information, please send it to dtf-speakers@TimRayburn.net and we’ll review the submissions and let you know. We would like your submissions by March 6th, 2009 so we can finalize the schedule. We know this is short notice for speakers, and we truly appreciate the great presenters in our region rising to the challenge.
Please remember that we welcome all technologies at Dallas TechFest and expect to have tracks on Java, Ruby, PHP, ColdFusion, Adobe Flex, as well as Microsoft technologies.
We had an issue with one of our BizTalk estates with incoming messages being suspended if they were bigger than the large message threshold:
A response message sent to adapter “WCF-BasicHttp” on Receive Location: “x” with URI:”y” is suspended. Error details: There was a failure executing the response(receive) pipeline: “z” Source: “Unknown ” Send Port: “x” URI: “y” Reason: 0x8004d027
The error message wasn’t particularly helpful, but the reason code 0x8004d027 turned up a few threads and posts relating to DTC. This made sense: we’re in a distributed environment, so DTC is used to create message fragments under a transaction when the whole message exceeds the threshold size. Using Web Service Studio I found the expected message size, and setting the large message threshold above that figure fixed the problem, which seemed to confirm it.
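To see why DTC gets involved only for large messages, here is a minimal sketch of the fragmentation arithmetic. The 102,400-byte default is an assumption for illustration (BizTalk's large message fragment size is configurable per group); the function name is invented:

```python
import math

def fragment_count(message_size: int, threshold: int = 102_400) -> int:
    """Number of pieces a message would be written in, given a fragment
    threshold. Messages at or below the threshold go in one piece; larger
    messages are split into ceil(size / threshold) fragments, and it is the
    multi-fragment write that runs under a DTC transaction.
    The default threshold is an assumed illustrative value.
    """
    if message_size <= threshold:
        return 1
    return math.ceil(message_size / threshold)

# A 1 MB message against the assumed 100 KB threshold:
print(fragment_count(1_048_576))  # 11 fragments, written transactionally
print(fragment_count(50_000))     # 1 (under threshold, no fragmentation)
```

Raising the threshold above the expected message size drives the count back to 1, which is why the DTC permission problem disappeared when we did so.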
We finally tracked the issue down to permissions on the cluster where MSDTC was running. Buried in the Event Log for the BizTalk host suspending the messages was this error:
The application could not connect to MSDTC because of insufficient permissions. Please make sure that the identity under which the application is running has permission to access the cluster. Please refer to MSCS documentation on how to grant permissions. Error Specifics: d:\nt\com\complus\dtc\dtc\msdtcprx\src\dtcinit.cpp:652, Pid: 6772 No Callstack, CmdLine: “C:\Program Files\Microsoft BizTalk Server 2006\BTSNTSvc.exe” -group “BizTalk Group” -name “x” -btsapp “y”
This was logged when the BizTalk service started, which had been the day before we found the suspended messages, so I didn’t see the connection. This post from Ben Cops: http://bencops.blogspot.com/2008/12/error-on-installing-sso-on-clustered.html pointed me in the right direction, and the steps for adding permissions are simple – the only oddity is that you set permissions at the cluster level, not at the resource level:
- Open Cluster Administrator
- Right-click the cluster (the topmost node) and select Properties
- Under Security, add the BizTalk user group with Full Control
This and a host restart fixed the issue. Along the way we noticed a couple of other issues in our MSDTC configuration, which we tracked down and fixed with DTCPing. It’s worth noting that in Windows Server 2003 the properties for MSDTC under dcomcnfg may not correctly reflect the registry settings, so the DTC security configuration you see in the UI may differ from what is actually in effect.
The DTCPing log will tell you which registry settings are in use, and the BizTalk Best Practices Analyzer will warn you if the DTC security settings are incorrect, so it’s worth double-checking this.
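A hypothetical helper for that double-check: compare the settings you believe are configured (from the dcomcnfg UI) against the values actually in force (as reported by a DTCPing log). The setting names below follow the MSDTC security registry values; treat the exact names, and the helper itself, as illustrative assumptions:

```python
# Illustrative sketch: the setting names mirror MSDTC security registry
# values, but treat them as assumptions; dtc_mismatches is an invented helper.
EXPECTED_KEYS = (
    "NetworkDtcAccess",
    "NetworkDtcAccessInbound",
    "NetworkDtcAccessOutbound",
    "XaTransactions",
)

def dtc_mismatches(ui_settings: dict, registry_settings: dict) -> list:
    """Return the names of settings where the UI and registry disagree."""
    return [
        name for name in EXPECTED_KEYS
        if ui_settings.get(name) != registry_settings.get(name)
    ]

# dcomcnfg appears to show network DTC access enabled in both directions...
ui = {"NetworkDtcAccess": 1, "NetworkDtcAccessInbound": 1,
      "NetworkDtcAccessOutbound": 1, "XaTransactions": 0}
# ...but the values DTCPing reports say inbound access is actually off.
reg = {"NetworkDtcAccess": 1, "NetworkDtcAccessInbound": 0,
       "NetworkDtcAccessOutbound": 1, "XaTransactions": 0}

print(dtc_mismatches(ui, reg))  # ['NetworkDtcAccessInbound']
```

Any name in the output is a setting to reconcile in the registry before trusting what the UI shows.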
I came across a little nasty when trying to send email from the SMTP adapter. If you look down deep …
Quick note: the slides for the Auckland Connected Systems User Group presentation last week “Introduction to Oslo” with Jeremy Boyd can be downloaded from the meeting page (under attachments):
Also, get the meeting demos from JB’s blog:
This was a great presentation and I can’t thank JB enough for coming up to Auckland to present for […]
Hopefully you’ve been keeping an eye on this in recent times.
This is a set of adapters that Microsoft has built on top of the WCF LOB Adapter SDK (a platform-neutral, .NET-based adapter framework). This allows the adapters to run within the .NET stack almost anywhere: console apps, custom web services, BizTalk, SSIS, SharePoint, etc. (Build once, run everywhere. 🙂) The public beta is available as a 120-day download from Microsoft.
What to expect with the Adapter Pack (in addition to a free migration tool to go from the old adapters to the new):
- 64-bit support
- Added performance counters
- Notification support
- 64-bit support
- Polling stored procedures
- Performance counters
- 64-bit support and improved SQL
- Display complex binding properties
- Display metadata WSDL in a web control
SAP data provider:
- Support for more operators in SAP queries
- SAP SSRS support in VS2008
- New samples for the SQL and Oracle eBiz adapters
- Repackaged samples for BAP 1.0
I’ve written a few queries to get counts from a BizTalk tracking database – grouped by schema, schema per host, ports, and orchestration event tracking – to help me and the team advise our clients on what tracking should be turned off in their BizTalk environment. They go well together with the report you get […]
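The shape of those grouped counts can be sketched against a toy in-memory table. The table and column names below are invented for illustration; the real BizTalk tracking (DTA) database schema is different, so treat this only as a sketch of the grouping idea:

```python
import sqlite3

# Toy stand-in for tracked message events; names are invented, not the
# real DTA schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tracked_events (schema_name TEXT, host_name TEXT)")
conn.executemany(
    "INSERT INTO tracked_events VALUES (?, ?)",
    [("OrderSchema", "HostA"), ("OrderSchema", "HostA"),
     ("OrderSchema", "HostB"), ("InvoiceSchema", "HostB")],
)

# Counts per schema, heaviest first: the candidates for switching
# tracking off.
per_schema = conn.execute(
    "SELECT schema_name, COUNT(*) FROM tracked_events "
    "GROUP BY schema_name ORDER BY COUNT(*) DESC"
).fetchall()

# Counts per schema/host pair, to see where the tracking load lands.
per_schema_host = conn.execute(
    "SELECT schema_name, host_name, COUNT(*) FROM tracked_events "
    "GROUP BY schema_name, host_name"
).fetchall()

print(per_schema)  # [('OrderSchema', 3), ('InvoiceSchema', 1)]
print(per_schema_host)
```

The same GROUP BY pattern, pointed at the real tracking tables, is what produces the per-schema and per-host figures described above.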
I will be presenting an introductory session on BizTalk 2006 R2 (and spending some time on the WCF adapters) at the local chapter of VBUG in Bracknell. Of course, we will also spend a little while discussing the impact of Oslo and Dublin and the factors to keep in mind when choosing your integration technologies.
Despite the […]