by community-syndication | Jan 31, 2011 | BizTalk Community Blogs via Syndication
Welcome to the 27th interview in my series with thought leaders in the “connected systems” space. This month, we’re sitting down with Rick Garibay who is GM of the Connected Systems group at Neudesic, blogger, Microsoft MVP and rabid tweeter. Let’s jump in. Q: Lately you’ve been evangelizing Windows Server AppFabric, WF and other new […]
by community-syndication | Jan 31, 2011 | BizTalk Community Blogs via Syndication
Introduction
For the past year or so I have been working on a web development project using Java. Like many Java projects, the project I was working on used 3rd party frameworks like Spring and Hibernate rather than utilizing the official Java EE stack. Many developers have turned to these frameworks because the official Java EE […]
by community-syndication | Jan 31, 2011 | BizTalk Community Blogs via Syndication
Here are a few TechNet Wiki articles that have become available recently. I have usually found it difficult to find relevant content on the TechNet Wiki so I thought it would be good to highlight a few of these. Improving performance when executing a BRE rule: http://social.technet.microsoft.com/wiki/contents/articles/high-cpu-when-executing-a-business-rules-engine-bre-policy.aspx Implement ordered messaging with BizTalk and SSIS: http://social.technet.microsoft.com/wiki/contents/articles/implementing-end-to-end-ordered-delivery-using-microsoft-biztalk-server-and-sql-server-service-broker.aspx […]
by community-syndication | Jan 30, 2011 | BizTalk Community Blogs via Syndication
Today I read a rather interesting and profound statement in the recently released book “BizTalk 2010 Recipes” by Mark Beckner.
It was so interesting that I’d like to share it with you. It speaks of a future I’d tend to agree with: “in the decade ahead, middleware will be more important and relevant than ever before.”
“Why does middleware like this have such staying power? You’d think that newer advances in technology like web services, SOA, and software as a service (SaaS) would render applications much more inherently interoperable and that the pain and complexity of systems integration would be a thing of the past.
The truth is that enterprises of all sizes still experience tremendous cost and complexity when extending and customizing their applications. Given the recent constraints of the economy, IT departments must increasingly find new ways to do more with less, which means finding less expensive ways to develop new capabilities that meet the needs of the business. At the same time, the demands of business users are ever increasing; environments of great predictability and stability have given way to business conditions that are continually changing, with shorter windows of opportunity and greater impacts of globalization and regulation. These factors all put tremendous stress on IT departments to find new ways to bridge the demanding needs of the users and businesses with the reality of their packaged applications.
This leads back to the reason why middleware – certainly not sexy as technologies go – continues to deliver tremendous value to both businesses and IT departments. As the technology’s name suggests, it sits in the middle between the applications you use and the underlying infrastructure; this enables IT departments to continue to innovate at the infrastructure level with shifts like many-core processing, virtualization, and cloud computing. Instead of having to continually rewrite your LOB applications to tap into infrastructure advances, you can depend on middleware to provide a higher level of abstraction, so you can focus your efforts on writing the business logic, not plumbing code. Using middleware also helps future-proof your applications, so that even as you move ahead to next-generation development tools and platforms (including the current trends toward composite applications and platforms as a service), you can still leverage the existing investments you’ve made over the years.
So, in the decade ahead, middleware will be more important and relevant than ever before. “
Burley Kawasaki
Director of Product Management, Microsoft Corporation
by community-syndication | Jan 30, 2011 | BizTalk Community Blogs via Syndication
Introduction
Welcome to the third and final installment of the jQuery saga. This post will cover some of the more advanced features of jQuery such as Ajax, utility functions and plugins. Along the way I will give examples that show how to use the functionality. I suggest that you read through the first two parts of […]
by community-syndication | Jan 28, 2011 | BizTalk Community Blogs via Syndication
I’m currently in the process of migrating our many BizTalk projects from MKS source control to TFS. While we will be using TFS for work item tracking and source control, we will continue to use Cruise Control for continuous integration, although I’m updating this to CCNet 1.5 at the same time.
I’ll post a few things, as much as a reminder to myself, about some of the problems we come across.
Problem
After the first build of our code the next time a build is triggered an error is encountered by the TFS source control block refreshing the source code.
System.IO.IOException: The directory is not empty.
at System.IO.Directory.DeleteHelper(String fullPath, String userPath, Boolean recursive)
at System.IO.Directory.Delete(String fullPath, String userPath, Boolean recursive)
at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Vsts.deleteDirectory(String path)
at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Vsts.GetSource(IIntegrationResult result)
at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Build(IIntegrationResult result)
at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)
Solution
The problem seems to be with a folder called TestLocations, which is created by the build process and used along with the file adapter as a way to get messages into BizTalk. For some reason, when the source control block does a full refresh of the code, it does not get rid of this folder, then complains that this is a problem and fails the build.
Interestingly, there are other folders created by the build which are deleted fine. My assumption is that this is something to do with the file adapter polling the directory. However, note that we have not had this problem with other source control blocks in the past.
To work around this I have added a prebuild task to the ccnet.config file to delete this folder before the source control block is executed. See below for an example:
<prebuild>
  <exec>
    <executable>cmd.exe</executable>
    <buildArgs>/c "if exist "C:\<MyCode>\TestLocations" rd /s /q "C:\<MyCode>\TestLocations""</buildArgs>
  </exec>
</prebuild>
by community-syndication | Jan 28, 2011 | BizTalk Community Blogs via Syndication
Here’s a good ISO image mounter that works on Server 2008
http://static.slysoft.com/SetupVirtualCloneDrive.exe
by community-syndication | Jan 27, 2011 | BizTalk Community Blogs via Syndication
Recently, in October 2010, at our Professional Developers Conference (PDC) we made some exciting roadmap announcements regarding Windows Azure AppFabric, and we have already gotten very positive feedback on this roadmap from both customers and analysts (Gartner Names Windows Azure AppFabric “A Strategic Core of Microsoft’s Cloud Platform”).
As a result of these announcements we wanted to have a series of blog posts that give a refreshed introduction to Windows Azure AppFabric, its vision and roadmap.
Until the announcements at PDC we presented Windows Azure AppFabric as a set of technologies that enable customers to bridge applications in a secure manner across on-premises environments and the cloud. This is all still true, but with the recent announcements we now broaden this and talk about Windows Azure AppFabric as a comprehensive cloud middleware platform that raises the level of abstraction when developing applications on the Windows Azure Platform.
But first, let’s begin by explaining what exactly it is we are trying to solve.
Businesses of all sizes experience tremendous cost and complexity when extending and customizing their applications today. Given the constraints of the economy, developers must find new ways to do more with less while simultaneously finding new, innovative ways to keep up with the changing needs of the business. This has led to the emergence of composite applications as a solution development approach. Instead of significantly modifying existing applications and systems, and relying solely on the packaged software vendor when there is a new business need, developers are finding it a lot cheaper and more flexible to build these composite applications on top of, and surrounding, existing applications and systems.
Developers are now also starting to evaluate newer cloud-based platforms, such as the Windows Azure Platform, as a way to gain greater efficiency and agility. The promised benefits of cloud development are impressive, enabling greater focus on the business rather than on running the infrastructure.
As noted earlier, customers already have a very large base of existing heterogeneous and distributed business applications spanning different platforms, vendors and technologies. The use of cloud adds complexity to this environment, since the services and components used in cloud applications are inherently distributed across organizational boundaries. Understanding all of the components of your application – and managing them across the full application lifecycle – is tremendously challenging.
Finally, building cloud applications often introduces new programming models, tools and runtimes, making it difficult for customers to enhance, or transition from, their existing server-based applications.
Windows Azure AppFabric is meant to address these challenges through three main concepts:
1. Middleware Services – pre-built, higher-level services that developers can use when developing their applications, instead of having to build these capabilities on their own. This reduces the complexity of building the application and saves the developer a lot of time.
2. Building Composite Applications – capabilities that enable you to assemble, deploy and manage a composite application made up of several different components as a single logical entity.
3. Scale-out Application Infrastructure – capabilities that make it seamless to get the benefits of cloud, such as elastic scale, high availability, density, and multi-tenancy.
So, with Windows Azure AppFabric you don’t just get the common advantages of cloud computing, such as not having to own and manage the infrastructure; you also get pre-built services, a development model, tools, and management capabilities that help you build and run your application in the right way and enjoy more of the great benefits of cloud computing, such as elastic scale, high availability, multi-tenancy, and high density.
Tune in to the future blog posts in this series to learn more about these capabilities and how they help address the challenges noted above.
Other places to learn more about Windows Azure AppFabric are:
If you haven’t already taken advantage of our free trial offer, make sure to click on the image below and start using Windows Azure AppFabric today!
Please leave your comments and questions in the comments section below.
Itai Raz, Product Manager
by community-syndication | Jan 26, 2011 | BizTalk Community Blogs via Syndication
While looking into an authentication problem I discovered this ’new’ header sent back from a SharePoint 2010 machine.
Health Score? Hmmm, I thought, what are the max and min values? Is this good, bad, or don’t care?
So SharePoint 2010 has several throttling features, such as Client Auto Back-off which, when triggered, prioritises HTTP requests: HTTP POSTs are not delayed or throttled, but HTTP GETs and new HTTP connections are.
Here is one MS page that barely describes the header – it could do with updating.
SharePoint 2010 determines the health of a server by initially looking at system counters.
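As a quick client-side sketch of how you might honour that score (assuming the response headers are available as a dict, and using the X-SharePointHealthScore header name that SharePoint 2010 front ends emit; the helper names here are my own):

```python
# Sketch of a client honouring SharePoint 2010's health score header.
# Assumes `headers` is a dict of response header name -> value.

def get_health_score(headers):
    """Return the X-SharePointHealthScore value as an int, or None if absent."""
    value = headers.get("X-SharePointHealthScore")
    return int(value) if value is not None else None

def should_back_off(headers, threshold=10):
    """Back off (delay GETs and new connections) once the score passes the threshold."""
    score = get_health_score(headers)
    return score is not None and score > threshold
```

A busy client could poll the header on each response and start spacing out its GETs as soon as `should_back_off` returns True.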
Let’s dig further.
Upon reflecting the classic Microsoft.SharePoint.dll, there’s a Microsoft.SharePoint.Diagnostics section which I thought would be a great place to start. I found a SPWebFrontEndDiagnosticsPerformanceCounterProvider class (amongst others there’s a SPDatabaseServer class as well).
The collection[0] = … line in that class refers to a collection of performance counters. So, putting all this together, the performance counters are:
- WebAppPool – “SharePoint Foundation”
- Global Heap Size
- Native Heap Count
- Process ID
- OWSTimer & W3WP
- Processor (_total)
It appears the main class behind all of this is SPHttpThrottleSettings, where throttling appears to be turned off in ’Single-Server’ deployments.
Digging further I came across the big-daddy class of it all (I think) – SPPerformanceInspector – notice the method IsInThrottling() and the two constants that describe the displayed throttled messages.
I also noticed another method on this class, SetupRegKeyHealthScore, where HKLM\SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\14.0\WSS\ServerHealthScore is the actual value you want to assign.
A value of 0 is great, 10 is bad. Over 10 means the server will go into throttling (letting your clients know as well).
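Those semantics boil down to a tiny function (a sketch; the “healthy”/“degraded” band labels are my own, while the 0-is-great, 10-is-bad, over-10-throttles behaviour is as described above):

```python
def classify_server_health(score):
    """Map a ServerHealthScore to a rough state: 0 is great, 10 is bad,
    and anything over 10 means the server has gone into throttling."""
    if score > 10:
        return "throttling"
    return "healthy" if score == 0 else "degraded"
```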
There are many other things here, but my head is swimming.
Hope we unraveled this mystery a little more.
Mick.
by community-syndication | Jan 26, 2011 | BizTalk Community Blogs via Syndication
By default, the following BizTalk jobs aren’t configured and enabled upon installation: Backup BizTalk Server (BizTalkMgmtDb), DTA Purge and Archive (BizTalkDTADb), MessageBox_Message_Cleanup_BizTalkMsgBoxDb. If you want these functionalities you must configure and enable them. How to configure Backup BizTalk Server (BizTalkMgmtDb): this job consists of four steps: Step 1 – Set Compression Option – Enable or […]