by Rene Brauwers | May 29, 2014 | BizTalk Community Blogs via Syndication
Introduction
On May 12th, 2014, Microsoft released a new Azure service called API Management.
Currently Azure API Management is still in preview, but it already enables us to easily create an API façade over a diverse set of currently available Azure services like Cloud Services, Mobile Services, Service Bus, as well as on-premise web services.
Instead of listing all the features and writing an extensive introduction about Azure API Management, I'd rather supply you with a few links which contain more information about the Azure API Management service:
Microsoft:
http://azure.microsoft.com/en-us/services/api-management/
http://azure.microsoft.com/en-us/documentation/services/api-management/
Blogs:
http://trinityordestiny.blogspot.in/2014/05/azure-api-management.html
http://blog.codit.eu/post/2014/05/14/Microsoft-Azure-API-Management-Getting-started.aspx
Some more background
As most of my day-to-day work revolves around the Microsoft Integration space, in which I am mainly focusing on BizTalk Server, BizTalk Services and Azure in general, I was looking to find a scenario in which I, as an Integration person, could and would use Azure API Management.
The first thing which popped into my mind: wouldn't it be great to virtualize my existing BizTalk Services bridges using Azure API Management? Well, currently this is not possible, as the only authentication and authorization method on a BizTalk Services bridge is ACS (Access Control Service), and this is not supported in the Azure API Management service.
Luckily, the latest feature release of BizTalk Services included support for Azure Service Bus Topics/Queues as a source 🙂 and, luckily for me, Azure Service Bus supports SAS (Shared Access Signatures); using such a signature I am able to generate a token and use this token in the Authorization HTTP header of my request.
Knowing the above, I should be able to define APIs which virtualize my Service Bus endpoints, create a product combining the defined APIs, and assign policies to my API operations.
Sounds easy, doesn't it? Well, it actually is. So without further ado, let's dive into a scenario which involves exposing an Azure Service Bus Topic using Azure API Management.
Getting started
Before we actually start, please note that the sample data (topic names, user accounts, topic subscriptions, topic subscription rules, etc.) I use for this 'step by step' guide is meant for a future blog post 😉 extending an article I posted a while back on the TechNet Wiki.
Other points you should keep in mind are:
- Messages sent to a TOPIC may not exceed 256 KB in size; if a message is larger, Service Bus will return an error telling you that the message is too large.
- Communication to Service Bus is asynchronous; thus we send a message and all we get back is an HTTP code telling us the status of the submission (200 OK, 202 Accepted, etc.) or an error message indicating that something went wrong (401 Unauthorized, etc.). So actually our scenario is using the Fire and Forget principle.
- You will have to create a SAS token, which needs to be sent in the Authorization header of the message in order to authenticate to Service Bus (a minimal sketch of the signing algorithm follows this list).
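To demystify that token up front: a Service Bus SAS token is simply an HMAC-SHA256 signature over the encoded resource URI and an expiry timestamp. Below a minimal C# sketch of the documented signing algorithm (method and variable names are mine, for illustration only):

// Minimal sketch of the documented Service Bus SAS signing algorithm
static string CreateSasToken(string resourceUri, string policyName, string key, DateTime expiryUtc)
{
    // Expiry is expressed as seconds since the Unix epoch
    string expiry = ((long)(expiryUtc - new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)).TotalSeconds).ToString();
    string stringToSign = Uri.EscapeDataString(resourceUri) + "\n" + expiry;
    using (var hmac = new System.Security.Cryptography.HMACSHA256(System.Text.Encoding.UTF8.GetBytes(key)))
    {
        string signature = Convert.ToBase64String(hmac.ComputeHash(System.Text.Encoding.UTF8.GetBytes(stringToSign)));
        return String.Format("SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}",
            Uri.EscapeDataString(resourceUri), Uri.EscapeDataString(signature), expiry, policyName);
    }
}

The resulting string is sent verbatim as the value of the Authorization header.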
Enough said, let's get started.
For the benefit of the reader I've added hyperlinks below so that you can skip the sections you might already know.
Sections Involved
If you don’t already have an Azure account, you can sign up for a free trial here
Once you’ve signed up for an Azure account, login to the Azure Portal and create a new Azure Service Bus Topic by following the steps listed below.
- If you have logged in to the preview portal, click on the 'tile' Azure Portal. This will redirect you to an alternative portal which allows for more complete management of your Azure services.
- In the 'traditional' portal click on Service Bus.
- Create a new namespace for your Service Bus entity.
- Enter a name for your Service Bus namespace, select the region to which it should be deployed and select the checkmark which starts the provisioning.
- Once the provisioning has finished, select the Service Bus entity and click on Connection Information.
- A window will appear with Access Connection Information. In this screen copy the ACS Connection String to your clipboard (we will need this connection string later on) and then click on the checkmark to close the window.
Now that a new Service Bus entity has been provisioned, we can proceed with creating a TOPIC within it. For this we will use the Service Bus Explorer from Paolo Salvatori, which you can download here. Once you've downloaded this tool, extract it, execute the ServiceBusExplorer.exe file and follow the steps below.
- Press CTRL+N, which will open the "Connect to Service Bus Namespace" window.
- In the Service Bus Namespaces box, select the following option from the dropdown: "Enter Connection String".
- Copy and paste the ACS Connection String (you copied it earlier, see previous section step 6) and once done press "OK".
- The Service Bus Explorer should now have made a connection to your Service Bus entity.
- In the menu on your left, select the TOPIC node, right click and select "Create Topic".
- In the window which now appears enter the TOPIC name "managementapi_requests" in the "Path" box and leave all other fields blank (we will use the defaults). Once done press the "Create" button.
- Your new Topic should now have been created.
Now that we have created a TOPIC it is time to add some subscriptions. Each subscription we create will contain a filter, such that messages posted to this TOPIC end up in a subscription based on values set in the HTTP header of the submitted messages. In order to set up the subscriptions, follow the steps below:
- Go back to your newly created TOPIC in the Service Bus Explorer.
- Right click on the TOPIC and select "Create Subscription".
- The Create Subscription window will now show, in which you should execute the following steps:
- A) Subscription Name: BusinessActivityMonitoring
- B) Filter: MessageAction='BusinessActivityMonitoring'
- C) Click on the Create button
- Now repeat steps 2 and 3 in order to create the following subscription:
- Subscription Name: Archive
- Filter: 1 = 1
At this point we have set up our Topic and added some subscriptions. The next step consists of adding a Shared Access Policy to our Topic. This policy then allows us to generate a SAS token which later on will be used to authenticate against our newly created Topic. So first things first, let's assign a Shared Access Policy. The next steps will guide you through this.
- Go to the Service Bus menu item and select the Service Bus service you created earlier by clicking on it.
- Now select TOPICS from the tab menu.
- Select the Connection Information icon at the bottom.
- A new window will pop up; in this window click on the link "Click here to configure".
- Now under the shared access policies:
- A) Create a new policy named 'API_Send'
- B) Assign the Send permission to this policy
- C) Create a new policy named 'API_Receive'
- D) Assign the Listen permission to this policy
- E) Create a new policy named 'API_Manage'
- F) Assign the Manage permission to this policy
- G) Click on the SAVE icon at the bottom
- At this point a Primary and Secondary key should have been generated for each policy.
Once we've added the policies to our Topic we can generate a token. In order to generate a token, I've built a small forms application which uses part of the code originally published by Santosh Chandwani. Click the following link to download the application: "Sas Token Generator". Using the SAS Token Generator application we will now generate the token signatures.
- Fill out the following form data:
- A) Resource Uri = HTTPS endpoint of your TOPIC
- B) Policy Name = API_Send
- C) Key = The Primary Key as previously generated
- D) Expiry Date = Select the date you want the SAS token to expire
- E) Click on Generate
- After you have clicked Generate, by default a file named SAS_tokens.txt will be created on your desktop containing all generated SAS tokens. Once saved, you will be asked if you want to copy the generated token to your clipboard. Below are 2 images depicting the message prompt as well as the contents stored in the generated file. Perform step 2 for the other 2 policies as well (API_Receive and API_Manage).
At this point we have set up our Service Bus Topic and subscriptions and have generated our SAS tokens, so we are all set to start exposing the newly created Service Bus Topic using Azure API Management. But before we can start with this, we need to create a new API Management instance. The steps below detail how to do this.
- Click on API Management, in the right menu-bar, and click on the link "Create an API Management Service".
- A menu will pop up in which you need to select CREATE once more.
- At this point a new window will appear:
- A) Url: Fill out a unique name
- B) Pricing Tier: Select Developer
- C) Subscription: Check your subscription Id (currently it does not show the subscription name, which I expect to be fixed pretty soon)
- D) Region: Select a region close to you
- E) Click on the right arrow
- You will now end up at step 2:
- A) Organization Name: Fill out your organization name
- B) Administration E-Mail: Enter your email address
- C) Click on the 'Checkmark' icon
- In about 15 minutes your API Management service will have been created and you will be able to log in.
Now that we have provisioned our Azure API Management service it is time to create and configure an API which exposes the previously defined Azure Service Bus Topic such that we can send messages to it. The API which we are about to create will expose one operation pointing to the Azure Service Bus Topic and will accept both XML and JSON messages. Later on we will define a policy which ensures that a received JSON message is converted to XML, and that the actual calls to the Service Bus REST API are properly authenticated using the SAS token we created earlier.
So let’s get started with the creation of our API by Clicking the Manage Icon which should be visible in the menu bar at the bottom of your window.
Once you’ve clicked the Manage icon, you should automatically be redirected to the API Management portal.
Now that you are in the Azure API Management Administration Portal you can start with creating and configuring a new API, which will virtualize your Service Bus Topic REST endpoints. In order to do so, follow these steps.
- Click on the APIs menu item on your left.
- Click on ADD API.
- A new window will appear; fill out the following details:
- A) Web API title
- Public name of the API as it would appear on the developer and admin portals.
- B) Web Service Uri
- This should point to your Azure Service Bus REST endpoint. Click on this link to get more information. The format would be: http{s}://{serviceNamespace}.servicebus.windows.net/{topic path}/messages
- C) Web API Uri suffix
- Last part of the API's public URL. This URL will be used by API consumers for sending requests to the web service.
- D) Once done, press Save
- Once the API has been created you will end up at the API configuration page.
- Now click on the Settings tab:
- A) Enter a description
- B) Ensure to set authentication to None (we will use the SAS token later on to authenticate)
- C) Press Save
- Now click on the Operations tab, and click on ADD OPERATION.
- Note: detailed information on how to configure an operation can be found here
- A form will appear which allows you to configure and add an operation to the service. By default the Signature menu item is selected, so we start with configuring the signature of our operation:
- A) HTTP verb: Choose POST as we will POST messages to our Service Bus Topic
- B) URL Template: We will not use a URL template, so as a default enter a forward slash " / "
- C) Display Name: Enter a name which will be used to identify the operation
- D) Description: Describe what your operation does
- Now click on the Body item in the menu bar on the left underneath REQUESTS (we will skip caching as we don't want to cache anything), and fill out the following field:
- A) Description: add a description detailing how the request body should be represented
- Now click on the ADD REPRESENTATION item just underneath the description part and enter application/xml.
- Once you've added the representation type, you can add a representation example.
- Now once again click on the ADD REPRESENTATION item just underneath the description part and enter application/json.
- Once you've added the representation type, you can add a representation example.
- Now click on the ADD item in the menu bar on the left underneath RESPONSES (we will skip Caching, Parameters and Body as we don't need them).
- Start typing and select a response code you wish to return once the message has been sent to the service operation.
- Now you could add a description and add a REPRESENTATION; however in our case we will skip this, as a response code 202 Accepted is all we will return.
- Press Save
Now that we have defined our API we need to make it part of a Product. Within Azure API Management this concept has been introduced as a container holding one or more API definitions to which consumers (developers) can subscribe. In short: if your API is not part of a product definition, consumers cannot subscribe to it and use it. More information regarding the 'Product' definition can be found here.
In order to create a new product, we need to perform the following steps:
- Select the Products menu item from within the API Management Administration Portal and click on it.
- A new window will appear listing all products currently available; in this window click on the ADD PRODUCT item.
- Fill out the following form items in order to create a product:
- A) Title: Enter the name of the product
- B) Description: Add a description of the product
- C) Require subscription approval: Ensure to check this, as this will require any subscription requests to this product to be approved first
- D) Press Save
- Now that we have created our Product it is time to see if there are any other things we need to configure before we add policies to it and publish it. Check the settings by clicking on the newly created product.
- On the summary tab, click on the link ADD API TO PRODUCT.
- A new window will pop up; select the API you want to add to the product and once done click Save.
At this point we have created a product but have not yet published it. We will publish it in a bit, but first we need to set up some policies for the API and the operation we've created earlier. In order to do this, follow these steps:
- From the menu bar on your left select the Policies item, and in the main window in the policy scope section make the following selections:
- A) For the API select the API you created earlier
- B) For the Operation select the operation you created earlier
- Now in the Policy definition section, click on ADD Policy.
- At this point the empty policy definition is visible.
For our API operation to function correctly we are going to have to add a few policies. These policies should take care of the following functionality:
- Authenticate to our Service Bus Topic using our previously created SAS token
- Automatically convert potential JSON messages to their equivalent XML counterpart
- Add some additional context information to the inbound message, which is converted to brokered message properties when passed on to Azure Service Bus
General information on which policies are available to you within the Azure API Management Administration Portal and how to use them can be found here
The next few steps will show you how we can add policy statements which will ensure the above mentioned functionality is added.
- In the Policy definition section, ensure to place your cursor after the <inbound> tag.
- From the Policy statements, select and click on the "Set HTTP header" statement.
- A "set-header" tag will be added to the Policy Definition area, which will set the Authorization header containing the SAS token we created earlier. The steps required are listed below:
- A) Put in the value "Authorization" for the attribute "name"
- B) Put in the value "skip" for the attribute "exists-action"
- C) Now get the SAS token you created earlier, wrap the token string in a CDATA section and put all of this in between the "value" element
Textual example:
<set-header name="Authorization" exists-action="skip">
<value><![CDATA[YOUR SAS TOKEN STRING]]></value>
</set-header>
- Place your cursor just after the closing tag </set-header>.
- From the Policy statements, select and click on the "Convert JSON to XML" statement.
- A "json-to-xml" tag will be added to the Policy Definition area, which contains the instructions resulting in JSON messages being converted to XML. Ensure that the tag is configured as mentioned below:
- A) Put in the value "content-type-json" for the attribute "apply"
- B) Put in the value "false" for the attribute "consider-accept-header"
Textual example:
<json-to-xml apply="content-type-json" consider-accept-header="false"/>
- Now add "set-header" policy statements adding the following headers:
- A) Header name: MessageId
- exists-action: "skip"
- value: "00000000-0000-0000-0000-000000000000"
- B) Header name: MessageAction
- exists-action: "skip"
- value: "Undefined"
Textual example:
<set-header name="MessageId" exists-action="skip">
<value>00000000-0000-0000-0000-000000000000</value>
</set-header>
<set-header name="MessageAction" exists-action="skip">
<value>Undefined</value>
</set-header>
- Once you have added all the policy statements, press the Save button.
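For reference, after completing the steps above your complete policy definition should look roughly like the example below (the CDATA content is a placeholder for your actual SAS token string):

<policies>
    <inbound>
        <set-header name="Authorization" exists-action="skip">
            <value><![CDATA[YOUR SAS TOKEN STRING]]></value>
        </set-header>
        <json-to-xml apply="content-type-json" consider-accept-header="false" />
        <set-header name="MessageId" exists-action="skip">
            <value>00000000-0000-0000-0000-000000000000</value>
        </set-header>
        <set-header name="MessageAction" exists-action="skip">
            <value>Undefined</value>
        </set-header>
    </inbound>
    <outbound />
</policies>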
Now that we have created a new product and assigned policies, we need to perform some group/user related actions. This way we can set up a dedicated group of users which is allowed to use our product and its API.
The steps below will guide you through this process
- Now select the Visibility tab, and click on the MANAGE GROUPS link.
- You will be redirected to the GROUPS page; on this page click on the ADD GROUP link.
- A new window will pop up; fill out the following fields:
- A) Name: Unique name of the group
- B) Description: General description of the group and its purpose
- C) Click on Save
- After you've created the new group, select the Developers menu item and in the main window click on ADD USER.
- Once again a new window will pop up. In this window fill out the following fields:
- A) Email
- B) Password
- C) First and last name
- D) Press Save
- Now that we have created a new user, we need to make it a member of our group. In order to do so, follow these steps:
- A) Ensure to select the new user
- B) Click on the ADD TO GROUP item and add the user to the earlier created group
- Now go back to the PRODUCTS menu item and select the product you created earlier.
- In the main window follow these steps:
- A) Click on the Visibility tab
- B) Allow the new group to subscribe to your product
- C) Click Save
- Now click on the Summary tab and click on the PUBLISH link.
- Now select the Developers menu item and click on the user you created earlier.
- The main window will now change; in this window click on ADD SUBSCRIPTION.
- A window will pop up; in this window ensure to put a checkmark in front of the product you want the user to subscribe to. Once done press the Subscribe button.
At this point you have set up your API and can proceed with testing it. In order to test we will use the Azure API Management Developer Portal, and we will log on to it using the user account we set up previously.
The steps involved are listed below:
- First log out of the Azure API Management Administration Portal.
- Now log in using the email and password of the user you defined earlier.
- In the top menu, select APIS.
- Click on the API you created earlier.
- Click on the button "Open Console".
- A form will appear which allows you to send a message to the API. Follow the steps below to send a JSON formatted message to the API (a textual example of the resulting request follows these steps):
- A) From the dropdown select a subscription key (used to authenticate to the API)
- B) Add the following HTTP headers:
- Content-Type: application/json [indicates that the message you are about to send is formatted as JSON]
- MessageAction: NonExisting [this will ensure that the message ends up in our Azure Service Bus subscription named Archive, as this subscription is our catch-all]
- MessageId: 11111111-1111-1111-1111-111111111111
- MessageBatchId: SingleMessageID
- C) Add some sample JSON
- D) Press HTTP POST
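For reference, the raw request the console sends will look roughly like this (host name, API suffix, subscription key and body are placeholders):

POST https://{yourname}.azure-api.net/{api suffix}/ HTTP/1.1
Ocp-Apim-Subscription-Key: {your subscription key}
Content-Type: application/json
MessageAction: NonExisting
MessageId: 11111111-1111-1111-1111-111111111111
MessageBatchId: SingleMessageID

{ "sample": "some JSON payload" }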
- Now open up the Service Bus Explorer and connect to your Service Bus instance.
- Right click on the Archive subscription and select the option "Receive all messages".
- One message should be received, which should meet the following test criteria:
MY TEST RESULT: PASSED
- Now we will perform another test, but this time we will send an XML formatted message to the API.
- A) From the dropdown select a subscription key (used to authenticate to the API)
- B) Add the following HTTP headers:
- Content-Type: application/xml [indicates that the message you are about to send is XML]
- MessageAction: BusinessActivityMonitoring [this will ensure that the message ends up in our Azure Service Bus subscription named BusinessActivityMonitoring, and it will also end up in our Archive subscription (as it is our catch-all)]
- C) Add some sample XML
- D) Press HTTP POST
- Right click on the BusinessActivityMonitoring subscription and select the option "Receive all messages". One message should be received, which should meet the following test criteria:
- The message should be formatted in XML
- The following custom message properties should be available:
- MessageAction: BusinessActivityMonitoring
- MessageId: 00000000-0000-0000-0000-000000000000
- MessageBatchId: SingleMessageID
MY TEST RESULT: PASSED
- Now let's receive all messages from our Archive subscription (it should contain a copy of the previous message). The reason for this is that the Archive subscription is our catch-all subscription, and thus all messages sent to the topic end up in this subscription as well.
MY TEST RESULT: PASSED
- Ensure to document your API well; this makes life easier for the consumers.
- Using SAS tokens you can fairly easily integrate with Azure Service Bus Queues / Topics.
- If using a policy to set custom headers (which you use for setting the Authorization header), ensure to enclose the SAS token within a <![CDATA[ …… ]]> section.
- The Azure API Management key can be found on the developer page of the Azure API Developer Portal (e.g. https://{namespace}.portal.azure-api.net/developer).
- Code examples on how to consume an API are available from the Azure API Developer Portal, by clicking on the menu item APIS and then clicking on the API you want to test.
- Logging into the Azure API Management Administration Portal must be done using the Azure Management Portal.
- Azure API Management, in my opinion, could easily be used to virtualize your on-premise internet-facing web services (e.g. BizTalk-generated web services). This way you have one central place to govern and manage them.
I hope this walkthrough contributed to a better understanding of how we as integrators can leverage the Azure API Management service to expose Service Bus entities. Once you've grasped the concepts you could easily take it a step further and, for example, involve Azure BizTalk Services to process messages from certain subscriptions, do some transformations and deliver them to, for example, another Azure Service Bus Topic; the topic endpoint could then be incorporated into a new API which would allow your API consumers to retrieve their processed messages.
Ah well you get the idea, the possibilities are almost endless as Azure delivers all these building-blocks (services) which enable you to create some amazing stuff for your customers.
I hope to publish a new post in the coming weeks; I’ve already worked out a scenario on paper which involves Azure API Management, Azure Service Bus, Azure BizTalk Services, Azure File Services, and an Azure Website; however implementing it and writing it down might take some time and currently my spare-time is a bit on the shallow side. Ah well, just stay tuned, check my Twitter and this blog.
Until next time!
Cheers
René
by Rene Brauwers | Jul 30, 2013 | BizTalk Community Blogs via Syndication
So you want to sysprep your BizTalk Server 2013 development environment? Well, this might just be your lucky day. This post will detail the steps required for you to create your sysprepped BizTalk Server 2013 development environment.
Update 2! Due to several requests, I've created a zip file containing all required script files and a document highlighting the required steps in order to sysprep a BizTalk Server 2013 (Standalone) image. You can download it here
Update! I've made a small change to the sysprep.xml file, fixing an issue where you would be prompted to agree to changing the PowerShell execution policy; it now includes the additional switch -force, which takes care of this prompt. Besides this I added an additional script which will remove the product key and prompt you at the end to enter a new product key. Please note that this last functionality is, by default, commented out in the sysprep.xml file
Ingredients
There is actually only one prerequisite, and it is that you have already 'built' your own (Single-Server) BizTalk Server 2013 development environment; in short a clean, lean and mean VM which contains:
- Windows Server 2012 installed and configured
- SQL Server 2012 installed and configured
- Visual Studio 2012 + updates
- BizTalk Server 2013 installed and configured
- Other tools you might find handy
In case you don't have an environment available or need assistance creating one, I suggest checking out this link, which will redirect you to Sandro Pereira's BizTalk Blog.
BizTalk Server Un-configuration
This chapter will explain how to export the current BizTalk Server configuration settings; once these settings have been exported we will proceed with a complete un-configuration of BizTalk Server (including dropping all BizTalk databases). If you already know how to do this, feel free to skip this chapter; if not, well, read on…
1. Start your VM and log on with your Administrator Account
2. Once logged on, press the Windows key, type "BizTalk Server Configuration", select it and press Enter
3. The BizTalk Server Configuration screen will pop up. Now choose the option Export Configuration and save the resulting xml file to your desktop (I named it: BizTalk_Standalone_Configuration.xml)
4. Next step is to unconfigure all installed BizTalk Features. In order to do so click on ‘Unconfigure Features’. Select all features and once done press ok and follow the subsequent instructions displayed
5. Now wait, and once completed press Finish and close the Configuration Wizard
6. Now press the Windows key and type 'SQL Server Management Studio'. Select it and press Enter.
7. Now connect to your LOCAL SQL Server using ‘Database Engine’ as Server-Type
8. Expand databases, and notice the databases related to BizTalk
9. Now delete all the BizTalk related databases one by one. This is done by right-clicking on one of the databases
“(in my case: BAMArchive, BAMPrimaryImport, BAMStarSchema, BizTalkDTADb, BizTalkMgmtDb, BizTalkMsgBoxDb, BizTalkRuleEngineDb, SSODB ) “
10. A popup will show ensure you’ve checked the ‘Close existing connections’ checkbox and then press OK which will result in the database removal.
11. Now repeat step 9 and 10 for the other BizTalk databases
12. Now that all databases have been deleted, open the Security folder and its Logins folder
13. Now delete all the BizTalk related user groups and accounts one by one. This is done by right-clicking on a group/(service) account and selecting delete
in my case: BizTalk Application Users, BizTalk Isolated Host Users, BizTalk Server Administrators, BizTalk Server B2B Operators, BizTalk Server Operators, SSO Administrators, svc-bts-bam-ws and svc-bts-bre
14. A popup will show; press OK, which will remove the group or user
15. Now repeat steps 13 and 14 for the other BizTalk Groups and Service Accounts
16. Now that all groups and accounts have been deleted, open the SQL Server Agent node and its Jobs folder
17. Now delete all the BizTalk related jobs one by one. This is done by right-clicking on a job and selecting delete
in my case: BizTalk Backup Server, CleanupBTFExpiredEntriesJob_BizTalkMgmtDb, DTA Purge and Archive, MessageBox_DeadProcesses_Cleanup_BizTalkMsgBoxDb, MessageBox_Message_Cleanup_BizTalkMsgBoxDb, MessageBox_Message_ManageRefCountLog_BizTalkMsgBoxDb, MessageBox_Parts_Cleanup_BizTalkMsgBoxDb, Monitor BizTalk Server, Operations_OperateOnInstances_OnMaster_BizTalkMsgBoxDb, PurgeSubscriptionsJob_BizTalkMsgBoxDb, Rules_Database_Cleanup_BizTalkRuleEngineDb, TrackedMessages_Copy_BizTalkMsgBoxDb
18. A popup will show; press OK, which will remove the job in question
19. Now repeat steps 17 and 18 for the other BizTalk Jobs
20. Now disconnect
21. Now press the Connect tab, select Analysis Services and log on.
22. Expand the folder Databases and right click on the BizTalk related Analysis Dbs and select delete
“(in my case: BAMAnalysis)”
23. A popup will show ensure you’ve checked the ‘Continue deleting objects after error’ option and then press OK which will result in the database removal.
24. Now close SQL Server Management Studio
Hacking the BizTalk Export File
This chapter will explain how to manually change the settings in the exported BizTalk configuration file. If you already know how to do this, feel free to skip this chapter; if not, well, read on…
1. Before you un-configured your BizTalk Server you exported the configuration settings; well, now it's time to open this file and make some modifications to it. Below you will see a 'large' screenshot highlighting the pieces which should be modified, and below the image the changes are highlighted per line number (or alternatively download my exported configuration settings).
Line 12: Set the SSO backup file location
Line 15: Set the Password used to protect the secret backup file
Line 18: Set the Password used to protect the secret backup file
Line 21: Set a reminder for the password
line 27: Should be already filled in with the username used for the SSO Service
line 28: Ensure this field is empty
line 29: Enter the password belonging to the service account mentioned in line 27
line 32: Ensure the Server field contains the value .
line 41: Ensure the Server field contains the value .
line 47: [CREATE OR JOIN BIZTALK GROUP] Ensure that the attribute Default has the value “Create”
line 48: [CREATE OR JOIN BIZTALK GROUP] Ensure that the Answer element contains the attribute SELECTED with value YES
line 50: Ensure the Server field contains the value .
line 57: Ensure the Server field contains the value .
line 73: [CREATE OR JOIN BIZTALK GROUP] Ensure that the Answer Element does not contain an attribute SELECTED
line 94: Should already be filled out with the username used for the NT Service for the In-process Host Instance
line 95: Ensure this field is empty
line 96: Enter the password belonging to the service account mentioned in line 94
line 118: Should already be filled out with the username used for the NT Service for the BizTalk Isolated Host Instance
line 119: Ensure this field is empty
line 120: Enter the password belonging to the service account mentioned in line 118
line 128: Ensure the Server field contains the value .
line 135: Should already be filled out with the username used for the NT Service for the BizTalk Rule Engine
line 136: Ensure this field is empty
line 137: Enter the password belonging to the service account mentioned in line 135
line 143: Ensure the Server field contains the value .
line 150: Ensure the Server field contains the value .
line 159: Ensure the Server field contains the value .
line 166: Ensure the Server field contains the value .
line 175: [BAM ALERTS CONFIG] Ensure that the attribute Default has the value “No”
line 176: [BAM ALERTS CONFIG] Ensure that the Answer Element does not contain an attribute SELECTED
line 195: [BAM ALERTS CONFIG] Ensure that the Answer element contains the attribute SELECTED with value YES
line 197: [BAM TOOLS] Ensure that the attribute Default has the value “No”
line 198: [BAM TOOLS] Ensure that the Answer element contains the attribute SELECTED with value YES
line 199: [BAM TOOLS] Ensure that the Answer Element does not contain an attribute SELECTED
line 204: Should already be filled out with the username used for the NT Service for the BAM Management Web Service User
line 205: Ensure this field is empty
line 206: Enter the password belonging to the service account mentioned in line 204
line 209: Should already be filled out with the username used for the NT Service for the BAM Application Pool Account
line 210: Ensure this field is empty
line 211: Enter the password belonging to the service account mentioned in line 209
2. Once you’ve made the changes, save the file.
3. Now open the BizTalk Configuration Tool
4. Fill out the initial details (Caution: use custom configuration) and once done press Configure
5. Now click on Import Configuration
6. Select the 'modified file' and press OK. The configuration will now be imported, and once done you should see a message box stating that same fact
7. Next step is to validate all settings (see screenshots)
8. Now Press Apply Configuration to test if everything works
9. If it worked then hurrah! Now go and un-configure everything once again 🙂 (see the beginning of this blog post; yes, this includes SQL Server etc.)
10. If it failed, bummer! Well, go and un-configure, make modifications to the import file and retry steps 1 to 10
Golden Image
Well, we are almost done; however now we need to take care of a mechanism which will ensure that our BizTalk can be automagically configured, and for this we will use the all so mighty and sometimes underrated Windows Task Scheduler!
Well the following trick I picked up in this blog-post, and I used the same trick in my last blog-post. This trick involves the following actions which you need to perform
1. Download and unpack this archive
2. Copy the contents to C:\Program Files (x86)\Microsoft BizTalk Server 2013
3. Create the following directory: C:\Scripts
4. Download and unpack this archive and copy the contents to the newly created directory C:\Scripts
5. Now press the Windows key, type 'Scheduled Tasks' and double-click the application
6. The Task Scheduler will now open.
7. From the menu-bar select Action -> Import Task…
8. Now in the file-picker browse to C:\Scripts and select the following file to import: "BizTalk Configuration Scheduled Task"
9. The scheduled task will open, now click on the button “Change User or Group…:”
10. Enter the administrator user name and press ok
11. Ensure the checkbox “Run with highest privileges” is checked and the administrator user name is used when the task is run and press Ok
12. Now enter the password for the administrator user.
13. Close the Task Scheduler
14. Congratz, your 'Golden Image' is ready. Now might be a good time to shut down the server, copy this VHD and store it in a safe place
Sysprep
Now we are ready for our last step, so after you have made a copy of your ‘Golden Image’ it is time to fire-up your VM and perform these last few steps.
1. Once the VM has started and you have logged on as Administrator, go to C:\Scripts
2. Open the sysprep.xml file and make a few changes. Once more I’ve highlighted the lines which require a change.
Line 22: Replace with a valid Windows Server 2012 product Key!
Line 23: Replace with your Organization name
Line 24: Replace with the registered owner
Line 29: Enter the name of your Time Zone
Line 64: Replace with your Organization name
Line 65: Replace with the registered owner
Line 68: Replace with the administrator user password
Line 74: Replace with the administrator user password
Line 80: Replace with the administrator username
Line 99: Replace with the current computer name
Line 125: Enter the name of your Time Zone
3. Now save your file, go back to the C:\Scripts directory and click on startSysprep.bat
4. Your image will be SYSPREPPED and once done your VM will shut down
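For the curious: the batch file essentially hands the answer file to the built-in Windows sysprep tool; the command it runs boils down to something like the line below (exact paths assumed, check the actual script):

%WINDIR%\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:C:\Scripts\sysprep.xml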
Enjoy!
by Rene Brauwers | Apr 8, 2013 | BizTalk Community Blogs via Syndication
Note: Please be advised that this post most likely contains grammar mistakes. I am sorry for that, but I was in a hurry and wanted to share this with you all. Anyway; enjoy…
There was this BizTalk Integrator who was wondering how hard it would be to get LinkedIn profile information and store it in Microsoft Dynamics CRM 2011. He didn't waste a lot of time, and within no time he produced a diagram showing on a high level how he intended to tackle this.
1. Redirect user to LinkedIn to get Authorization Code.
2. Authorization code is sent back.
3. Request OAuth 2.0 Access Token using the obtained authorization code.
4. OAuth Access Token is sent back.
5. Establish Service Bus connection. Create a Topic, Subscription and Filters if they don't exist. Create a message containing the OAuth Access Token and set brokered message properties. Once done, publish the message.
6. BizTalk retrieves the message from the Topic Subscription.
7. BizTalk creates a GET request including the OAuth token and sends it to the LinkedIn API to request the user's information.
8. BizTalk receives a response from LinkedIn, and creates a new Atom Feed message containing the Contact Properties required to add a new contact in CRM.
9. BizTalk sends a POST request to the CRM 2011 XRMServices/2011/OrganizationData.svc/ContactSet/ endpoint such that a new Contact is created.
Okay, all kidding aside; in this blog post I will dive deeper into the above mentioned scenario, and will highlight the ins and outs of steps 1 to 9. So let's get started.
For all those not interested in reading about it, but who would rather see the pre-recorded demo: click here.
One of the first challenges we have to tackle consists of obtaining a token from LinkedIn which allows us to perform authorized requests against a certain user profile. In English: in order to get LinkedIn profile information we need to acquire approval from the profile owner. Once we have obtained this permission we need to obtain an OAuth 2.0 token, which can then be used to authorize our requests to the LinkedIn API.
So how did I solve this 'challenge'? Well, I built this small website based on MVC 4* and hosted it for free in Azure. This website uses the default MVC 4 template, and all I did was change some text and add a new controller and subsequent view. The controller, which I gave the self-explanatory name 'LinkedInController', takes care of all logic required (getting the OAuth token and sending it to Windows Azure Service Bus).
Read all about MVC here, and in order to learn more about MVC and other cool stuff go to Pluralsight)
Below a more detailed overview, on how I took care of steps 1 to 4.
Note: I gathered the information on how to authenticate using OAUTH 2.0 from this page
1) Obtaining profile user permission.
On the default action (Index in my case), I perform a good old-fashioned redirect to a LinkedIn site in order to obtain the user's authorization (permission) to query his/her LinkedIn profile. On this site the user is presented with the following screen.
2) Obtaining LinkedIn's authorization code.
Once the user has allowed access, he or she will be redirected back to the MVC website. This redirection request from LinkedIn contains an Authorization_Code parameter and points to a specific Action within the LinkedIn Controller. This action will extract the Authorization Code, and this code will be stored internally.
3) Request OAuth 2.0 token.
Next an HTTP POST to the LinkedIn API will be executed, containing the user's authorization code and some other vital information.
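This is a standard OAuth 2.0 authorization-code exchange; the POST looks roughly like the example below (endpoint as documented by LinkedIn at the time; the client id/secret values are placeholders):

POST https://www.linkedin.com/uas/oauth2/accessToken HTTP/1.1
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code&code={authorization_code}&redirect_uri={redirect_uri}&client_id={api_key}&client_secret={secret_key}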
4) Process JSON response.
As a result we retrieve a JSON response which is serialized back into a custom object.
5) Publish message to Windows Azure Service Bus.
Once we've filled our custom object, I send it to a custom-built class*, which then sends this object to a Windows Azure Topic.
* The custom class I've built is my own implementation of the REST API for creating TOPICs, Subscriptions, Filters and sending messages to Windows Azure Service Bus (see Service Bus REST API Reference). I built this class a while back, mainly so that I can reuse it in my Demo Windows Phone Apps and Demo Windows Store Apps. I will not list the code here; however, if you are interested just send me an email and I'll send it to you.
The above mentioned class requires that some specific information is passed in; this is explained below.
Payload
The first required parameter is of type Object and should contain the message object which is to be sent to service bus.
Brokered message Properties
This parameter contains the key and value pairs adding additional context to the message sent to service bus. In my demo I've added the following key value pairs to the message. The importance of these properties will be made clear in a little while.
Source=linkedIN
Method=select
Object=personalData
Auth=[DYNAMIC]
Note: Auth will be replaced in code with the actual OAuth 2.0 token received earlier and is added to the context of the message. Later on this is used by BizTalk (but more on this later). A sketch of the equivalent publish call follows below.
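To illustrate what my custom class effectively does when publishing: my implementation talks to the Service Bus REST API, but the snippet below shows the rough equivalent using the standard Microsoft.ServiceBus client library (variable names are illustrative):

// Publish the token object with the brokered message properties listed above
var client = TopicClient.CreateFromConnectionString(connectionString, "SocialAccessTokens");
var message = new BrokeredMessage(apiAccessInformation); // serializes the custom object
message.Properties["Source"] = "linkedIN";
message.Properties["Method"] = "select";
message.Properties["Object"] = "personalData";
message.Properties["Auth"] = oauthAccessToken; // the OAuth 2.0 token obtained earlier
client.Send(message);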
Service bus name space
One of the required parameters is the service namespace which is to be used. In my demo I created a namespace called brauwers
A service namespace provides a scoping container for addressing Service Bus resources within your application.
Service bus Management Credentials
Two other parameters which are required consist of the so-called issuer-name and issuer-token. These credentials are required if we want to perform actions on the service bus resource (like posting messages, or performing management operations). My component requires that you supply the owner (or full access) credentials, as this component will try to perform some management operations (creating subscriptions) which need these elevated privileges.
Service bus Topic name
This parameter should consist of the name of the Topic to which the message will be published. In my demo I’ve chosen the value “SocialAccessTokens”.
Note: in case the topic does not exist the custom class will create it (hence the reason we need to use an issuer who has elevated (create) privileges).
Service bus Topic subscription
This parameter holds the desired topic subscription name, which we want to create. In my demo I’ve chosen the value “LinkedINSubscription”
Note: Eventually a filter will be added to this subscription, such that messages matching this filter or rule (see brokered message properties mentioned earlier) will be 'attached' to this subscription. The importance of this will be made clear further down in this blog post when we explain how BizTalk is configured. On another note: in case this subscription does not exist the custom class will create it (hence the reason we need to use an issuer who has elevated (create) privileges).
Service bus Topic Filter name
This parameter contains the name to use for the filter, which will be attached to the topic subscription.
Service bus Topic Filter expression (Sqlfilter)
This parameter is actually pretty important as this parameter defines the rule which applies to the subscription it is attached to. The filter I used in my example –> Source = ‘linkedIN’ (keep in mind that we set a brokered message property earlier containing these values)
Note: the filter in my example should be interpreted as follows: messages sent to the topic (SocialAccessTokens) which contain a brokered message property key named Source, and of which the value matches the string "linkedIN", should be 'attached' to the subscription "LinkedINSubscription".
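For illustration only: the management side (create the topic, subscription and filter if missing) is roughly equivalent to the snippet below using the standard Microsoft.ServiceBus SDK; my class does the same through the REST API. The names match my demo values:

var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
if (!namespaceManager.TopicExists("SocialAccessTokens"))
    namespaceManager.CreateTopic("SocialAccessTokens");
if (!namespaceManager.SubscriptionExists("SocialAccessTokens", "LinkedINSubscription"))
    namespaceManager.CreateSubscription("SocialAccessTokens", "LinkedINSubscription",
        new SqlFilter("Source = 'linkedIN'")); // only matching messages land in this subscription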
The previous steps ensured that a message was published containing the OAuth 2.0 token required to perform LinkedIn profile requests on behalf of the profile owner. The next steps should result in a new contact in Dynamics CRM 2011 based upon the data retrieved from LinkedIn. These steps involve BizTalk and some of its 'new' (or let's reformulate this: out of the box) capabilities, consisting of the SB-Messaging adapter and the WCF-WebHttp adapter.
So let’s get started.
6) Setting up BizTalk such that it connects to Windows Azure Service Bus.
At this point a message has been delivered to a Windows Azure Service Bus topic. However now we need to pull this message into BizTalk and for this to work we need to perform a few steps like
- a) defining a property schema used to capture the custom brokered message properties.
- b) define a schema which reflects the message sent to Windows Azure Service Bus
- c) deploy the artifacts
- d) configure a receive port and location to get the service bus message(s) in BizTalk
These steps will be explained further below
Defining a property schema used to capture the custom brokered message properties.
Note: Before proceeding ensure you’ve created a new solution and BizTalk project and don’t forget to sign it with a strong key and give it an application name.
In step 5 we added a few custom properties to the message context, consisting of the keys "Source, Method, Object and Auth". In my example I want to use these properties as Promoted Properties within BizTalk, such that I can do some nice stuff with them (more about this in step 7). So in order to cater for this we need to create a new Property Schema in BizTalk, and all this schema needs to contain are elements which are named exactly like the brokered message property key names.
Define a schema which reflects the message send to windows azure service bus
In order for BizTalk to recognize a message, its message definition needs to be known internally. In English: if we want to work with the message as sent to Windows Azure Service Bus in BizTalk, we need to ensure that BizTalk can recognize this message, and thus we need to define it. How this is done is explained below.
- In my example I send a message to the service bus; this message is actually a representation of an internally defined class. This class is defined as mentioned below:
- So how do we translate this message to a BizTalk schema? Well, we have a few options, and I'll use the not so obvious approach: the manual one. In order to do so, we need to create a new schema in BizTalk which has the following properties and structure:
- Target Namespace should match the DataContract Namespace, thus in my case: http://brauwers.nl/socialauthenticator/linkedin/v1.0/
- The Main Root should match the class name, thus in my case: ApiAccessInformation
- The main root should contain the following elements, all of them of type string:
1) apiName
2) apiClientId
3) apiClientSecret
4) apiAccessScope
5) authorizationKey
The end result should look as depicted in the image below
Deploy the artifacts
So far so good, but first let’s deploy the BizTalk project, and once it is deployed we can go ahead and start configuring it. (I named my BizTalk Application CRM2011)
Set up BizTalk Receive Port and Receive Location
So far so good, but now we need to put everything together such that we can configure BizTalk to read the Service Bus Topic Subscription and process any messages published to it. How this is done is explained below.
- Go to your just deployed BizTalk application and create a one-way receive port and name it appropriately.
- Create a new receive location, choose the SB-Messaging adapter, select your receive host and select the XML_Receive pipeline component.
- Configure the SB-Messaging adapter. Enter the appropriate subscription url. In my case this would be: sb://brauwers.servicebus.windows.net/socialaccesstokens/subscriptions/LinkedINSubscription
sb://[NAMESPACE].servicebus.windows.net/[TOPIC_NAME]/subscriptions/[SUBSCRIPTION CONTAINING MESSAGES]
- Fill out the required authentication information, consisting of the Access Control Service uri and a valid Issuer Name and Issuer Key.
- Select the property schema to use such that the Brokered Message Properties defined earlier are treated as Promoted Properties within BizTalk.
- Press OK
7) Setting up BizTalk such that the LinkedIn API can be queried.
At this point we've set up BizTalk such that it can retrieve the messages published to Windows Azure Service Bus and treat the brokered message properties as Promoted Properties. This section will now list and explain the next steps needed in order to query the LinkedIn API using the WCF-WebHttp adapter.
Creating a new send port leveraging the new wcf-webhttp adapter
So how do we configure the wcf-webhttp adapter such that we can send a GET request to the linkedIn API, well the steps below will show how this is achieved.
- Create a new static solicit-response port and name it appropriately, choose the WCF-WebHttp adapter, choose your preferred send handler and select XML_Transmit as Send Pipeline and XML_Receive as Receive Pipeline
- Now click on configure, such that we can configure the WCF-WebHttp adapter.
- First configure the Endpoint address; I’ve used https://api.linkedin.com/v1/people
- Now we set the HTTP Method and URL Mapping. For this I use the BtsHttpUrlMapping element, which contains an operation whose method is set to GET, plus the rest of the address Uri, which consists of a list of profile fields we want to select and an oauth2 access token whose value we will set using variable mapping (step 5 below)
- In our BtsHttpUrlMapping we have set a dynamic parameter which was named {Auth}. The value we want to assign to this will be the OAuth 2.0 token which is available to us as a promoted property (see explanation step 6, "Setting up BizTalk such that it connects to Windows Azure Service Bus"). So for Property Name we select the element name containing the promoted property value (in my case Auth), and for Property Namespace we enter the namespace name of our created property schema. (A textual example of the mapping follows this list.)
- Once done press OK.
- Now select the security tab and ensure that the security mode is set to Transport and the Transport client credential type is set to NONE
- Now go to the Messages Tab and ensure that the following Outbound Http Headers are set
Accept: application/atom+xml;
Content-Type: application/atom+xml;charset=utf-8
Accept-Encoding: gzip, deflate
User-Agent: BizTalk
- Make sure that the outbound message is suppressed, by indicating the GET verb name
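Textual example of the resulting HTTP Method and URL Mapping configuration (the profile field list is abbreviated here, the operation name is illustrative, and {Auth} is the variable-mapped promoted property):

<BtsHttpUrlMapping>
    <Operation Name="GetProfile" Method="GET" Url="/~:(first-name,last-name,email-address)?oauth2_access_token={Auth}" />
</BtsHttpUrlMapping>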
8) Dealing with the response message received from LinkedIn.
Well, at this point we have sent a GET request to LinkedIn; however, we haven't set up the logic yet to process the response message sent back to BizTalk by the LinkedIn API. So let's dive into it straight away.
Figuring out the response format from LinkedIn
In order to figure out the response message which is sent back to BizTalk, I have used Fiddler and composed a GET request as it would be sent by BizTalk.
- Open Fiddler, go to the Composer tab, and use the following Uri, replacing [TOKEN] with the OAUTH token (note: you can get your token by going to the web application, but be aware that if you allow access to your profile, you will be part of my DEMO and as a result your profile information will be stored in my Demo CRM environment)
https://api.linkedin.com/v1/people/~:(first-name,last-name,email-address,date-of-birth,main-address,primary-twitter-account,picture-url,headline,interests,summary,languages,skills,specialties,educations,certifications,courses,three-current-positions,three-past-positions,following,job-bookmarks,phone-numbers,bound-account-types,twitter-accounts,im-accounts)?oauth2_access_token=[TOKEN]
- Add the following request headers
Accept: application/atom+xml;
Content-Type: application/atom+xml;charset=utf-8
Accept-Encoding: gzip, deflate
User-Agent: BizTalk
- Press Execute, and double click on the 200 Response, and select the RAW tab
- Copy and paste the response message, and store it as an xml file
- Now go back to your BizTalk Solution, right click on your project and select Add new generated item
- Select Generate Schema
- Select Well-Formed XML and browse to your saved XML file containing the response and press OK
- A new schema should be created and it should look similar to the one displayed below
9) Transforming the LinkedIn Profile RESPONSE and constructing a CRM POST request.
Well at this point we have the response definition, and now we are ready for constructing our POST request which eventually will ensure that we store the data in CRM. So let’s get started with these final pieces.
Figuring out the request message structure needed for inserting data into CRM
In order to figure out the request or payload message which is sent as body in the POST request to CRM, I have used the CRM Codeplex project CRM ODATA Query Designer. Below are the steps I performed to obtain the correct XML structure (note: you need to have installed the Codeplex project).
- Open up CRM in your browser and select the CRM Odata Query Designer
- Select ContactSet
- Press Execute and switch to the results (ATOM) tab
- Now look for the first ID and copy the url
- Paste this url in your browser and then view the source
- Copy the result xml, and store it locally
- Open the xml file and let's clean it up, such that only those fields are left which matter to you. See below my cleaned-up xml structure (please note the elements named new_; these are custom fields I've added)
- Now open up your BizTalk Solution, right click and select the “Add new generated item” like you did before and generate a schema using the well-formed xml.
- An xsd schema has been created, and it should look similar to the one depicted below. Note that I manually added the min and max occurrence
Create a BizTalk Mapping which creates the CRM Insert Request
At this point we can create a mapping between the LinkedInResponse message and the CRM Insert message; I’ve done so and named this map “LinkedInPersonResponse_TO_AtomEntryRequest_CrmInsert.btm”
Build the BizTalk solution and Deploy
Now that we have all artifacts ready, we can deploy our BizTalk Solution once again and finish up the last part of our BizTalk configuration.
Creating a new send port leveraging the new WCF-WebHttp adapter to send a POST request to CRM
Our final step before testing consists of adding one last send port which leverages the WCF-WebHttp adapter once more; but this time we will be performing an HTTP POST to the CRM endpoint (…XRMServices/2011/OrganizationData.svc/ContactSet/) which will result in a new contact record in CRM 2011. Below I'll show you how to configure this last send port.
- Create a new static One-Way port and name it appropriately, choose the WCF-WebHttp adapter, choose your preferred send handler and select XML_Transmit as Send Pipeline
- Now click on configure, such that we can configure the WCF-WebHttp adapter.
- First configure the Endpoint address, which should point to an address ending on OrganizationData.svc/ContactSet. The full endpoint I've used is http://lb-2008-s07/Motion10/XRMServices/2011/OrganizationData.svc/ContactSet/
- Now we set the HTTP Method and URL Mapping, this is pretty straightforward as we only have to add the verb POST.
- Now select the security tab and ensure that the security mode is set to TransportCredentialOnly and the Transport client credential type is set to NTLM
- Now go to the Messages Tab and ensure that the following Outbound Http Headers are set
User-Agent: BizTalk
Content-Type: application/atom+xml
- Once done press OK
- Now select the outbound maps section, and select the map you’ve created and deployed earlier.
- Last but not least, add a Filter stating "BTS.MessageType = person" such that we can route the LinkedIn Response message (Person) to this send port.
- At this point, we are done and we can now start the application and see if everything works.
Note: in order to be able to add records to CRM, you need to ensure that you've added the Host Instance User Account used for the selected Send Handler in CRM and granted this account the correct permissions. Failure to do so will result in a 400 Bad Request, which will initially throw you off.
Below I’ve added a few screenshots
The account svc-bts-trusted is the account used by the LowLatency_Host which I use to connect to the XRM service from CRM.
Subsequently I’ve created an account in CRM2011 for this user
and added this account (for testing purposes) to the System Administrator role.
Click here to see a video showcasing the above-mentioned scenario.
by Rene Brauwers | Mar 23, 2013 | BizTalk Community Blogs via Syndication
As most of you, I downloaded the final pieces of BizTalk Server 2013 as soon as it was available on MSDN. Once I downloaded it, I decided to setup a clean BizTalk 2013 Development Machine. This post will highlight the issues I encountered during installation and configuration and how I resolved them.
First off, a brief overview of my (single-server) machine configuration:
* Windows Server 2012
* SQL Server 2012
* One instance for the BizTalk MessageBoxDb
* One instance for all other BizTalk databases
* Visual Studio 2012
* BizTalk Server 2013
* Server memory: 8 GB
* Disk space: 40 GB
Below are the issues I encountered and how I resolved them.
Issue #1: Installing BizTalk Server
Error encountered: Missing file ‘MSVCP100.dll’
Resolution
- Download and install the Microsoft Visual C++ 2010 Redistributable Package (both x64 and x86)
Issue #2: BizTalk Configuration – BAM Tools
Error encountered: Could not install BAM Tools
Resolution
- Install SQL Server 2005 Notification Services
x64 – http://download.microsoft.com/download/4/4/D/44DBDE61-B385-4FC2-A67D-48053B8F9FAD/SQLServer2005_NS_x64.msi
x86 – http://download.microsoft.com/download/4/4/D/44DBDE61-B385-4FC2-A67D-48053B8F9FAD/SQLServer2005_NS.msi
- Set up the database to use Database Mail
- Configure BAM Alerts
Issue #3: BizTalk Configuration – BAM Portal
Error encountered: Could not install BAM Portal -> error with regard to the “BAM Management Web Service User”
Error thrown: “Attempted to read or write protected memory. This is often an indication that other memory is corrupt.”
Actual exception: Log indicated ‘Cannot alter the role ‘NSSubscriberAdmin’, because it does not exist or you do not have permission.’
Resolution
- Manually add the NSSubscriberAdmin database role to the BAM Alerts Application database, as sketched below
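A minimal sketch of that manual fix, assuming your BAM alerts database carries its default name BAMAlertsApplication, would be the following T-SQL:

```sql
-- Assumed default database name; adjust if your BAM alerts database is named differently
USE [BAMAlertsApplication];
GO

-- Recreate the role the BAM Portal configuration expects to find
CREATE ROLE [NSSubscriberAdmin] AUTHORIZATION [dbo];
GO
```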
End Result
Well, this was a quick post, but I hope it might help you out if you encounter the same issues.
Cheers
René
by Rene Brauwers | Jan 17, 2012 | BizTalk Community Blogs via Syndication
First things first; at this point in time I assume
- you’ve read the previous post
- downloaded and installed the CRM2011 SDK
- have a working CRM2011 environment at your disposal
- have an account for CRM2011 with sufficient rights (I’d recommend System Administrator)
- have Visual Studio 2010 installed
- downloaded and extracted my Visual Studio example solution
So you’ve met all the requirements mentioned above? Good; let’s get started.
Note: all code should be used for demo/test purposes only! I did not intend it to be production grade, so if you decide to use it, don’t use it for production purposes!
Building your Custom Workflow Activity for CRM2011
Once you’ve downloaded and extracted my visual studio example solution, it is time to open it and fix some issues.
Ensure your references are correct
Go to the Crm2011Entities project, expand the References folder and remove the following two references
Once done, we are going to re-add these references; so right click on the References folder of the Crm2011Entities Project and click ‘Add Reference’
Now click on the ‘browse’ button, in the add reference dialog window
Now browse to your CRM 2011 SDK bin folder (in my case: B:\Install\Microsoft CRM\SDK CRM 2011\bin) and select the following two assemblies:
- microsoft.xrm.sdk
- microsoft.xrm.sdk.workflow
Now repeat the above-mentioned steps for the other project.
Generate a strongly typed class of all your existing CRM entities.
Open up the Crm2011Entities project, and notice that it does not contain any files except a readme.txt file.
Q&A session
Me: Well let’s add a file to this project, shall we?
You: Hmmm, what file you ask?
Me: Well this project will hold a class file which contains all the definitions of your CRM 2011 Entities.
You: Oh no, do I need to create this myself?
Me: Well lucky you, there is no need for this.
You: So how do I create this file then?
Me: Well just follow the steps mentioned below
So let’s fix this, and open up a command prompt with administrator privileges.
Now navigate to your CRM 2011 SDK folder (in my case this would be: B:\Install\Microsoft CRM\SDK CRM 2011\bin)
Note: Before you proceed, ensure that you know the url of the CRM2011 OrganizationService. Just test it by simply browsing to this address, and if everything goes right you should see the following page:
Now type in the following command (and replace the values between <….> with your values (see readme.txt)):
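For reference, a sketch of such a call, assuming the SDK’s CrmSvcUtil.exe code-generation tool with placeholder values, looks like this:

```
CrmSvcUtil.exe /url:http://<yourcrmserver>/<yourorganization>/XRMServices/2011/Organization.svc /out:c:\Temp\Crm2011Entities.cs /namespace:Crm2011Entities
```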
Once completed, you should be presented with the following output:
The actual file should be written to the location you set; in my case this is: c:\Temp
Once the actual class has been generated, open Visual Studio and right click on the CRM2011Entities project and select ‘Add Existing Item’
Browse to the directory in which the generated class was saved, and select the generated class.
At this point you should be able to compile the complete solution, so go ahead and do so.
Note: the source code includes comments which should be self-explanatory.
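To give you a feel for the shape of that code before you open the solution, below is a minimal sketch of a CRM 2011 custom workflow activity. The input argument names mirror the settings we will configure in the workflow designer later on; the body is an illustrative assumption, not the actual example source:

```csharp
using System.Activities;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;
using Microsoft.Xrm.Sdk.Workflow;

public class SendCrmEntityToEndPoint : CodeActivity
{
    // Input arguments surfaced in the CRM workflow designer
    [Input("Export to disk")]
    public InArgument<bool> ExportToDisk { get; set; }

    [Input("EndPoint location")]
    public InArgument<string> EndPointLocation { get; set; }

    protected override void Execute(CodeActivityContext executionContext)
    {
        // Obtain the CRM workflow context and an organization service for the calling user
        IWorkflowContext context = executionContext.GetExtension<IWorkflowContext>();
        IOrganizationServiceFactory factory = executionContext.GetExtension<IOrganizationServiceFactory>();
        IOrganizationService service = factory.CreateOrganizationService(context.UserId);

        // Retrieve the entity that triggered the workflow
        Entity entity = service.Retrieve(context.PrimaryEntityName, context.PrimaryEntityId, new ColumnSet(true));

        // ...serialize the entity and either write it to EndPointLocation on disk
        // or POST it to the configured web service endpoint
    }
}
```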
Making the custom workflow activity available in CRM2011.
So you’ve successfully compiled the solution; what’s next? Well, now it’s time to import this custom-created activity into CRM2011.
In order to do this we will use a nifty application which comes with the CRM2011 SDK. This application is called ‘pluginregistration’ and can be found in the subdirectory tools/pluginregistration of the CRM2011 SDK (in my case the location is
B:\Install\Microsoft CRM\SDK CRM 2011\tools\pluginregistration)
Note: As you will notice, only the source code of the pluginregistration tool is available, so you need to compile it in order to use it.
In the pluginregistration folder, browse to the bin folder and either open the debug or release folder and double click the application PluginRegistration.exe
You will be presented with the following GUI:
Now click on “Create New Connection”
Fill out the connection information, consisting of:
- Label: Friendly name of connection
- In my case I named it: CRM2011
- Discovery Url: Base url of CRM
- User Name: Domain Account with sufficient rights in CRM 2011
- In my case I used: LAB\Administrator
Once everything is filled in, press Connect and wait until the discovery is finished. Once finished, double click on the organization name (in my case: Motion10 Lab Environment) and wait for the objects to be loaded.
Once the objects have been loaded; you should see a screen similar to the one depicted here below:
Now let’s add our ‘Custom Activity or plugin’. Do this by selecting the ‘Register’ tab and clicking on ‘Register new Assembly’
The ‘Register New Plugin’ screen will popup and click on the ‘Browse (…)’ button.
Now browse to the bin folder of the example project “SendCrmEntityToEndPoint” (the one you compiled earlier), select the SendCrmEntityToEndPoint.dll file and click on ‘Open’
Once done, select the option “None” at step 3, select the option “Database” at step 4 and press the button ‘Register Selected Plugins’
Once done you should receive feedback that the plugin was successfully registered.
Creating a workflow in CRM2011 which uses the custom activity.
Now that we have registered our ‘plugin’, it is time to put it to action. In order to do so; we will logon to CRM2011 and create a custom workflow.
Once you’ve logged on to CRM2011, click on ‘Settings’
Now find the ‘Process Center’ section and click on ‘Processes’
In the main window, click on ‘New’
A dialog window will pop up; fill in the following details and once done press OK:
- Process Name: Logical name for this workflow
- I named it: OnAccountProspectStatusExport
- Entity: Entity which could trigger this workflow
- I used the Account Entity
- Category: Select WorkFlow
A new window will pop up; which is used to define the actual workflow. Use the following settings:
- Activate as: Process
- Scope: Organization
- Start When:
- check Record is created
- check Record fields change and select the field RelationShipType
- Now add the following step: Check Condition
- Set the condition to be
- Select “Account”
- Select Field “RelationshipType”
- Select “Equals”
- Select “Prospect”
- Now add our custom activity as the following step: SendCrmEntityToEndPoint
- Configure this activity like this:
- Export to disk: True
- EndPoint location: <Path where entity needs to be written>
- In my case I used: c:\temp (note this will be written to the C: drive on the CRM server!)
- Now once again add our custom activity as the following step: SendCrmEntityToEndPoint
- Configure this activity like this:
- Export to disk: False
- EndPoint location: Url path to your BizTalk webservice
- In my case I used the endpoint which points to my generated BizTalk web service (which we will cover in our next blog post)
Well at this point your workflow should look similar to this:
Now click on the ‘Activate’ button
Confirm the ‘Activation’
Save and close the new workflow
Test if everything works
So now it is time to see if everything works; in order to do so we will create a new Account, and if everything went OK we should see
- An Account.xml file somewhere on disk
- A Routing Failure error in BizTalk (as we send a document which is not recognized by BizTalk)
In CRM2011 click on the ‘Work Place’ button
Subsequently click on ‘Accounts’
And finally add a new ‘Account’, by clicking on ‘NEW’
A new window will pop-up; fill in some basic details
and don’t forget to set the Relationship type to ‘Prospect’
Once done click on the ‘Save & Close’ button
After a few minutes we can check both our output directory and the BizTalk Administration Console; we should notice that a file has been written to the output directory
and that we have a ‘Routing Failure’ error in BizTalk.
Closing Note
So this sums up our first part, in which we built our own workflow activity, imported it into CRM2011, constructed a workflow and, last but not least, saw that it worked.
Hope you enjoyed the read
Cheers
René
by Rene Brauwers | Jan 13, 2012 | BizTalk Community Blogs via Syndication
Well, it has been a while since my last post; however, as I stated in my first post, “I’ll only try to blog whenever I have something which in my opinion adds value”, and the topic I want to discuss today might just add that additional value.
Please note: This post will supply you with background information; the actual implementation of the solution will be covered in the next blog posts. However, the sample files mentioned in this post can already be downloaded.
Scenario sketch
Let’s say one of your customers is considering replacing their current CRM with Microsoft CRM2011.
Now one of the company’s business processes dictates that whenever a new customer or contact has been added to their CRM system, this data has to be sent to their ERP system in near-real-time. The customer or contact is then added to the ERP system and assigned a unique account number. This account number then needs to be sent back to the CRM system. As an end result, the corresponding customer in CRM2011 is updated with the account number from the ERP system.
Their current CRM solution already takes care of this functionality; however, this has been implemented as a point-to-point solution, and therefore replacing their current CRM with Microsoft CRM2011 would break this ‘integration point’. The customer is aware that in the long term it would be best to move away from these kinds of point-to-point solutions and move towards a Service Oriented Architecture.
At the end of the day it is up to you to convince your customer that it is no problem at all to set up a solution with Microsoft CRM2011 which includes an integration with their ERP system; and as you are aware of the fact that the customer wants to move to a Service Oriented Architecture, you see the opportunity fit to introduce the company to BizTalk Server 2010 as well.
So eventually you propose the following Proof of Concept scenario to your customer: you will show the customer that it is possible, with almost no effort, to build a solution which connects Microsoft CRM 2011 to their ERP system whilst adhering to generally known Service Oriented Architecture principles. Once you tell your customer that this POC does not involve any costs for them except time and cooperation, they are more than happy and agree to it.
Preparing your dish
In order to complete the solution discussed in this blog post you will need the following ingredients:
A test environment consisting of:
- 1 Windows Server 2008R2 which acts as Domain Server (Active Directory)
- 1 Windows Server 2008R2 on which Microsoft CRM2011 is installed and configured
- 1 Windows Server 2008R2 on which Microsoft BizTalk Server 2010 is installed and configured.
- One Development Machine with Visual Studio 2010 installed
Step 1: How do I get data out of Microsoft CRM2011?
Well, in order to get data (let me rephrase: an entity) out of CRM for our integration scenario, we will need to build a custom activity which can be added as a workflow step within CRM2011.
So which ingredients are required to do this?
- We need to download the CRM2011 SDK; so go and fetch it here
So what are we going to build?
- We will build a custom activity and deploy it to CRM2011 such that it can be used in a workflow, or download my example and install it
Step 2: How do I get data in my custom ERP?
Well, for this step I’ve built a custom application which will act as a stub for our custom ERP. This custom ERP system will be exposed by means of a WCF service.
So which ingredients are required to do this?
So what are we going to build?
- Well, you could build your own application, or download my example and install it; a sketch of such a stub is shown below.
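To give an idea of what such a stub could look like, here is a minimal sketch of a WCF contract and implementation; the service and operation names are illustrative assumptions, not necessarily the ones used in my example solution:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IErpCustomerService
{
    // Accepts a new customer and returns the ERP-assigned account number
    [OperationContract]
    string InsertCustomer(string firstName, string lastName, string emailAddress);
}

public class ErpCustomerService : IErpCustomerService
{
    private static int _lastAccountNumber = 1000;
    private static readonly object _syncRoot = new object();

    public string InsertCustomer(string firstName, string lastName, string emailAddress)
    {
        // Stub behaviour: hand out a sequential account number, just like a real ERP would assign one
        lock (_syncRoot)
        {
            return (++_lastAccountNumber).ToString();
        }
    }
}
```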
Step 3: How do I get data into CRM2011?
Well in order to get data into CRM; we will use the out of the box web services which are exposed by CRM2011.
So which ingredients are required to do this?
- Well if you have not yet downloaded the CRM2011 SDK; go and fetch it here
So what are we going to build?
- Well, in order to make our life easier we will build a proxy web service which talks directly to CRM2011; this way our integration efforts will go smoother.
Step 4: How do I hook it all together?
Well, for this part we will use BizTalk. BizTalk will receive the ‘Create Customer’ event from CRM, and subsequently logic will be applied such that this data is sent to the custom ERP application. Once the insert is successful, the ERP system sends back a customer account number, and we will then update the corresponding entity in CRM2011 with the account number obtained from the ERP system (a sketch of this update is shown after the list below).
So which ingredients are required to do this?
- Well if you have not yet downloaded the CRM2011 SDK; go and fetch it here 🙂
So what are we going to build?
- Well we need to make a customization to our Account Entity in CRM2011, to be more specific; we will add a custom field to the Account entity and call it Account Number.
- We will build a BizTalk solution which will hook all the bits together.
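As an illustration of that last CRM-bound step, a minimal sketch of writing the ERP account number back onto the Account entity might look like this (the schema name new_accountnumber is an assumption, based on CRM’s default new_ prefix for custom fields):

```csharp
using System;
using Microsoft.Xrm.Sdk;

public static class AccountUpdater
{
    // Writes the ERP-assigned account number back onto the CRM account record
    public static void UpdateAccountNumber(IOrganizationService service, Guid accountId, string erpAccountNumber)
    {
        var account = new Entity("account") { Id = accountId };
        account["new_accountnumber"] = erpAccountNumber; // assumed schema name of the custom field
        service.Update(account);
    }
}
```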
Closing Note
So this sums up the introduction part. Be sure to check back soon for the follow-up part, in which I’ll discuss how to build our CRM trigger.