Friday, December 30, 2011

Allow Silverlight to access clipboard after having clicked “NO” before

Silverlight applications can put information into the computer’s clipboard, but only when the user grants the application access to it.
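For reference, clipboard access is requested from code through the standard Silverlight Clipboard API. A minimal sketch (the button handler around it is made up for illustration; the call must come from a user-initiated event):

private void CopyButton_Click(object sender, System.Windows.RoutedEventArgs e)
{
    // The first call like this triggers the clipboard permission dialog.
    System.Windows.Clipboard.SetText("Copied from a Silverlight application");
}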

So while using Silverlight apps online you might have come across the dialog asking whether to allow clipboard access.


I accidentally clicked “no” once and had a lot of trouble reversing this step, because I wasn’t prompted with this question again.

Even Google was no help in this case. Neither was re-installing Silverlight.

Finally, a heroic search through the registry produced a solution. There is a place in the registry where Silverlight stores the information about clipboard access:
HKEY_CURRENT_USER\Software\AppDataLow\Software\Microsoft\Silverlight\Permissions

Inside that key there is a subkey for each Silverlight application that has asked this question. If you delete the corresponding subkey, you will be prompted again the next time.
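If you prefer code over regedit, a minimal sketch along these lines should do the same thing (the entry name to delete is a made-up placeholder; use whatever you find under the key):

using System;
using Microsoft.Win32;

class ResetClipboardPermission
{
    static void Main()
    {
        const string path = @"Software\AppDataLow\Software\Microsoft\Silverlight\Permissions";
        using (var permissions = Registry.CurrentUser.OpenSubKey(path, writable: true))
        {
            // list the applications that have stored a clipboard decision
            foreach (var app in permissions.GetSubKeyNames())
                Console.WriteLine(app);

            // delete the entry of the application that should ask again (placeholder name)
            permissions.DeleteSubKeyTree("http://apps.example.com");
        }
    }
}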


Monday, November 14, 2011

Setting environment variables on Windows Azure: easy way

A few hours after writing the last post, I discovered that one can specify environment variables in the .csdef file:

<Runtime>
  <Environment>
    <Variable name="a" value="b" />
  </Environment>
</Runtime>

 


I don’t know yet whether this does the same as my code in the last post. I might investigate that...

Setting environment variables in Windows Azure

At the latest when you want to run Java applications on Windows Azure, you need to set several environment variables. Since there are startup tasks that run before your role is started, they are the obvious place to set these variables. But there are some pitfalls:

Setting environment variables the intuitive way

Setting environment variables with the well-known set command works on Windows Azure just like it does on premises:

set MYPATH=C:\Directory\

BUT:


  • This only works for one command shell session

  • This only works for the current user

The variables should be available at all times and for all users, so they need to be set as system variables, and system variables can only be set by an administrator.


Setting environment variables correctly



  1. Make sure your startup script is run with elevated permissions.
    Your .csdef file should look similar to this:
    <Startup>
      <Task commandLine="Startup.cmd" 
            executionContext="elevated" 
            taskType="simple" />
    </Startup>

  2. Use the setx command line tool with the /M parameter to set system variables:
    setx MYPATH C:\Directory\ /M

  3. If you need the variables within your startup script itself, you have to set them with the set command as well (an alternative managed-code sketch follows this list).
    - set works for the current session (the startup script)
    - setx works for all sessions started after the variables have been set
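As a rough alternative sketch (not necessarily equivalent to what the startup-script approach does): a machine-wide variable can also be set from managed code in the role’s OnStart, provided the role runtime is elevated via <Runtime executionContext="elevated" /> in the .csdef:

using System;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Machine-level variables are visible to all users and to all
        // processes started afterwards; this call requires elevation.
        Environment.SetEnvironmentVariable("MYPATH", @"C:\Directory\",
            EnvironmentVariableTarget.Machine);

        return base.OnStart();
    }
}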

References:

How to use the setx command: http://ss64.com/nt/setx.html

Tuesday, October 25, 2011

Sending e-mails using IIS SMTP Server on Windows Azure

If you want to send emails from a Windows Azure Role, one possibility is to use the built-in SMTP server of IIS 6.0. Here is a guide on how to use startup scripts and PowerShell commands to set up an SMTP server in the cloud.

Before you start, make sure that you’re running Windows Server 2008 R2 in the cloud by setting the osFamily attribute to osFamily="2" in the ServiceConfiguration.cscfg file.

Setup the SMTP

Here is a guide that shows how to set up the SMTP server manually. Those are the same steps we are going to automate here with scripts. It helped me understand what needs to be done: just deploy any Azure project to the cloud, RDP in, install the SMTP Server feature and follow the steps described in that guide.

But now to the automated setup scripts:

1. Create a new startup task in the ServiceDefinition.csdef

<Startup>
  <Task commandLine="Startup.cmd" executionContext="elevated" taskType="simple" />
</Startup>


2. The startup file first needs to install the SMTP Server feature automatically. To achieve this, it uses PowerShell 2.0. After that it calls a VBScript that configures the SMTP server.



The Startup.cmd:

powershell -command "Set-ExecutionPolicy Unrestricted"

powershell .\InstallSmtp.ps1

cscript ConfigSmtp.vbs



The InstallSmtp.ps1 PowerShell script:

Import-Module Servermanager
Add-WindowsFeature SMTP-Server


3. Now we need to create the VBScript that configures the SMTP server: We need to add all IP addresses that are allowed to send mails through this SMTP server to the grant list. Assuming only applications running on the same server are allowed to send emails we’re going to add 127.0.0.1 to that list.



The ConfigSmtp.vbs

Option Explicit
Dim smtpServer, relayIpList
' Get the default instance of the SMTP server
Set smtpServer = GetObject("IIS://localhost/smtpsvc/1")
' Get the IPList
Set relayIpList = smtpServer.Get("RelayIpList")
' Add localhost to that list
relayIpList.GrantByDefault = false
relayIpList.IpGrant = "127.0.0.1"
' Save changes
smtpServer.Put "RelayIpList",relayIpList
smtpServer.SetInfo


4. Deploy to the cloud



Use the SMTP from C# code



To use the local SMTP server from within your C# code use the following lines:

var client = new System.Net.Mail.SmtpClient("localhost");
client.Send("from@domain.tld",
            "to@domain.tld",
            "This is my subject",
            "Hello, this is a mail from the cloud!");


Don’t get blacklisted



Even Steve Marx recommended not using the SMTP server feature on Windows Azure instances, because their addresses would soon be blacklisted. To avoid ending up on a blacklist, you can use a smart host to deliver your emails.



If you want to use a smart host in your deployment, you need to extend the ConfigSmtp.vbs:

' set the outbound connector to a smart host
smtpServer.SmartHostType = 2
smtpServer.SmartHost = "smtp.mysmarthost.tld"
' use basic authentication
smtpServer.RouteAction = 264
smtpServer.RouteUserName = "myName"
smtpServer.RoutePassword = "myPassword"
' save changes
smtpServer.SetInfo


In this case I used the SMTP relay service offered by http://dyn.com, which worked just fine. Depending on the service you use, the settings might differ.








Monday, October 24, 2011

Installing updates manually on Windows Azure

While trying to manually install the PowerShell 2.0 update on a Windows Azure Role I ran into the following error message:

“Installer encountered an error: 0x80070422”


Error code 0x80070422 means that a required service is disabled. To solve the problem:

  1. Open the Server Manager: Start > Right Click on “Computer” > Manage
  2. Navigate to Configuration > Services
  3. Enable the Background Intelligent Transfer Service and start it
  4. Enable the Windows Update Service and start it
  5. Now you can install updates manually (or script these steps, as sketched below).
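The same steps can also be scripted, for example with a small elevated C# helper. This is only a sketch using the standard service names (BITS and wuauserv), not something from the original post:

using System.Diagnostics;
using System.ServiceProcess;   // reference System.ServiceProcess.dll

class EnableUpdateServices
{
    static void Main()
    {
        foreach (var name in new[] { "BITS", "wuauserv" })
        {
            // switch the startup type from "Disabled" to "Manual" (sc.exe ships with Windows)
            Process.Start("sc.exe", "config " + name + " start= demand").WaitForExit();

            // start the service if it is not already running
            using (var service = new ServiceController(name))
            {
                if (service.Status != ServiceControllerStatus.Running)
                    service.Start();
            }
        }
    }
}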

I got this hint from: Came Too Far

Sunday, September 25, 2011

How To: Look inside the Service Package file

If you ever wanted to see what’s inside a published .cspkg file: here is a small guide on how to achieve this…

First of all, a .cspkg file is just a ZIP file with a different extension, but its contents are encrypted. There is, however, a way to create unencrypted service packages:

  1. Close Visual Studio
  2. Shut down Windows Azure Compute Emulator
  3. Go to Control Panel > System and Maintenance > System
  4. Click on “Advanced Settings” in the pane on the left
  5. Switch to the “Advanced” tab and click on the “Environment Variables” Button.
    The “Environment Variables” dialog appears.
  6. Check for a System Variable named _CSPACK_FORCE_NOENCRYPT_
    If there is no such variable, create it.
  7. Set the value of the _CSPACK_FORCE_NOENCRYPT_ variable to “true” (or set it from code, as sketched after this list)
  8. Start Visual Studio again and publish a Cloud Service Project.
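If you would rather not click through the dialog, the same system variable can be set with a tiny one-off helper (a sketch; it must run as administrator, and Visual Studio still needs to be restarted afterwards):

using System;

class ForceUnencryptedPackages
{
    static void Main()
    {
        // machine-wide variable, picked up by Visual Studio after a restart
        Environment.SetEnvironmentVariable("_CSPACK_FORCE_NOENCRYPT_", "true",
            EnvironmentVariableTarget.Machine);
    }
}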

Note: When publishing without encryption, Visual Studio warns you with the following message in the output window
CloudServices44 : Forcing creation of unencrypted package...

To sneak inside the package do the following steps:

  1. Rename your <cloudservice>.cspkg file to <cloudservice>.zip
  2. Unpack that .zip folder
  3. Inside the uncompressed folder there is a file named after your web role <WebRole><Guid>.cssx
  4. Rename the .cssx file to .zip
  5. Since you performed the steps before, this .zip file is unencrypted now. Unzip it.
  6. Find the content of your package in the “sitesroot” folder.

Caution: Although both encrypted and unencrypted packages can be deployed to the cloud, it is highly recommended to use only encrypted packages, for security reasons.

Issue: change physical path of site named “Web”

In earlier posts I showed how to use multiple sites within a single WebRole. This requires using the physicalDirectory attribute in the ServiceDefinition file. If you have tried to use this attribute, you might have run into the following issue:

The default site in a WebRole is named “Web”. When you set its physicalDirectory attribute to point to any location, nothing changes.

It seems as if a site named “Web” ignores the physicalDirectory attribute.

But if you change the name to anything else, the physicalDirectory attribute will be used!

For further explanation have a look at the following xml snippets from the .csdef file:

<Site name="Web">
  <Bindings>
    <Binding name="Endpoint1" endpointName="Endpoint1" />
  </Bindings>
</Site>
This will use the default content.
<Site name="Web" physicalDirectory="C:\Code\WebApplication2">
  <Bindings>
    <Binding name="Endpoint1" endpointName="Endpoint1" />
  </Bindings>
</Site>
This will still use the default content.
<Site name="Web1" physicalDirectory="C:\Code\WebApplication2">
  <Bindings>
    <Binding name="Endpoint1" endpointName="Endpoint1" />
  </Bindings>
</Site>
This will use content from WebApplication2.

Friday, August 26, 2011

msshrtmi.dll - Hybrid Applications: Running in the cloud AND on premise

Recently I tried to deploy an application that was originally designed for the cloud on an in-house server. The only cloud capability the application used was the Windows Azure Blob Storage.

So beforehand I changed the application to distinguish between

  • being run in the cloud and using the Blob Storage and

  • being run on premises and using the local disk storage

by using the following flag as an indicator:

RoleEnvironment.IsAvailable
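The branching itself looked roughly like this (a simplified sketch; the two storage helpers are hypothetical placeholders, not code from the original application):

using Microsoft.WindowsAzure.ServiceRuntime;

public static class FileStore
{
    public static void Save(string name, byte[] content)
    {
        if (RoleEnvironment.IsAvailable)
            SaveToBlobStorage(name, content);   // running in the cloud
        else
            SaveToLocalDisk(name, content);     // running on premises
    }

    static void SaveToBlobStorage(string name, byte[] content) { /* uses the StorageClient */ }
    static void SaveToLocalDisk(string name, byte[] content) { /* uses System.IO */ }
}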

In both scenarios the deployment contains Microsoft.WindowsAzure.StorageClient.dll. This worked perfectly in the cloud but raised the following error on premises:


Exception type: FileNotFoundException
Exception message: File or assembly "msshrtmi, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35", or one of its dependencies, was not found.
   at Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.InitializeEnvironment()
   at Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment..cctor()


There are two possible ways to solve this issue:



  1. Copy msshrtmi.dll manually into your on-premises deployment.
    For example into the \bin folder next to Microsoft.WindowsAzure.StorageClient.dll.

  2. Install the Windows Azure SDK on your in-house server.

Monday, July 18, 2011

Windows Azure Web Role Accelerator

Deploying new websites to Windows Azure or updating existing ones used to take a lot of time, and the Windows Azure team has worked hard on this issue to make it more comfortable for developers.

The result of this effort is the “Windows Azure Accelerator for Web Roles”, announced a few days ago. It is a new project template that generates an Azure project containing a WebRole and a website project for management purposes. There you can define how many instances of your website should be created in the cloud. After deploying this project template, Windows Azure creates the requested instances. You can then create websites in the management website and publish other websites to the cloud in less than 30 seconds; those websites are immediately available on all instances. The template uses your Windows Azure storage account for this feature.

Here is a short demo video of the Web Role Accelerator:

Download the Web Role Accelerator from codeplex here: http://waawebroles.codeplex.com/

Backlink to the Windows Azure Team Blog: NOW AVAILABLE: Windows Azure Accelerator for Web Roles

Sunday, July 3, 2011

tangible.AzureIO - small sample

Because I’m short on time at the moment, I’ll just post a tiny little sample on the AzureIO library. I promise to publish a larger sample soon. Maybe I can think of something really useful, we’ll see…

For the moment it’s just a simple WPF application that allows the user to select a file (#1 in the picture) and upload it to the Windows Azure BlobStorage (#2). As a proof that the file was really uploaded, the application displays the url of the blob where the file can be viewed in the browser.


There’s not a lot of code behind this sample. At first it’s to mention that the AzureIO library requires a configuration setting in the app.config file. The connection string to the StorageAccount needs to be set there.

<configuration>
  <appSettings>
    <add key="StorageCredentials" value="UseDevelopmentStorage=true"/>
  </appSettings>
</configuration>

This configuration makes the application use the development storage of the Windows Azure SDK. To upload the file to the blob storage those three lines of code suffice:

var fileName = System.IO.Path.GetFileName(txtFile.Text);
var content = System.IO.File.ReadAllBytes(txtFile.Text);
tangible.AzureIO.File.WriteAllBytes(fileName, content);

Voilà – that’s it. Copy the displayed URL to your browser and get your file from the cloud!
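As a quick way to verify the upload from code as well, the blob can be fetched straight from the displayed URL with plain .NET (the URL below is just a placeholder for whatever the sample shows):

using System;
using System.Net;

class DownloadCheck
{
    static void Main()
    {
        // the URL displayed by the sample after the upload (placeholder value)
        var blobUrl = "http://127.0.0.1:10000/devstoreaccount1/files/MyFile.ext";

        var downloaded = new WebClient().DownloadData(blobUrl);
        Console.WriteLine("Downloaded {0} bytes", downloaded.Length);
    }
}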


Download the sample code here: AzureIOSample.zip

Friday, July 1, 2011

tangible.AzureIO - Accessing BlobStorage like System.IO

Writing blobs to and reading them from the Windows Azure BlobStorage isn’t as comfortable as one could wish. So why not deal with files transferred to the cloud as if they were the files we are used to working with?

Nothing looks smoother than this simple call to create a new file and write some bytes in it:

System.IO.File.WriteAllBytes("MyFile.ext", new byte[] { ... });


Doing the same in the cloud requires connecting to the cloud storage, creating a blob client, dealing with containers and different blob types… a huge overhead compared to what one really needs in this case. But how about this call, which does exactly the same as shown above for the local file system, only in the cloud:

tangible.AzureIO.File.WriteAllBytes("File.ext", new byte[] { ... });

Easy, isn’t it?
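For comparison, the “long way round” with the plain StorageClient library looks roughly like this (a sketch; the container name is arbitrary):

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// the same upload without the library: connect, create a client,
// make sure the container exists, then write the blob
var account = CloudStorageAccount.DevelopmentStorageAccount;
var blobClient = account.CreateCloudBlobClient();

var container = blobClient.GetContainerReference("files");   // arbitrary container name
container.CreateIfNotExist();

var blob = container.GetBlockBlobReference("File.ext");
blob.UploadByteArray(new byte[] { /* ... */ });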


The tangible.AzureIO library offers nearly all of the functionality of System.IO for the cloud. A must-have for developers who want to move their applications to the cloud with as few changes as possible! Find counterparts to System.IO.Path in tangible.AzureIO.Path, to System.IO.Directory in tangible.AzureIO.Directory, and so on…


I spent a lot of time on this project and can finally publish a first beta version here. The library for download below is delivered “as is”, without any warranty, and claims neither completeness nor faultlessness. All rights reserved. You may use it in any of your projects, but it is still a beta and subject to change.


I appreciate any comments and reviews on this library! So feel free to leave a comment to this blog post or send me an email to nico[at]zehage.net.


Download tangible.AzureIO here: tangible.AzureIO.zip

Sunday, June 19, 2011

CloudNinja project: Multi tenancy and metering sample

A few days ago, version 2.0 of the Cloud Ninja project was published on CodePlex. This sample project demonstrates how some of Windows Azure’s capabilities can be used to implement common cloud features.

The features shown in the sample are:

  • Multi-tenancy, where each tenant has its own SQL Azure database
  • Metering the usage of each tenant’s resources
  • Auto scaling methods
  • Task scheduling
  • Federated Identity to allow customers to adapt the look of the application

To get an impression of this project, have a look at the application design of CloudNinja as shown on CodePlex…

Sample source code, the application design, documentation and a user guide are available for download on CodePlex. Visit the project site here: http://cloudninja.codeplex.com

Thursday, June 16, 2011

Kinect for Windows SDK beta now available!

For all of you who have awaited this moment as longingly as I did: as of today, the beta of the Kinect for Windows SDK is available for download. Enjoy!

Download here

Friday, June 10, 2011

Ensuring only one WorkerRole instance performs a task at a time

As a developer in a cloud environment, one has to deal with several issues: multiple instances that can be shut down and moved to other machines, plus up- and downscaling, make it impossible to pin down a single instance and be sure that it is up and running. The cloud environment only ensures that the necessary number of instances is available at all times.

Thinking of a scenario where special tasks are to be performed regularly or where those tasks may only be performed by a single machine at a time, it is essential to identify a “master” that will execute the tasks. So how to deal with this?

This example shows how to solve this problem with a file lock approach. Each instance wakes up after a given period of time and checks if it can become the “master” instance that will perform the necessary tasks. If this instance can be the “master” it puts a lock file into the cloud storage, performs the tasks and deletes this lock file again.
If an instance finds an existing lock file when checking for becoming the “master”, it does not perform any task.

Here is how you can implement this behavior step by step:

  1. Create a new Windows Azure Project in Visual Studio 2010 and add a single WorkerRole to this project.
  2. Inside the WorkerRole.cs, prepare to connect to the cloud storage by declaring three static members containing information about where to place the lock file and how this file is to be named:
    public class WorkerRole : RoleEntryPoint
    {
        /// <summary>
        /// Determines the container where the block file will be placed
        /// in the cloud storage
        /// </summary>
        private static string blockFileContainer = "tasksample";
        /// <summary>
        /// Determines the name for the block file
        /// </summary>
        private static string blockFile = "block.ext";
        /// <summary>
        /// Represents the full path to the block file
        /// </summary>
        private static string blockFilePath = blockFileContainer + "/" + blockFile;

        ...
    }

  3. In the same file create a CloudBlobClient to access the storage.
    public class WorkerRole : RoleEntryPoint
    {
        ...
        /// <summary>
        /// Client to access the blob storage
        /// </summary>
        private CloudBlobClient blobClient = CloudStorageAccount.DevelopmentStorageAccount.CreateCloudBlobClient();

        ...
    }

  4. Now we need to define for each instance in what interval it will check for becoming the “master” and performing the tasks. Since the development fabric is quite fast and all instances will start up nearly at the same time, initializing a randomizer by time won’t work here. In the real cloud it might be different.
    As a workaround we’re going to initialize a randomizer depending on the ID of the instance it belongs to. The typical ID of an instance in the development fabric is for example “deployment(19).MyProject.WorkerRole.0”, where the 0 indicates that this instance is the first one in the deployment. So the randomizer for this instance will be initialized with 0 as a seed.
    Then we choose a random value between 10 and 30 seconds.
    The code looks as follows:
    public class WorkerRole : RoleEntryPoint
    {
        ...

        /// <summary>
        /// Numeric ID of this instance (might only work in development fabric)
        /// </summary>
        private static int instanceID = int.Parse(RoleEnvironment.CurrentRoleInstance.Id.Substring(RoleEnvironment.CurrentRoleInstance.Id.LastIndexOf(".") + 1));
        /// <summary>
        /// Milliseconds this instance needs to wait until trying to perform task
        /// </summary>
        private static int millisecondsToWait = new Random(instanceID).Next(10000, 30000);

        ...
    }

  5. Visual Studio prepares the WorkerRole.cs file so that the method “OnStart” is already overridden in the template. In this method we need to make sure that the container where we want to store the lock file exists.
    public class WorkerRole : RoleEntryPoint
    {
        ...

        public override bool OnStart()
        {
            ...

            // make sure that the file lock container exists!
            blobClient.GetContainerReference(blockFileContainer).CreateIfNotExist();

            return base.OnStart();
        }

        ...
    }

  6. The “Run” method is also already implemented. Here we need to execute our logic: At first the instance goes to sleep for the determined amount of time. After waking up, it will check if it can perform the tasks. If yes it will block other instances, perform the tasks and then delete the lock again.
    public class WorkerRole : RoleEntryPoint
    {
        ...
        public override void Run()
        {
            // This is a sample worker implementation. Replace with your logic.
            Trace.WriteLine("ProcessWorker entry point called", "Information");

            while (true)
            {
                // wait
                Trace.WriteLine("Waiting for " + millisecondsToWait + "ms", "Information");
                Thread.Sleep(millisecondsToWait);

                // check if this instance should perform the task
                // by checking the file lock in the cloud storage
                if (CanPerformTask())
                {
                    // block other instances from performing the task
                    BlockOtherInstances();

                    // perform the task
                    PerformTask();

                    // release block to allow other instances to perform the task
                    ReleaseBlock();
                }
                else
                {
                    // we are not allowed to perform this task
                    Trace.WriteLine("May not perform task!", "Information");
                }
            }
        }
        ...
    }

  7. The function “CanPerformTask” checks if the given lock file exists in the blob storage by trying to fetch its attributes. If the attributes can be retrieved this file exists, otherwise an exception will be thrown.
    /// <summary>
    /// This function determines if this instance can perform the task
    /// by checking if any other instance is currently holding the lock file
    /// </summary>
    /// <returns></returns>
    private bool CanPerformTask()
    {
        Trace.WriteLine("Checking...", "Information");
        // check if the locking file exists
        try
        {
            // try to get the attributes from the lock file
            // to check if the file exists
            blobClient.GetPageBlobReference(blockFilePath).FetchAttributes();
        }
        catch
        {
            // the lock file does not exist -> this instance may perform the task
            return true;
        }

        // the blob exists -> this instance may not perform the task atm
        return false;
    }

  8. In the method “BlockOtherInstances” we create a new page blob with a size of 0 bytes and store in a metadata attribute which instance created this blob. That way we make sure that only the instance that created a lock file can delete it again.
    /// <summary>
    /// This method blocks other instances from performing the task
    /// by creating the file lock.
    /// </summary>
    private void BlockOtherInstances()
    {
        Trace.WriteLine("Blocking other instances", "Information");

        // create a new blob at the lock file url with size 0
        // and note in the metadata that this instance created the lock file
        var block = blobClient.GetPageBlobReference(blockFilePath);
        block.Create(0, new BlobRequestOptions() { BlobListingDetails = BlobListingDetails.All });
        block.Metadata["CreatingInstance"] = RoleEnvironment.CurrentRoleInstance.Id;
        block.SetMetadata();
    }

  9. When releasing the lock again in the “ReleaseBlock” method, we check if the instance that intends to delete the lock is the same that created this lock. If the instances match, we delete the page blob again.
    /// <summary>
    /// This method releases the lock file so that other instances
    /// can perform the task.
    /// </summary>
    private void ReleaseBlock()
    {
        // get the block file and its attributes
        var block = blobClient.GetPageBlobReference(blockFilePath);
        block.FetchAttributes();

        // check if this instance created the block
        if (block.Metadata["CreatingInstance"] == RoleEnvironment.CurrentRoleInstance.Id)
        {
            Trace.WriteLine("Deleting block file", "Information");
            // this instance created the block > delete it
            block.Delete();
        }
    }

  10. Last but not least: performing a task in this example means writing a message to the trace and waiting for 5 seconds.
    /// <summary>
    /// This method represents the task that may only be performed by a single
    /// instance at a time.
    /// </summary>
    private void PerformTask()
    {
        // for demonstration purposes, as a task
        // we only write an information message to the trace.
        Trace.WriteLine(String.Format("Performing the task at {0}", DateTime.Now.ToString()), "Information");
        Thread.Sleep(5000);
    }

Now feel free to set up as many instances as you wish and see how this example works. Here is a screenshot of two running instances. The first one has an interval of 24.5 seconds and the second one waits 14.9 seconds before trying to perform the task. Both instances perform the task until, at 13:27:22, the first one finds an existing file lock from the other instance, which is already performing the task. So it goes back to sleep…




You can download the source code of this example project here: PerformingSingleTask.zip

Saturday, June 4, 2011

Multiple Sites in a single CloudProject: Using Host Headers

The last and – for me – most interesting way to host multiple websites in a single Windows Azure deployment is to use Host Headers. This allows you to map for example www.mydomain.tld and www.anotherdomain.tld to the same cloud deployment (http://myproject.cloudapp.net) via DNS and host their content in the cloud.

Create a new Cloud Project in Visual Studio and add an ASP.NET WebRole to this project.


Then, add a new ASP.NET Application to the same solution, next to the cloud project and its WebRole.


The key is again the ServiceDefinition.csdef file: this time we need to add another <Site> element to the default <WebRole>’s <Sites> tag. The new <Site> needs an arbitrary name in its name attribute.
The physicalDirectory attribute needs to contain the path where Visual Studio can find the files of this website in order to publish it to the cloud. This path can be set either as an absolute path or as a relative one; a relative path is resolved relative to the ServiceDefinition.csdef file.

This new <Site> needs to be bound to the proper host header and the default <Site> also needs a host header to be set. So copy the <Binding> from the default <Site> and paste it inside the new <Site> element. Make sure that those bindings have different host headers. Otherwise you’ll get a compilation error.

In the end, the ServiceDefinition file looks like this:

<ServiceDefinition name="MultiSitesByHostHeader" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1">
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" hostHeader="www.mydomain.tld" />
        </Bindings>
      </Site>

      <Site name="TheOtherApplication" physicalDirectory="../WebApplication1">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" hostHeader="www.anotherdomain.tld"/>
        </Bindings>
      </Site>
    </Sites>

    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="http" port="80" />
    </Endpoints>
    ...
  </WebRole>
</ServiceDefinition>

But before you can run this cloud project and test it, you need to map the host headers to the cloud project in your DNS. For testing locally in the Azure compute emulator, you need to add the following lines to your HOSTS file:

127.0.0.1     www.mydomain.tld
127.0.0.1     www.anotherdomain.tld

Open Notepad with elevated permissions (as administrator) and open the HOSTS file in C:\Windows\System32\drivers\etc. Insert the two lines and save the file again.


Now run the cloud project. Don’t panic when the default URL http://127.0.0.1:81 returns a 404. Browse to http://www.mydomain.tld:81/ and find the first application there. On http://www.anotherdomain.tld:81/ there is the other one.


Download the sample code here:  MultiSitesByHostHeader.zip