Hosting ASP.NET Core websites in IIS using AspNetCoreModule

This blog post describes how to configure the AspNetCoreModule for hosting ASP.NET Core applications in IIS. The MSDN documentation did not point me to a working configuration on my first attempt, and it was hard to tell from the error logs what exactly was wrong.

Errors encountered for an invalid module configuration

When the AspNetCoreModule configuration is wrong, IIS will respond with a 502.5 error and an entry will be logged to the Windows System Event Log. Since the .NET Core app fails before it starts up, you will not see anything in the application's own log files.

HTTP Error 502.5 – Process Failure

The browser will return the following error page:

Common causes of this issue:

  • The application process failed to start
  • The application process started but then stopped
  • The application process started but failed to listen on the configured port

Troubleshooting steps:

  • Check the system event log for error messages
  • Enable logging the application process’ stdout messages
  • Attach a debugger to the application process and inspect

For more information visit: https://go.microsoft.com/fwlink/?LinkID=808681

Event log message

The Windows Event Viewer will show a message similar to this, indicating that the commandline in web.config hasn’t been set correctly:

Application ‘MACHINE/WEBROOT/APPHOST/MYAPP’ with physical root ‘c:\MYAPPROOTPATH’ failed to start process with commandline ‘bin\IISSupport\VSIISExeLauncher.exe -argFile IISExeLauncherArgs.txt’, ErrorCode = ‘0x80070002 : 0.

Failed request tracing log

Enabling failed request tracing for the web site will show a “Bad Gateway” error.

Web.config example

Assuming that the web server has been configured for hosting ASP.NET Core apps, the next step is to make sure that the aspNetCore module parameters are correct. For the example below, dotnet.exe must be available on the PATH for the user that runs the application pool. In the aspNetCore configuration element, processPath should be set to "dotnet.exe" and arguments should be set to the web application's entry assembly name.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
    </handlers>
    <aspNetCore processPath="dotnet.exe" arguments="myApp.Web.dll" stdoutLogEnabled="false">
      <environmentVariables>
        <environmentVariable name="ASPNETCORE_ENVIRONMENT" value="Prod" />
      </environmentVariables>
    </aspNetCore>
  </system.webServer>
</configuration>

 

Hosting an application targeting the full .Net framework

The aspNetCore module can also be used when hosting an application which targets the full .Net framework. The processPath parameter must then be set to the web application's executable assembly, and the arguments parameter can be left empty:
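
A minimal sketch of such a configuration, assuming the published executable is named myApp.Web.exe (the handler registration is the same as in the example above):

<aspNetCore processPath=".\myApp.Web.exe" arguments="" stdoutLogEnabled="false" />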

 

Updating multiple site bindings in IIS with new SSL-certificate

This blog post describes how to use a PowerShell script to update multiple IIS site bindings with a new/renewed SSL/TLS certificate. But first, some background information on why and when this may be useful.

Example scenario for using multiple site bindings

A site binding in IIS may be configured with a host name. IIS will then use the host header in the HTTP request to route requests to the correct web site. With Server Name Indication (SNI) enabled, multiple sites and host names can share the same port for incoming SSL/TLS requests.
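
As an illustration, a binding like the one shown below can be created with the WebAdministration PowerShell module; the site name and host name are made-up examples:

Import-Module WebAdministration
# -SslFlags 1 enables Server Name Indication for the binding (IIS 8 and later)
New-WebBinding -Name "MyPortal" -Protocol https -Port 443 -HostHeader "myportal.mycompany.com" -SslFlags 1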

(Screenshot: IIS site binding with host name and SNI configured)

When hosting web applications on template-based virtual machines, it may be useful to configure multiple bindings for each hosted application. For instance, imagine that you have a web application hosted at https://myportal.mycompany.com and that you add multiple host name bindings to this web site, e.g. by appending the numbers 1-10 to "myportal" or by appending tag names like "-qa", "-preprod", "-failover", etc. The web site will then be able to process any request with a matching host name, given that the DNS records point to the virtual machine.

(Screenshot: multiple host names configured for a site)

Next, consider that we have multiple virtual machines running the same application, all with the same IIS binding configuration. The machines may have different roles and may be running different versions of the application, or they may be identical clones placed behind a load balancer for scale-out.

PowerShell script for updating multiple site bindings with new certificate

The following PowerShell script updates the certificate for all bindings matching the domainNameMatchPattern regex pattern. The script is designed to be used as an Octopus Deploy script module and reads the friendly name of the certificate to use from an Octopus Deploy variable. The certificate must exist in Octopus Deploy's certificate store.

# Example of usage:
# Update-Certificates -domainNameMatchPattern "mycompany.com" -variableNameForCertificateToUse "CurrentMyCompanyDotComCertificate"

function Write-Info ($message) {
    Write-Host "Info:" $message
}

function AssignCertificate([string] $friendlyName, [string] $hostName, [int] $port) {
    $matchingCertificates = (Get-ChildItem cert:\localmachine\my) | Where-Object { $_.FriendlyName -eq $friendlyName }
    $matchCount = ($matchingCertificates | Measure-Object).Count
    if ($matchCount -ne 1) {
        Write-Info ("Found " + $matchCount + " certificates matching friendly name " + $friendlyName + " (Expecting 1 match).")
        Write-Info "The following certificates are installed: "
        (Get-ChildItem cert:\localmachine\my) | Format-Table -Property Thumbprint, FriendlyName, Subject
    }
    else {
        $certificate = $matchingCertificates[0]
        $existingBinding = Get-WebBinding | Where-Object { $_.bindingInformation -match ":$($port):$($hostName)" }
        if ($existingBinding) {
            if ($existingBinding.certificateHash -ne $certificate.Thumbprint) {
                Write-Info "Found existing binding with a different thumbprint, will remove the old certificate binding"
                $command = "& netsh.exe http delete sslcert hostnameport=$($hostName):$($port)"
                Write-Info "Executing: $command"
                Invoke-Expression $command
            }
            else {
                return
            }
        }
        $appIdGuid = [guid]::NewGuid().ToString("B")
        $command = "& netsh.exe http add sslcert hostnameport=$($hostName):$($port) certhash=$($certificate.Thumbprint) certstorename=MY appid='$($appIdGuid)'"
        Write-Info "Executing: $command"
        Invoke-Expression $command
    }
}

function Update-Certificates([string] $domainNameMatchPattern, [string] $variableNameForCertificateToUse)
{
    Write-Info "Update-Certificates starting"
    $certificateFriendlyName = $OctopusParameters["$($variableNameForCertificateToUse).Name"]
    Write-Info "Certificate friendly name is $certificateFriendlyName"
    Import-Module WebAdministration
    $bindingsToUpdate = Get-WebBinding | Where-Object { $_.protocol -eq "https" -and $_.bindingInformation -match $domainNameMatchPattern }
    Write-Info "Found $($bindingsToUpdate.Length) binding(s) to update:"
    Write-Info $bindingsToUpdate
    [regex]$bindingInfoRegEx = "\*:(?<portNo>\d+):(?<hostName>.+)"
    foreach ($binding in $bindingsToUpdate) {
        $bindingInfoMatch = $bindingInfoRegEx.Match($binding.bindingInformation)
        [int]$portNo = $bindingInfoMatch.Groups["portNo"].Value
        $hostName = $bindingInfoMatch.Groups["hostName"].Value
        AssignCertificate -friendlyName $certificateFriendlyName -hostName $hostName -port $portNo
    }
}

The script consists of the helper function AssignCertificate and the main function Update-Certificates, which will be invoked from an Octopus Deploy project step.

(Screenshot: add the script as an Octopus Deploy script module)

The script module can then be invoked from an Octopus Deploy project by using a "Run a script" step:

(Screenshot: invoke the Update-Certificates function located in the previously created script module)

The previous step assumes that the new certificate has already been installed on the relevant hosts. Otherwise, the "Import Certificate" Octopus Deploy step template can be used to install certificates on the hosts.

Optimistic concurrency with MySQL and Entity Framework

Implementing optimistic concurrency for a table in SQL Server is extremely simple. Just add a column of type TIMESTAMP and configure it like this in the Entity Framework model:
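
A typical EF6 fluent configuration for this looks roughly as follows; the entity and property names are placeholders:

public class MyEntity
{
    public int Id { get; set; }

    // Maps to a SQL Server rowversion/TIMESTAMP column
    public byte[] RowVersion { get; set; }
}

// In OnModelCreating (or an EntityTypeConfiguration):
modelBuilder.Entity<MyEntity>()
    .Property(e => e.RowVersion)
    .IsRowVersion(); // concurrency token, regenerated by the database on every update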

A SQL Server TIMESTAMP column is guaranteed to be updated with a new value each time a row is changed, and is a perfect fit for use with optimistic locking. For MySQL the situation is different because it has no datatype or functionality which replaces SQL Server’s TIMESTAMP type. You can get something similar by declaring a column like this:
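
Something along these lines (the column name is just an example):

`Version` TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP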

The problem is that the MySQL TIMESTAMP type is a datetime value with seconds as its highest precision, which makes it useless for this purpose. Even if the datatype and CURRENT_TIMESTAMP supported microsecond precision, the value would not be guaranteed to be unique for each update.

I ended up using a GUID as the datatype for the version column, and assigning a new value to the version property before Entity Framework performs updates to the database.

Version property definition:
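
In the entity class, the version property is simply a Guid:

public Guid Version { get; set; }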

Version property configuration:
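
Assuming the fluent API is used, the configuration marks the property as a concurrency token, roughly like this:

Property(e => e.Version)
    .IsConcurrencyToken();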

Next, I declared an interface which will be added to each versioned entity:
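
Based on how it is used in SaveChanges below, the interface simply exposes the Version property:

public interface IVersionedEntity
{
    Guid Version { get; set; }
}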

Last, in my DbContext class I override SaveChanges and update the Version property for all required entities before calling base.SaveChanges():

public override int SaveChanges()
{
    var concurrencyTokenEntries = ChangeTracker.Entries<IVersionedEntity>();
    foreach (var entry in concurrencyTokenEntries)
    {
        if (entry.State == EntityState.Unchanged)
        {
            continue;
        }
        entry.Entity.Version = Guid.NewGuid();
    }
    return base.SaveChanges();
}

An exception of type DbUpdateConcurrencyException will now be thrown in case the optimistic concurrency check fails when updating a row in the database.

Logging errors in NServiceBus

NServiceBus logs a generic error message when a message is sent to the error queue, and a generic warning message when a message is sent to first level retry. The problem is that these log messages don't contain the original exception message and stack trace. The exception details are stored in the message headers, but it is cumbersome to inspect the headers instead of just viewing the exception in the log output.

The recommended solution to this issue from Particular Software is to use ServicePulse for health monitoring.

The client I currently work for is using a custom made centralized logger, and we want NServiceBus to log to this log store when messages are forwarded to the error queue.

NServiceBus has a class named NServiceBus.Faults.ErrorsNotifications which contains the following observables:

  • MessageSentToErrorQueue
  • MessageHasFailedAFirstLevelRetryAttempt
  • MessageHasBeenSentToSecondLevelRetries

You can subscribe to these observables when the endpoint starts, like in the following example which logs an error when messages are sent to the error queue:

public class GlobalErrorHandler : IWantToRunWhenBusStartsAndStops
{
    private readonly ILogger _logger;
    private readonly BusNotifications _busNotifications;
    readonly List<IDisposable> _notificationSubscriptions = new List<IDisposable>();

    public GlobalErrorHandler(ILogger logger, BusNotifications busNotifications)
    {
        _logger = logger;
        _busNotifications = busNotifications;
    }

    public void Start()
    {
        _notificationSubscriptions.Add(_busNotifications.Errors.MessageSentToErrorQueue.Subscribe(LogWhenMessageSentToErrorQueue));
    }

    public void Stop()
    {
        foreach (var subscription in _notificationSubscriptions)
        {
            subscription.Dispose();
        }
    }

    private void LogWhenMessageSentToErrorQueue(FailedMessage message)
    {
        var properties = new
        {
            MessageType = message.Headers["NServiceBus.EnclosedMessageTypes"],
            MessageId = message.Headers["NServiceBus.MessageId"],
            OriginatingMachine = message.Headers["NServiceBus.OriginatingMachine"],
            OriginatingEndpoint = message.Headers["NServiceBus.OriginatingEndpoint"],
            ExceptionType = message.Headers["NServiceBus.ExceptionInfo.ExceptionType"],
            ExceptionMessage = message.Headers["NServiceBus.ExceptionInfo.Message"],
            ExceptionSource = message.Headers["NServiceBus.ExceptionInfo.Source"],
            TimeSent = message.Headers["NServiceBus.TimeSent"]
        };
        _logger.Error("Message sent to error queue. " + properties, message.Exception);
    }
}

The observables are implemented using Reactive Extensions, so you will have to install the NuGet package Rx-Core for this to work.

NServiceBus fails to connect to RavenDB during high load

Recently I experienced a serious issue with NServiceBus and RavenDB where the NServiceBus endpoints were no longer able to connect to RavenDB. The following error message was written to the Windows event log:

System.Net.Sockets.SocketException (0x80004005): An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full

It turned out that this is a known issue with RavenDB. The solution is to enable the HttpWebRequest.UnsafeAuthenticatedConnectionSharing setting on the RavenDB client connection. Be aware of the security implications of using this setting, as described in the MSDN documentation.

A configuration setting named EnableRavenRequestsWithUnsafeAuthenticatedConnectionSharingAndPreAuthenticate, which could be set in order to avoid the issue, was added to the configuration API in version 4.0.0. However, the NServiceBus configuration API for RavenDB has since been rewritten, and there is currently no documentation available on how to enable this setting using the new API.

Follow these two steps to enable the setting:

Step 1:
Add a reference to RavenDB.Client for your NServiceBus project. You should use the same version as is referenced by your NServiceBus version.

Step 2:
Use the CustomiseRavenPersistence method on the configuration API to register a callback which can be used to configure the RavenDB client connection:

public class NServiceBusConfigurator : IConfigureThisEndpoint, AsA_Server, IWantCustomInitialization
{
    public void Init()
    {
        Configure.Instance.CustomiseRavenPersistence(ConfigureRavenStore);
    }

    private static void ConfigureRavenStore(Raven.Client.IDocumentStore store)
    {
        store.JsonRequestFactory.ConfigureRequest += (sender, args) =>
        {
            var httpWebRequest = ((HttpWebRequest) args.Request);
            httpWebRequest.UnsafeAuthenticatedConnectionSharing = true;
            httpWebRequest.PreAuthenticate = true;
        };
    }
}

Wireshark to the rescue

Wireshark is a free and open source network protocol analyzer which can be really useful when analyzing a wide range of network related issues.

Recently it turned out to be a real life saver on the project I currently work on. A web service client application, developed by a consulting company in India, was going to call a web service hosted by my company on a test server in Oslo. But no matter what the developers in India tried, it would not work, and they would always get the following exception:

System.Net.WebException was caught
  Message="The underlying connection was closed: The connection was closed unexpectedly."
  Source="System.Web.Services"
  StackTrace:
    at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request)
    at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request)
    at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)

However, everything worked fine when we tested our service from external and internal networks in Oslo.

When tracing the incoming request from India in Wireshark we could see the following:

(Screenshot: Wireshark trace of the incoming request on our server)

The request reached our server, but we were unable to send the “100 Continue” response back to the client. It was possible to reach our web server through a browser on the client machine, so there should be no firewalls blocking the communication. It seemed like the connection had been closed by the client.

Next we got the developers in India to try the same request in SoapUI, and then it worked! This made us think that the problem was in the client application and not at the infrastructure level. So we spent several hours trying to troubleshoot the client environment, without any success. Google gave us numerous reports (1, 2, 3) of other people experiencing the same issue, but the suggested solutions neither worked nor explained the exact reason for the problem. Most of the suggestions involved excluding KeepAlive from the HTTP header and using HTTP version 1.0 instead of version 1.1.

The next step was to log the request by using Fiddler Web Debugger on the calling server in India and then try to replay the request. The first replay of the request failed, as expected:

HTTP/1.1 504 Fiddler – Receive Failure
Content-Type: text/html; charset=UTF-8
Connection: close
Timestamp: 22:17:14.207

[Fiddler] ReadResponse() failed: The server did not return a response for this request.

So there was no reply from our server. Next we removed the HTTP KeepAlive header, as suggested by some of the blog posts we had found, and resubmitted the request in Fiddler:

(Screenshot: the request headers in Fiddler with the Keep-Alive header removed)

And now the request worked in Fiddler! Once the TCP connection was established, we could even replay the original request which failed, and it would work.

But why did this work?

Based on the test results in Fiddler we arrived at the conclusion that the problem was not in the client application, but rather at the infrastructure level.

So we installed Wireshark on the calling server and did some more tracing. Finally we could see what was causing us problems:

(Screenshot: Wireshark trace showing an ICMP "fragmentation needed" message)

A router is telling us that the size of our IP datagram is too big, and that it needs to be fragmented. This is communicated back to the calling server by the ICMP message shown in the picture above.

By inspecting the ICMP message in Wireshark we can find some more details:

(Screenshot: details of the ICMP message in Wireshark)

There are several interesting things to observe in the picture above:

  1. The problem occurs when the router with IP address 209.58.105.21 tries to forward the datagram to the next hop (this is a backbone router located in Mumbai)
  2. The router in the next hop accepts a datagram size of 1496 bytes, while we are sending 1500 bytes.
  3. The router at 209.58.105.21 sends an ICMP message back to the caller which says that fragmentation of the datagram is needed

By executing the "tracert" command on the remote server we could get some more information about where on the route the problem occurred:

[…]
3    26 ms    26 ms    26 ms  203.200.137.9.ill-chn.static.vsnl.net.in [203.200.137.9]
4    31 ms    31 ms    31 ms  59.165.191.41.man-static.vsnl.net.in [59.165.191.41]
5    66 ms    66 ms    66 ms  121.240.226.26.static-mumbai.vsnl.net.in [121.240.226.26]
6    70 ms    70 ms    70 ms  if-14-0-0-101.core1.MLV-Mumbai.as6453.net [209.58.105.21]
7   184 ms   172 ms   171 ms  if-11-3-2-0.tcore1.MLV-Mumbai.as6453.net [180.87.38.10]
8   174 ms   173 ms   194 ms  if-9-5.tcore1.WYN-Marseille.as6453.net [80.231.217.17]
9   175 ms   176 ms   175 ms  if-8-1600.tcore1.PYE-Paris.as6453.net [80.231.217.6]
10   191 ms   176 ms   229 ms  80.231.154.86
11   174 ms   174 ms   213 ms  prs-bb2-link.telia.net [213.155.131.10]
[…]

Conclusions

A white paper from Cisco describes the behaviour we observed above. The router which requested fragmentation of the datagram did not do anything wrong; it simply acted according to the protocol standards. The problem was that the OS and/or network drivers on the calling server did not act on the ICMP message: they neither used IP fragmentation nor reduced the MTU size to a value that would not require fragmentation.

According to the Cisco white paper it is a common problem that the ICMP message will be blocked by firewalls, but that was not the case for our scenario.

And what about the request we got working in Fiddler by removing "Connection: Keep-Alive" from the header? It worked because removing the header made the datagram small enough (<= 1496 bytes) to not require fragmentation.

Resources

Wireshark homepage: http://www.wireshark.org/

Resolve IP Fragmentation, MTU, MSS, and PMTUD Issues with GRE and IPSEC: http://www.cisco.com/en/US/tech/tk827/tk369/technologies_white_paper09186a00800d6979.shtml

First experiences with using WinRM/WinRS for remote deployment

What is WinRM/WinRS?

Windows Remote Management (WinRM) is a remote management service which was first released with Windows 2003 R2.

WinRM is a server component, while Windows Remote Shell (WinRS) is a client which can be used for executing programs remotely on computers which run WinRM.

The following example shows how to remotely list the contents of the C: folder on a computer with host name Server01:

WinRS -r:Server01 dir c:

Using WinRM for remote deployment

My first encounter with WinRM/WinRS was to execute some PowerShell scripts for automatic remote deployment of a test environment. The commands were executed from an MSBuild script in a CruiseControl.Net build.

The scripts would first uninstall any old versions of the components, and then renew databases and install new component versions. Finally a set of NUnit tests would be executed on the environment.

WinRS failing to execute remote commands due to limited quotas

It was very easy to get started with WinRS, and in the beginning everything seemed to work fine. But now and then the execution failed with a System.OutOfMemoryException or with the message "Process is terminated due to StackOverflowException".

The reason for these problems was not obvious, since there was no mention of quotas in the error messages, but after some investigation it turned out that they were caused by a too low memory quota on the server. The default memory quota is 150 MB, and it can be changed by executing the following command on the remote server (this sets the memory quota to 1 GB):

WinRM set winrm/config/Winrs @{MaxMemoryPerShellMB="1000"}

Multi-Hop configuration

In one of my scripts I tried to use a UNC path to access a remote share from the target computer, but got "Access is denied". It turned out that the Credential Security Service Provider (CredSSP) had to be configured on the client and on the server in order to achieve this: http://msdn.microsoft.com/en-us/library/windows/desktop/ee309365(v=VS.85).aspx
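
As a rough sketch, CredSSP can be enabled with the WSMan cmdlets along these lines; Server01 is a placeholder for the target host, and the credential delegation policy details are covered in the link above:

# On the client machine (where WinRS/the deployment script runs):
Enable-WSManCredSSP -Role Client -DelegateComputer "Server01"

# On the target server:
Enable-WSManCredSSP -Role Server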

Resources

Configuring WinRM

Quota Management for Remote Shells

Using Gendarme with CruiseControl.Net for code analysis

Gendarme is developed as part of the Mono project and is a tool for code analysis. It comes with a wide range of predefined rules and can easily be extended with your own custom rules, which you can write in C# or other .Net languages.

Configuring the CruiseControl.Net build task

CruiseControl.Net has been delivered with the Gendarme task since version 1.4.3. However, the Gendarme executable must be downloaded and installed separately. The binary can be downloaded from this link: https://github.com/spouliot/gendarme/downloads

Gendarme is designed for processing the build output assemblies in one directory, i.e. it does not support recursive searching for assemblies. This fits well if you have one CruiseControl.Net build project per service/application, but in my case I wanted to generate a report for an entire product branch with multiple services and applications.

This can be achieved by using a configuration element which lets you specify a file that contains the full path to each assembly which should be analysed. In order to generate the file, I execute the following PowerShell command:

Get-ChildItem -Path 'D:\SomeDir\Work' -Recurse `
    -Include MyCompany*.dll `
    -Exclude *.Test*.dll,*Generated.dll |
    sort -Property Name -Unique |
    sort -Property FullName |
    foreach {$_.FullName} |
    Out-File -FilePath 'D:\SomeDir\Artifact\AssembliesForCodeAnalysis.txt' -Width 255

The PowerShell command above will recursively scan the directory "D:\SomeDir\Work" and include all DLL files whose names start with "MyCompany", excluding test and generated assemblies (those matching "*.Test*.dll" or "*Generated.dll"). Next it selects distinct file names regardless of path (in order to filter out shared assemblies which are duplicated), before sorting by full path name and writing the output to a file.

Using the PowerShell command as an executable step, the project configuration in ccnet.config turns into this:
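
A sketch of what such a project configuration could look like; the paths and the PowerShell invocation are placeholder examples, and the exact Gendarme task options should be checked against the task documentation linked under Resources:

<project name="MyProduct-CodeAnalysis">
  <tasks>
    <!-- Generate the assembly list file with the PowerShell command shown above -->
    <exec>
      <executable>powershell.exe</executable>
      <buildArgs>-NonInteractive -File D:\SomeDir\Scripts\CreateAssemblyList.ps1</buildArgs>
    </exec>
    <!-- Run Gendarme against the assemblies listed in the generated file -->
    <gendarme>
      <executable>C:\Tools\Gendarme\gendarme.exe</executable>
      <assemblyListFile>D:\SomeDir\Artifact\AssembliesForCodeAnalysis.txt</assemblyListFile>
    </gendarme>
  </tasks>
</project>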

Configuring the Dashboard

The stylesheets which are needed for showing the formatted reports in the CruiseControl.Net dashboard are included with the CruiseControl.Net installation, and just need to be referenced in dashboard.config:
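
Something along the following lines, assuming the Gendarme stylesheet shipped with your CruiseControl.Net version is named gendarme.xsl:

<buildPlugins>
  <!-- existing report plugins -->
  <xslReportBuildPlugin description="Gendarme Report"
                        actionName="GendarmeBuildReport"
                        xslFileName="xsl\gendarme.xsl" />
</buildPlugins>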

Resources

Gendarme home page: http://www.mono-project.com/Gendarme

Gendarme CCNet task configuration: http://confluence.public.thoughtworks.org/display/CCNET/Gendarme+Task

Intellisense for CruiseControl.Net configuration files

Editing the CruiseControl.Net configuration file ccnet.config may be a cumbersome process. The XML configuration elements are documented at http://ccnetlive.thoughtworks.com/ccnet/doc/CCNET/Configuring%20the%20Server.html, but it would be more convenient to have intellisense available when editing the configuration file.

Intellisense for CCNet configuration files can be added to Visual Studio by using the schema definition file ccnet.xsd. Unfortunately this file is not distributed with the CCNet installation package, but it is included in the source distribution. For the current version the file is located at "project\ccnet.xsd" in the downloadable source distribution zip file.

You can also get it from the source code repository at SourceForge (link is to version 1.5).

Adding the XSD schema to Visual Studio

Once you have gotten your hands on the ccnet.xsd file, it must be copied to the schema folder of your Visual Studio installation, e.g. to "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Xml\Schemas".

Note: Copying the file to the folder "Microsoft Visual Studio 10.0\Common7\Packages\schemas\xml" will not have any effect!

Configuring the namespace

Which namespace should be used for the CCNet configuration files? A namespace must be specified in order for Visual Studio to know which schema to use for intellisense.

ccnet.xsd defines the target namespace "http://thoughtworks.org/ccnet/1/5":
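
The schema declaration looks something like this:

<xs:schema targetNamespace="http://thoughtworks.org/ccnet/1/5"
           xmlns:xs="http://www.w3.org/2001/XMLSchema">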

… which means that the following namespace must be defined in the CCNet configuration files:
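
For example, on the root element:

<cruisecontrol xmlns="http://thoughtworks.org/ccnet/1/5">
  <!-- project definitions -->
</cruisecontrol>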

The schema file seems to favor using XML elements instead of attributes for many configuration options, which differs from many of the example configurations distributed with CCNet, but I don't consider this a big issue.