Could not find a base address that matches scheme net.pipe for the endpoint with binding NetNamedPipeBinding – WAS and IIS hosting

In most cases this error indicates a missing host/base address section in the server's config file.

So, simply add a host section to the service section and define base addresses for the schemes you use, like here:

<services>
   <service name="xxxx">
       <host>
           <baseAddresses>
               <add baseAddress="http://localhost:8000/MyService" />
               <add baseAddress="net.tcp://localhost:8100/MyService" />
               <add baseAddress="net.pipe://localhost/" />
           </baseAddresses>
       </host>
   </service>
</services>

But you can also host a service that only provides a net.pipe binding in IIS and benefit from the Windows Process Activation Service (WAS). With WAS you let IIS control your service: IIS ensures that the service is running and restarts it if necessary, all things you would otherwise have to address in self-hosting.

In the case of WAS-hosted services you can spare yourself the above; endpoint protocol bindings are in fact defined in IIS, and the subject's exception message indicates that the net.pipe protocol is not enabled in the advanced settings of the IIS application. Most probably only the http protocol is enabled, which is the default. Just add net.pipe to the comma-separated list and try to browse the service.
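For a WAS-hosted application the enabled protocols end up in applicationHost.config. A sketch of what the relevant fragment looks like once net.pipe has been added (the application path and physical path are placeholders):

```xml
<!-- applicationHost.config: net.pipe added to the enabled protocols of the application -->
<application path="/MyService" enabledProtocols="http,net.pipe">
  <virtualDirectory path="/" physicalPath="C:\inetpub\wwwroot\MyService" />
</application>
```

Alternatively, appcmd.exe can set the same list: appcmd set app "Default Web Site/MyService" /enabledProtocols:http,net.pipe. Keep in mind that the net.pipe listener adapter also has to be installed and running (the WCF Non-HTTP Activation feature of Windows).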


Server Name Indication in IIS 8

Server Name Indication (SNI)

With Windows Server 2012, Microsoft's Internet Information Services 8.0 supports Server Name Indication (SNI). SNI is an extension to the TLS protocol that allows the web server to host multiple virtual domains in combination with HTTPS.

Although you could register different host names for different web sites in IIS before version 8, things got tricky if you needed to secure them with SSL/TLS. Actually it was impossible, because the web server cannot extract the host name from the HTTPS request header: the packet is already encrypted on the transport layer before it arrives at the HTTPS stack of the web server. Routing HTTPS requests to virtual domains was only possible by assigning different IP addresses to the web sites.

SNI fixes this problem by extending TLS so that the client sends the requested host name as part of the TLS negotiation. The server keeps this information in the TLS session and is later able to route the HTTPS request to the correct domain.

Server and client support

Because SNI extends the TLS negotiation, both parties, the client and the server, need to support Server Name Indication. Fortunately almost all browsers already do (the Wikipedia article about SNI provides a detailed list).

Configuration sample in IIS 8

Let’s say we have two web sites on our server, called foo.com and bar.com.


If we now add an HTTPS binding to the foo.com site, we have a new option in IIS 8 called Require Server Name Indication.


All we need to do is apply the certificates to both web sites, check the Require SNI option, and then we can access both web sites using HTTPS on the same server. Magic.
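Behind the UI checkbox, IIS stores the SNI flag on the binding itself. In applicationHost.config, a binding with Require Server Name Indication enabled looks roughly like this (using the foo.com site from the example; sslFlags="1" means SNI is required):

```xml
<site name="foo.com">
  <bindings>
    <!-- sslFlags="1" marks this HTTPS binding as requiring Server Name Indication -->
    <binding protocol="https" bindingInformation="*:443:foo.com" sslFlags="1" />
  </bindings>
</site>
```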


DSMX Data Relation to consume RSS 2.0 feeds 1/2

A data relation that consumes an RSS 2.0 XML data stream is pretty straightforward to implement. All it needs is basically a URL pointing to the RSS feed provider and a .NET WebClient object that downloads the XML from the server and deserializes it into a data relation table.

But let's start by implementing the data relation. To do so, create a new class library project in Visual Studio; it doesn't matter whether you choose C# or VB.NET, I'm using VB.

Add a reference to the DataRelationInterfaces.dll and create a new class, named RssFeedRelation, which implements IDataRelation.

GetMetaData

The GetMetaData method is called by DSMX to get metadata information from the relation. This is where we add two tables: one that will provide basic information like link, description and name of the RSS feed, and a second one that contains a list of all available RSS feed items.

We also want to add two parameters, one for the server URL and another for the encoding. Because in most cases the encoding is utf-8, we set this as the default. I added a default for the feed URL as well.

Public Sub GetMetaData(ByVal accountID As Integer, ByVal Language As String, ByVal MetaData As IhtDataProviderMetaData) Implements IDataRelation.GetMetaData
     MetaData.AddTable(RssFeedPropertiesTable)
     MetaData.AddTable(RssFeedTable)
     MetaData.AddParameter("Encoding", "RSS Feed XML content encoding", "utf-8", False)
     MetaData.AddParameter("RssFeedUrl", "Url pointing to the RSS feed XML.", "http://support.directsmile.de/support/rss.aspx", False)
End Sub

LoadData

Implementing the LoadData method that is actually called by DSMX to get the feed data is also no black magic at all.

Public Sub LoadData(ByVal table As IhtDataTable) Implements IDataRelation.LoadData
    Dim url As String = table.GetParameterValue("RssFeedUrl")
    Dim enc As String = table.GetParameterValue("Encoding")
    If table.Query.TableName = RssFeedTable Then
        AddFeedItemTableFields(table)
        Dim itms = LoadFeedItems(url, enc)
        If Not itms Is Nothing AndAlso itms.Count > 0 Then
            Dim pagedResult As IEnumerable(Of RssItem) = GetPagedResultSet(table, itms)
            AddFeedItems(table, pagedResult)
        End If
    ElseIf table.Query.TableName = RssFeedPropertiesTable Then
        AddFeedPropertiesTableFields(table)
        Dim itms = LoadFeedItems(url, enc)
        If Not itms Is Nothing AndAlso itms.Count > 0 Then
            AddFeedProperties(table, itms)
        End If
    Else
        Throw New ArgumentException("No valid table name found.")
    End If
End Sub

The method does two things: first it checks which table was requested by DSMX, then it contacts the feed server, downloads the feed XML and returns either the list of items or just the feed header.

LoadFeedItems

The LoadFeedItems method instantiates a WebClient and downloads the XML.

Public Function LoadFeedItems(url As String, enc As String) As IEnumerable(Of RssItem)
        If String.IsNullOrEmpty(url) Then
            Throw New ArgumentException("RSS feed URL must have a value.")
        End If
        Dim wc As New WebClient()
        wc.Encoding = Encoding.GetEncoding(enc)
        Dim result = wc.DownloadString(New Uri(url))
        If String.IsNullOrEmpty(result) Then
            Throw New ArgumentException("RSS feed URL returned no valid RSS data [" & url & "]")
        End If
        Dim rssFeedItems = New RssItems
        rssFeedItems.Deserialize(result)
        Return rssFeedItems
End Function

If the download was successful we can deserialize the XML data into a CLR object, which is very convenient using the XML support in VB.NET.

Public Sub Deserialize(xml As String)
        If String.IsNullOrEmpty(xml) Then
Throw New ArgumentException("Feed xml contains no data.")
        End If
        Dim doc As XDocument = XDocument.Parse(xml)
        If doc Is Nothing Then
            Throw New Exception("Failed to parse rss feed xml.")
        End If
        Title = doc...<channel>.<title>.Value
        Link = doc...<channel>.<link>.Value
        Description = doc...<channel>.<description>.Value
        For Each elem In From element In doc...<channel>.<item>
            If elem.Name = "item" Then
                Add(New RssItem With {.Author = elem.<author>.Value,
                                      .Description = elem.<description>.Value,
                                      .Link = elem.<link>.Value,
                                      .PubDate = TryConvertToDate(elem.<pubDate>.Value),
                                      .Title = elem.<title>.Value,
                                      .EnclosureUrl = elem.<enclosure>.@url,
                                      .EnclosureType = elem.<enclosure>.@type})
            End If
        Next
End Sub

Conclusion

Basically that's all it takes to consume RSS feeds. In part 2 of this short series I will create a little mobile RSS feed reader to show you how easy it is to integrate this data relation into a cross media server.

Have fun

Oliver

Debugging WCF services

Introduction

With the Windows Communication Foundation (WCF), Microsoft established a sophisticated way to communicate between different components and devices in a service-oriented world. A fundamental effort was made to create a model that does not follow a monolithic approach to communication between two applications, but a new, flexible and configurable form covering how modern applications initiate communication and what kind of more or less secure channels they use to transmit data. Those different bindings and behaviors are often based on industry standards and need to be interoperable. That adds complexity, unfortunately.

WCF exceptions, faults and security

An important aspect of WCF messaging is security. If you expose the actual error, exception or fault message to the client, it's like announcing the reason for the problem to the rest of the world; you could just as well tweet the error message. Okay, this could be embarrassing for the developer, but what else, you might ask? The actual problem is that those error messages can reveal sensitive information about the server system. Imagine the exception message included the connection string to the database server used in the backend.

That is the reason why WCF is so sparing with information when you use the default settings. I hope the following list helps you out if you are struggling with WCF services. The list escalates from easy things you can do to configuration changes that optimize tracing.

Use Fiddler

The correct HTTP status code is important. If you see that Fiddler reports a 500, then you know that the service is generally accessible but for some reason not working. Reasons for that might be a corrupt installation or out-of-date files in the C:\Windows\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files folder.

If Fiddler reports a 404, then maybe the WCF components are not installed or not correctly registered in IIS. Run ServiceModelReg.exe from C:\Windows\Microsoft.NET\Framework\v3.0\Windows Communication Foundation.

Check date/time on server and client if services are SSL/TLS secured

Before I start, I search for differences in date, time and daylight saving time settings between server and client. The client and server time are taken independently and mapped to UTC, but if they differ by more than 5 minutes, which is the WCF default, the SSL session key cannot be negotiated and securing the communication with https is not possible. Even if time and date are in sync on server and client, if one machine is set to use daylight saving time and the other is not, communication will fail!
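A quick way to check the clock offset between two machines is the built-in w32tm tool on Windows (the target host name below is a placeholder):

```
:: Compare the local clock against a remote machine, printing only the offsets
w32tm /stripchart /computer:server.example.com /samples:3 /dataonly
```

If the reported offset is larger than a few minutes, fix the time synchronization before digging any deeper into WCF settings.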

Check Eventlog

The first thing I always do on the server is look into the Windows Eventlog, especially the Application Eventlog. Unfortunately, in the case of WCF you possibly won't find any entry if WCF is configured with default settings. But it's worth a look anyway.

Service call test on Server

Try the service URL in a browser on the server. Often you get a different error message, providing more detail on the server than on the client. If you receive a service activation error, then possibly the files in C:\Windows\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files are out of date. Stop IIS, delete the subfolder named after the application you are interested in and restart IIS. The folder will be recreated and fresh assemblies are compiled. Usually that fixes the issue.

Adding a ServiceSecurityAudit element to the service behaviour

WCF typically adds three sections to the app.config or web.config of the application hosting the service: bindings, behaviors and services. For us the behaviors are of interest, because we need to manipulate the service behavior to get more information.

First we search for the behavior that is responsible for the service. Let's say we have a problem in the borderservice; then the behavior definition looks like this:

<behavior name="MyServiceBehavior"> 
<serviceMetadata httpGetEnabled="true"/> 
<serviceDebug includeExceptionDetailInFaults="false"/> 
<serviceSecurityAudit auditLogLocation="Application" serviceAuthorizationAuditLevel="Failure" messageAuthenticationAuditLevel="Failure" suppressAuditFailure="true" /> 
</behavior>

The serviceSecurityAudit line is what we need to add to the behavior. It tells WCF to write error messages to the Windows Application Eventlog. That doesn't change the information the client receives, but you can now log on to the server and open the Eventlog.

Enable WCF tracing and diagnostics

Okay, if you are still struggling, you can enable tracing for WCF, which logs literally every action to a file. Really, any action, as long as it's WCF specific. What you need to do is add a diagnostics section to the system.serviceModel section in the application config file.

<diagnostics performanceCounters="All" wmiProviderEnabled="true"> 
<messageLogging logEntireMessage="true" logMalformedMessages="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" maxMessagesToLog="100000" /> 
</diagnostics>

And you need to add a system.diagnostics section to the configuration section of the application config file, like this:

<system.diagnostics>
  <sharedListeners>
    <add name="sharedListener" type="System.Diagnostics.XmlWriterTraceListener" initializeData="c:\temp\service.svclog" />
  </sharedListeners>
  <sources>
    <source name="System.ServiceModel" switchValue="Verbose, ActivityTracing">
      <listeners>
        <add name="sharedListener" />
      </listeners>
    </source>
    <source name="System.ServiceModel.MessageLogging" switchValue="Verbose">
      <listeners>
        <add name="sharedListener" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>

Finally, the initializeData attribute of the trace listener tells it where to store the log file (in the sample above it's c:\temp\service.svclog).

If you now run the application and reproduce the error, you can open the log file using the Service Trace Viewer tool (SvcTraceViewer.exe, which you can get from the Windows SDK) and search for the concrete error.

UPDATE! Other pitfalls you might fall into

Keyset does not exist
On the server you might stumble over the following error message: Keyset does not exist. This indicates that the application pool identity that is accessing the certificate to secure the web application or WCF service is not allowed to access the private key of the certificate.

To allow the app-pool identity to access the private key, open the certificate store and navigate to Local Computer/Personal/Certificates and right-click the certificate. From the context menu select All Tasks/Manage Private Keys… In the next dialog you can set the permissions by first adding the user to the ACL and then granting the user read permission.
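The same can be scripted: machine certificates keep their private keys as files under C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys, and icacls can grant read access. The key file name and app-pool name below are placeholders:

```
:: Grant the app-pool identity read access to the private key file (names are placeholders)
icacls "C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys\<keyfile>" /grant "IIS AppPool\MyAppPool":R
```

Finding the right key file for a certificate takes a little digging (the file name matches the certificate's key container name), so for a one-off fix the Manage Private Keys dialog is usually quicker.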

Missing behaviorConfiguration section in client config
If the WCF service requires client authentication with a certificate, don't forget to add an endpointBehavior to the client config that specifies which certificate to send to the server.

   <behaviors>
        <endpointBehaviors>
          <behavior name="ClientCertEndpointBehavior">
            <clientCredentials>
              <clientCertificate storeLocation="CurrentUser"  storeName="My" x509FindType="FindBySubjectName" findValue="YOUR CERTIFICATE" />
            </clientCredentials>
          </behavior>
        </endpointBehaviors>
      </behaviors>

Here I add a new endpointBehavior to the endpointBehaviors in the system.serviceModel section of the client config file. I named this endpointBehavior ClientCertEndpointBehavior; the name is important because the endpointBehavior needs to be assigned in the client/endpoint definition:

      <client>
<endpoint address="whateverService.svc" binding="whateverBinding" bindingConfiguration="whateverConfiguration" contract="whateverContract" name="whateverName"
        behaviorConfiguration="ClientCertEndpointBehavior" />
      </client>

have fun!

Continuous deployment of DirectSmile products

We at DirectSmile love our products

Because we love our products, everyone on our team wants to get the latest version installed to benefit from the latest changes or simply to play around with new features.

This is great! But it means a lot of maintenance, doing all those setups and configurations in the morning when there's a new nightly build available.

From continuous builds to continuous deployments

Generally, an installation is a process that follows quite simple rules. You provide all the necessary information the installer file needs, beginning with the target directory, database connection strings or IIS website names, and the Windows Installer service does the rest.

The good thing is that this information normally doesn't change when you do a software update. All installers support a parameter system that allows you to pass those settings as arguments on the installer command line.
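For a Windows Installer package that looks roughly like the following; the /i and /qn switches are standard msiexec options, while the property names here are made up and depend on the particular installer:

```
:: Silent install with public properties passed on the command line (property names are examples)
msiexec /i Product.msi /qn TARGETDIR="C:\Program Files\Product" WEBSITE="Default Web Site"
```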

That’s a good start for the DirectSmile Installation Service. The general purpose of this service is to run installations and uninstallation of a DirectSmile product in an automated way.

In development we coupled the DirectSmile Installation Service with Jenkins, the continuous integration server we are using. Using this for a while now enables us to do builds on each check-in and run tests immediately on staging servers, to which even the new installer is deployed.

Application architecture

The DirectSmile Installation Service application consists of two parts: a server component, which is installed on the machine that should be automatically maintained, and a client component that holds all the client-specific configuration data and drives the installation service remotely.

The client and the server communicate through an SSL-secured tunnel. The communication can only be established if the client identifies the server and (much more important) if the server identifies the client. This is covered by client and server certificates.

The installation file itself can be downloaded by the service either from a trusted http download server or from the local file system.


Figure 1. DirectSmile Installation Service

While the installation service is a Windows service component, the client is a Windows command line application.

Installation remote commands

The client comes with a few commands that initiate an installation or uninstallation, or simply show you the currently installed version of a product. Here's a short description of the commands:

Usage: DSMInstallationClient <version|install|uninstall|installfont|getlog|backup|help>
Params:   [/url:<http://downloadurl>]
          [/productCode:<{GUID}>]
          [/processesToKill:<Process1;Process2>]
          [/servicename:<servicename>]
          [/source:<src path to backup>]
          [/destination:<backup dest path>]
          [/endpoint:<service endpoint url>]

About ProductCodes

Usually an installation product code is a GUID. To make it more convenient to deal with product codes, we created shortcuts for the DirectSmile products, so you can use a product name string instead.
Those are: dsmi, dsmg, dsmx, dsmstore, smartstream.

Version

Here's an example of how to retrieve the version number of DSMX remotely:

version /ProductCode=dsmx

 

Installation

 

Running an installation is quite easy. All you need to provide is the URL where the installation file can be downloaded and all parameters that are needed by the installation process.

install /url=http://... /WEBSITES="c:\inetpub\wwwroot\dsmstore" /watchdog=yes

This command would download the installation file for the DirectSmile Card & Giftshop and execute the installation. While this installation is running the DirectSmile Watchdog would be stopped.

Uninstallation

For an uninstallation the product code is needed, but we can use our product name shortcuts here again. You can also pass a list of processes and services that should be stopped before the uninstallation.

uninstall /ProductCode=dsmi
/ProcessesToKill="DirectSmile Generator;DSMWatchDog"
/ServiceName="DSMOnlineBackend"

This command would uninstall DSMI, but first it would stop all running ProductionServer instances and the DirectSmile Watchdog. It would also stop the DSMOnlineBackend service.

 

Doing a backup of a directory remotely

The installation service can back up a directory. The directory will be zipped automatically and placed in the destination directory you name.

backup /source="c:\inetpub\wwwroot\dsmstore" /destination="c:\temp"

The command would create a backup of the web application directory of the DirectSmile Shop and copy the zipped archive to c:\temp.

Endpoint parameter

The client comes with a config file that includes a default endpoint URL for the installation service. Usually you don't want to change that, but if you have several products installed on different machines, you might want to use the same client to handle all of those installations. In this case you can pass the service endpoint URL as an argument in the installation client call.

DSMInstallationClient version /ProductCode=dsmx /endpoint="https://<SERVER>/DSMInstallationService.svc"

This sample would retrieve the DSMX product version from a specific service running on a server called <server>.

Log files

The DirectSmile Installation Service writes a log file. This log file can be downloaded to the client.

getlog > filename.log

This command would download the log file from the server and store it locally on the client machine.

A little round trip using TDD (Part 2)

In the first part of this series we saw how to create tests and use those tests while implementing production code. We also saw that writing tests doesn't have to be an inefficient approach at all.

Next, I would like to move along and use the code in the test methods as a basis for XML comment based documentation of the production code.

XML Documentation

VB.NET allows code documentation in XML in several places. You can add XML comments to classes, methods, properties and more; on a WCF contract, for example, the comments sit on top of the method signature.


What I did is simply copy the test code from the test method into the code XML element. BTW, I set the language in the lang attribute to VB.NET, which is later recognized by the help file viewer, so syntax highlighting for VB.NET is supported.
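A sketch of what such a comment block might look like, reusing the AddMail method from part 1 of this series (the exact wording of the summary is made up):

```vbnet
''' <summary>
''' Queues a mail object and returns the id of the stored queue item.
''' </summary>
''' <example>
''' <code lang="VB.NET">
''' Dim mBll As New MailBLL
''' Dim result = mBll.AddMail(accountId, subject, body, recipients)
''' </code>
''' </example>
Function AddMail(accountId As Long, subject As String, body As String, recipients As String) As Long
```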

Read more about Documenting your Code with XML on MSDN here.
And the recommended XML Tags for Documentation Comments here.

Sandcastle

As mentioned earlier, I'm using the Sandcastle documentation engine to compile help files based on the XML documentation in the project.


Sandcastle has quite a bunch of settings, but it comes with a very handy UI application. And if all settings are done, you generally don’t have to touch the project file again.

I usually add an msbuild task to the post-build event and run the compilation of the help project every time I do a Release build.

Or, you simply call msbuild from the command line to compile the Sandcastle project file, like

C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe /p:Configuration=Release <documentation-project-file>.shfbproj

I recommend compiling the help file only when you do a Release build, because it can take some minutes, depending on the number of classes contained in the assemblies you want documented.

Once the documentation is compiled you get a typical Windows compiled help file (*.chm) or an MSDN-like documentation website.

Read more about Sandcastle – Documentation Compiler for Managed Classes here.

Have fun!

VB.Net secretly in love with XML

I know this is an old topic, but it always fascinates me. Actually, I don't care which language in the .NET universe you use. Whether it's C# or VB.NET doesn't matter in the end; both languages are just two different dialects and are compiled to the same IL code anyway. But there is one thing that I really like in VB.NET, and that is the brilliant integration of XML. This goes out to all C# coders: have a look at how VB.NET deals with inline XML, thanks to XML literals, which the compiler translates into LINQ to XML code in the background.

Here's a simple example of how an XML document is created. The function's only purpose is to create an XML element, fill it by adding some child elements, and return the element.

    Public Function GetReport() As XElement

        Dim xml = <DSMIBenchmarkReport>
                      <Created><%= Now.ToString %></Created>
                      <StressTestMode><%= StresstestMode.ToString %></StressTestMode>
                      <Threads><%= My.Settings.ThreadCount %></Threads>
                      <IterationsPerThread><%= My.Settings.IterationsPerThread %></IterationsPerThread>
                      <Started><%= TRs.Started %></Started>
                      <Finished><%= TRs.Finished %></Finished>
                    </DSMIBenchmarkReport>

        Return xml
    End Function

If you are familiar with ASP.NET this might ring a bell. It's easy to add elements, and assigning values to them by putting types and properties between <%= and %> is child's play. The VB compiler translates the XML literal and its embedded expressions into LINQ to XML code, so the XML tree is built from ordinary XElement calls at runtime.
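Roughly speaking, the literal from above is equivalent to constructor calls like these (a simplified sketch of a few of the elements, not the exact compiler output):

```vbnet
' What the compiler builds behind the scenes: plain LINQ to XML object construction
Dim xml = New XElement("DSMIBenchmarkReport",
                       New XElement("Created", Now.ToString),
                       New XElement("StressTestMode", StresstestMode.ToString),
                       New XElement("Threads", My.Settings.ThreadCount))
```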

You can even iterate through collections, like shown in the following example.

Public Function GetReport() As XElement

        Dim xml = <DSMIBenchmarkReport>
                      <Created><%= Now.ToString %></Created>
                      <StressTestMode><%= StresstestMode.ToString %></StressTestMode>
                      <Threads><%= My.Settings.ThreadCount %></Threads>
                      <IterationsPerThread><%= My.Settings.IterationsPerThread %></IterationsPerThread>
                      <Started><%= TRs.Started %></Started>
                      <Finished><%= TRs.Finished %></Finished>
                      <Details>
                          <%= From ex In TRs Where Not String.IsNullOrEmpty(ex.ExceptionMessage) Select (New XElement("Exception", New XAttribute("Message", ex.ExceptionMessage))) %>
                      </Details>
                  </DSMIBenchmarkReport>

        Return xml
    End Function

Here we iterate through a collection of possible exceptions, but add an element to the Details parent only if the exception text is not empty.

In case you have some extra logic to decide whether to add an element or not, you can move that code into a separate function, as shown in the next example.

Public Function GetReport() As XElement

        Dim xml = <DSMIBenchmarkReport>
                      <%= GetRandomMode() %>
                      <Created><%= Now.ToString %></Created>
                      <StressTestMode><%= StresstestMode.ToString %></StressTestMode>
                      <Threads><%= My.Settings.ThreadCount %></Threads>
                      <IterationsPerThread><%= My.Settings.IterationsPerThread %></IterationsPerThread>
                      <Started><%= TRs.Started %></Started>
                      <Finished><%= TRs.Finished %></Finished>
                      <Details>
                          <%= From ex In TRs Where Not String.IsNullOrEmpty(ex.ExceptionMessage) Select (New XElement("Exception", New XAttribute("Message", ex.ExceptionMessage))) %>
                      </Details>
                  </DSMIBenchmarkReport>

        Return xml
    End Function

    Private Function GetRandomMode() As XElement
        Dim randomMode As XElement = New XElement("RandomMode")

        If My.Settings.CounterStartValue > 0 Then
            randomMode.Add(New XAttribute("Mode", "Counter"), New XAttribute("StartValue", My.Settings.CounterStartValue))
        Else
            randomMode.Add(New XAttribute("Mode", "Characters"), New XAttribute("Characters", My.Settings.RandomCharacters))
        End If

        Return randomMode
    End Function

Here we are adding an extra element called RandomMode, but how this new element looks is defined in the function we call, GetRandomMode. Because the function returns an XElement, we can simply make the call between <%= and %> in the XML above.

Have fun!

A little round trip using TDD (Part 1)

Introduction

For a while now I have been using unit tests more and more often in my projects. The main reason why I think unit tests are necessary is that they provide an extra safeguard against breaking anything while refactoring code. Especially if you are maintaining an API, it can be quite relaxing to see all your unit tests succeed after you have made a lot of changes to your code base.

But accepting unit tests and doing Test Driven Development are still two different pairs of shoes. While the classic approach would be to write a class top down, implement all the logic, and afterwards write some test code, in TDD you write the tests first. You actually begin by writing a test class, writing the first test method that calls the method to be tested later. The tested method exists only as a stub in the beginning.

Honestly, I thought WTF? How can such an approach be efficient? I have to write every method twice just to get a good feeling. That looks like a total waste of time and money.

On the other hand, what makes TDD very efficient in the long run is the combination of well tested, well structured and well documented code. Especially documentation and specification are a very powerful aspect of doing TDD and, in my eyes, the main reason to do TDD at all.

With those arguments in mind I made last Monday a full TDD day. Isn't the beginning of a new year the best time to try out one paradigm or another?

With this blog post I start a small series of probably three parts where I walk through my experiences with TDD, starting with a small test method, implementing the real method, and finally rendering automated documentation while building the project. I hope this will be fun for you.

Tools

I'm using VS 2010 and MS unit tests. I run my tests using the ReSharper test runner, just because I like it. I also use Sandcastle Help File Builder to render the API help files based on the XML code documentation.

A simple unit test

Let's assume you need to write a function that queues a mail object in a container. The method does some validity checks, and if those pass, the function returns the id of the stored queue item. If one of the validity checks fails or the item can't be queued, the function throws an exception.

Our mail object consists of a few required fields, like subject, body and recipient, and probably a foreign key to the user object that initiated the email.

With this information together I can write a test method like this:

<TestMethod()>    
Public Sub Successful_AddEmail()        
   Dim accountId As Long = 1        
   Dim subject As String = "Test Email"        
   Dim body As String = "Hi there, this is a test mail..."        
   Dim recipients As String = "oliver.dehne@directsmile.com"        
   Dim mBll As New MailBLL         

   Dim result = mBll.AddMail(accountId, subject, body, recipients)

   Assert.IsTrue(result > 0)    
End Sub

First of all, the <TestMethod()> attribute marks this sub as a test method. This enables Visual Studio or ReSharper to list the method in the Test Explorer. I always begin the function name with a Successful or Fail prefix to indicate whether the parameters in the method call are all valid, in which case the method call must succeed.

The test uses the AAA (Arrange, Act, Assert) principle. First we arrange the test scenario by creating the parameters and types we need to run the test. Then we make the actual call (just a single call, because we only test one unit at a time), and finally we do our assertions on the result.

To verify that the validity checks work correctly and throw the expected exception, we write a second unit test, but this time we initialize a parameter with an invalid value in the arrange section. In the sample below we pass an empty subject. This is invalid, because our API does not accept mail objects without subjects.

<TestMethod()>
<ExpectedException(GetType(SaaSException))>
Public Sub Fail_AddEmail_With_Missing_Subject()
    Dim accountId As Long = 1
    Dim body As String = "Hi there, this is a test mail..."
    Dim recipients As String = "oliver.dehne@directsmile.com"
    Dim mBll As New MailBLL

    Dim result = mBll.AddMail(accountId, "", body, recipients)

    Assert.IsTrue(result = 0)
End Sub

The method looks much the same, except that we now annotate the method with the ExpectedException attribute. By passing a specific type, the test will also verify the type of the thrown exception.
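The post doesn’t show the implementation of MailBLL.AddMail, but the validity check that makes this test pass could be sketched like this. Note that the SaaSException constructor and the QueueItem helper are assumptions of mine, not part of the real API:

Public Function AddMail(accountId As Long, subject As String,
                        body As String, recipients As String) As Long
    ' Validity checks: fail fast before anything is queued
    If String.IsNullOrEmpty(subject) Then
        Throw New SaaSException("You must provide a subject to send an email.")
    End If
    If String.IsNullOrEmpty(recipients) Then
        Throw New SaaSException("You must provide at least one recipient.")
    End If

    ' Queue the item and return the Id of the stored queue item
    ' (QueueItem is a hypothetical helper)
    Return QueueItem(accountId, subject, body, recipients)
End Function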

As you can see, the test was successful: the empty subject caused an exception, as expected.

[Image: test runner showing the passed test]

And in the test console we find the correct exception message.

Error: 0 : 14.01.2012 16:48:34 – [MailBLL::AddMail]: You must provide a subject to send an email.

A quick look at the code, especially the first test method, shows that we have already written a perfect API client code example. Later in this series I will show you how we can take advantage of this example code in the XML-based documentation.

Have fun.

Best kept secret in Visual Studio, or how to generate a Windows service installer class

When writing a Windows service application I always get stuck at the same point, and that is when I need to add an installer class for the service. With the installer class you can configure the behavior of a Windows service, like the startup type and the service name shown in the services list.

Although this is quite a helpful class, I’m always lost when the moment comes to add it to my project. OK, I could implement the service installer class from scratch and derive from Installer, but something in the back of my head reminds me that there is a way to let VS generate the class for you. Unfortunately, it’s nearly impossible to find the function in the menus VS provides. And the place is so well hidden that it is impossible to remember where you found it once you have used it.

Here is where you can find it:

Click on the service implementation class so that the [Design] page comes up. Then right-click somewhere in the gray area of the screen and choose “Add Installer” from the context menu.

[Image: “Add Installer” in the designer context menu]

Maybe this makes sense in some way for at least someone. I find it absolutely unintuitive, and it’s definitely the last place I’d search. Always.
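For reference, the class VS generates looks roughly like the following sketch. The class name ProjectInstaller is the Visual Studio default; the service name “MyService” and the startup settings are placeholders you would adapt:

Imports System.ComponentModel
Imports System.Configuration.Install
Imports System.ServiceProcess

<RunInstaller(True)>
Public Class ProjectInstaller
    Inherits Installer

    Public Sub New()
        ' Account the service process runs under
        Dim processInstaller As New ServiceProcessInstaller()
        processInstaller.Account = ServiceAccount.LocalSystem

        ' Name and startup type shown in the services list
        Dim serviceInstaller As New ServiceInstaller()
        serviceInstaller.ServiceName = "MyService"
        serviceInstaller.StartType = ServiceStartMode.Automatic

        Installers.Add(processInstaller)
        Installers.Add(serviceInstaller)
    End Sub
End Class

The <RunInstaller(True)> attribute is what makes installutil.exe pick up the class during installation.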

Setup tracing in ASP.NET

Logging and tracing are essential to your application. With the .NET Framework, developers have quite a handy tool to automatically trace and log data about the current state of an application in a production environment. Although this is not new, the infrastructure has been available since the first version of .NET, it is definitely worth talking about because of the flexibility tracing offers and the easy way it is configured.

Another important point, and in my eyes something that is often forgotten by developers, is that the configuration can be done by the IT department, almost independently of the developer. In my experience many developers underestimate the IT perspective. Although developers provide logging and maybe tracing as well, they do it in their own way, by logging into a database for instance. The disadvantage is obvious: those logs are difficult to integrate into the monitoring tools of an IT department. The configuration includes the source that is to be logged, when it is to be logged and finally where it should be logged to. In some cases it could make sense to log into the event log, in other cases it won’t, but in the end it should be a decision made by the IT staff.

Goal

In the following web application I used three different trace listeners. Because this application is installed on the web, it was necessary for me to be able to follow the complete trace of every single request while it passes through all methods of the application. With event type filters you can configure the verbosity of the different trace listeners. For example, you want a critical exception logged to the Windows event log, while application tracing information should be logged to a file only while you are reproducing a specific error.

<system.diagnostics>
   <sharedListeners>
     <add name="FileLog" type="Microsoft.VisualBasic.Logging.FileLogTraceListener, Microsoft.VisualBasic, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a, processorArchitecture=MSIL"  
          initializeData="FileLogWriter" location="Custom" customLocation="e:\dsmtemp\" traceOutputOptions="DateTime" />
     <add name="CustomDbLogger" type="dbLib.CustomTraceListener, dbLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=a8a57c651bc37c00">
          <filter type="System.Diagnostics.EventTypeFilter" initializeData="Warning" />      
     </add>
     <add name="EventLog" type="System.Diagnostics.EventLogTraceListener" initializeData="MyApplication">        
          <filter type="System.Diagnostics.EventTypeFilter" initializeData="Error" />
     </add>
     </sharedListeners>
   <sources>
      <source name="DefaultSource" switchName="TraceSwitch" switchType="System.Diagnostics.SourceSwitch">        
          <listeners>          
             <add name="FileLog"/>          
             <add name="EventLog"/>          
             <add name="CustomDbLogger"/>        
         </listeners>      
      </source>    
   </sources>    
   <switches>      
      <add name="TraceSwitch" value="Verbose" />    
    </switches>  
</system.diagnostics>

The code above shows a configuration example for three different trace listeners.

First, I added a file trace listener that captures all trace events and stores them in a file. I also configured the listener to automatically append a timestamp to each line.

The second listener is a custom trace listener that can write specific trace data to a database.

And finally, I added an event log trace listener. All these listeners are referenced by name in the sources collection.

By adding filters you can control when the listeners take effect. A filter takes a string representing the event type, in escalating order of severity:

  • Verbose
  • Information
  • Warning
  • Error
  • Critical

The enum has some more values, like Start and Stop, which are typically used by WCF tracing.

The configuration above specifies that the CustomDbLogger should only log to the database if the event type is Warning or above, which includes all Error and Critical events, while only events of type Error and above are written to the Windows event log.

I use the file logging for method tracing. It is controlled by the general TraceSwitch. In this example it is set to Verbose, which logs everything that is available. It is recommended to set this to Information, or even Warning, in production and only switch to Verbose while you are investigating a concrete error or misbehavior.

Tracing in code

With the configuration in the application config file in place, our work is almost done. All we need to do in code is write a new log entry.

   My.Application.Log.WriteEntry(lr.ToString, TraceEventType.Verbose)
   My.Application.Log.DefaultFileLogWriter.Flush()

Our three trace listeners can now apply their event filters and check whether it’s necessary to persist the trace data or to ignore it.

By calling Flush we ensure that the trace line is written immediately to the log file. Otherwise the trace listener would queue it until a specific flush event occurs (which can be configured as well) or until the application is recycled.
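Instead of calling Flush manually, you can also let the framework flush after every write. A minimal sketch of that setting in the config file (keep in mind that flushing on every write costs some performance):

<system.diagnostics>
   <trace autoflush="true" />
</system.diagnostics>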

Custom TraceListener

Writing a custom TraceListener is easy. You need to inherit from TraceListener and override the Write and WriteLine methods (both are MustOverride in the base class).

Public Class CustomDbTraceListener
    Inherits TraceListener

    Public Sub New()
        MyBase.New()
    End Sub

    Public Overrides Sub Write(message As String)
        'TODO: Implement your logging logic here
    End Sub

    Public Overrides Sub WriteLine(message As String)
        'TODO: Implement your logging logic here
    End Sub
End Class
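As a quick sanity check, you can also register the listener in code instead of in the config file. This is just a hypothetical sketch; in production you would keep the registration in system.diagnostics as shown above, so the IT staff can change it:

' Hypothetical: register the custom listener at runtime
Dim listener As New CustomDbTraceListener()
Trace.Listeners.Add(listener)

' This line is dispatched to CustomDbTraceListener.WriteLine
Trace.WriteLine("Mail 4711 queued successfully")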