Using an F# DSL to generate C# code

My current project is for a company called HDMP (the website is in Dutch / French). We make software for medical practitioners. I am working on a service that communicates with web services created by the Belgian government. The services are quite old, as evidenced by the fact that they expect as input a flat file with specific record formats, the so-called eFact format. I am parsing this file using the excellent FileHelpers package, which allows describing fixed-length file formats. This article is only about the code generation, not about the use of the FileHelpers library.

In the eFact file there are over 10 different record formats, all with a fixed length of 370 bytes. Fields in a record are positional. To describe this I created a more readable Domain Specific Language (DSL) in F#. In this article I will demonstrate how this works. The code for this article can be found on GitHub: GVerelst/CodeGen – F# DSL for C# code generation.

Prerequisites

  • You’ll need a little bit of C# knowledge to follow this post. In particular, I will show how to create a couple of C# classes, with some (custom) attributes. But in the end the F# program will just generate some text that happens to be C# code.
  • F# knowledge will help, but I will explain most of what I’m doing in this post.

The Problem

The code that is needed for FileHelpers to work with fixed records looks like this:

    // //////////////////////////////////////
    // FileInfoBase
    // //////////////////////////////////////

    [FixedLengthRecord()]
    public partial class FileInfoBase
    {
        // Segment segment200
        [EFactMetadata("200", "6N", "1-6", "Naam van het bericht", "Nom du message"), FieldFixedLength(6), FieldAlign(AlignMode.Right, '0')] public int MessageName { get; set; } = 920000; // 920000|920900|...
        [EFactMetadata("2001", "2N", "7-8", "Code fout", "Code érreur"), FieldFixedLength(2), FieldAlign(AlignMode.Right, '0')] public byte Error2001 { get; set; } = 0;
        [EFactMetadata("201", "2N", "9-10", "Versienummer formaat van het bericht", "N° version du format du message"), FieldFixedLength(2), FieldAlign(AlignMode.Right, '0')] public byte MessageVersionNumber { get; set; } // 2
        // ...
        [EFactMetadata("204", "14N", "21-34", "Referentie bericht ziekenhuis", "Reference du message"), FieldFixedLength(14), FieldTrim(TrimMode.Both)] public string InputReference { get; set; } = new string('0', 14);
        // ...
        [EFactMetadata("3091", "2N", "206-207", "Code fout", "Code érreur"), FieldFixedLength(2), FieldAlign(AlignMode.Right, '0')] public byte Error3091 { get; set; } = 0;
    }


We describe the record fields mainly using attributes:

  • The FileHelpers attributes that describe the format of the field.
  • The EFactMetadata custom attribute that gives some additional information about the field. It contains the name of the zone (e.g. "200"; remember, this is an archaic record format), the type of the zone (e.g. "6N" or "45A"), the position in the record (calculated), and the descriptions in Dutch and French. A sketch of this attribute class is shown below.
  • In addition we also give the field a data type, a name, an optional default value and an optional comment.

The documentation describes the file format using these terms. I also created a file viewer to show the contents of such a file in a user-friendly way, hence the translated descriptions. The viewer will not be on GitHub. The attributes allow the user interface to use reflection to show the file format. The documentation also uses the notion of segments to describe a block in the record formats that can be reused in similar record formats. We want to mimic this behavior as well.
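The EFactMetadata attribute class itself is not part of the code in this article. As a minimal sketch, assuming a plain attribute that simply stores the five values (the class and property names here are my own; the real implementation may differ), it could look like this:

using System;

// Hypothetical sketch of the custom attribute used on the generated properties:
// zone name, zone type (e.g. "6N"), position range, and the Dutch / French descriptions.
[AttributeUsage(AttributeTargets.Property)]
public sealed class EFactMetadataAttribute : Attribute
{
    public string Zone { get; }
    public string ZoneType { get; }
    public string Position { get; }
    public string DescriptionNl { get; }
    public string DescriptionFr { get; }

    public EFactMetadataAttribute(string zone, string zoneType, string position,
                                  string descriptionNl, string descriptionFr)
    {
        Zone = zone;
        ZoneType = zoneType;
        Position = position;
        DescriptionNl = descriptionNl;
        DescriptionFr = descriptionFr;
    }
}

With such an attribute in place, the file viewer can read the zone metadata back through reflection.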

Clearly this is a lot of error-prone code to type, and it is also not very readable because of all the clutter. If only we could represent this in a more concise and readable way, and generate the necessary code from it…

Defining the internal Domain Specific Language in F#

Looking at this, we can identify the entities we need. The first thing we need to describe is a zone (which will translate into a property in the generated class). For the first zone ("200") this can look like:

    Z "200" (N 6) "Dutch name" "French name" Int "PropertyName" "920000" "920000|920900|..."

This contains all the data we need to describe a field with all its attributes. Let’s create the Zone type:

type Zone = { zone: string; length: Length; nl: string; fr: string; datatype: Datatypes; name: string; defaultvalue: string; comments: string}

and a constructor for this type:

let Z zone length nl fr dt name dft comments =
    { zone=zone; length = length; nl= nl; fr = fr; datatype=dt; name=name; defaultvalue=dft; comments=comments }

  • zone: the zone name (being “200”, “2001”, …)
  • length: definition of the length (ex N 6 indicates 6 digits, A 5 indicates 5 characters)
  • nl: description in Dutch
  • fr: description in French
  • The rest of the fields are clear.

This is valid F# code, embedded in our project. The nice thing is that the compiler will prevent a lot of errors for us. This is what is called an "internal DSL". An external DSL, by contrast, is a separate language with its own syntax rules, which means that an interpreter for that language needs to be written as well.

There are some unknown parts in there:

Z "200" (N 6) "Naam van het bericht" "Nom du message" Int "PropertyName" "920000" "920000|920900|..."

The datatype is Int; we must describe this as well. This could have been just a string, but that doesn't allow for validation. Ideally we want the F# compiler to catch as many errors as possible before we start to generate the code. So here is the Datatypes type, a discriminated union:

type Datatypes = Bool | CRC | Byte | Short | Int | DateTime | Time | String | Money | AmbHos | Gender | Error 

Now the F# compiler will only allow these datatypes. Depending on the datatype we can generate slightly different C# code. Example:

Int generates this:

[EFactMetadata("200", "6N", "1-6", "Naam van het bericht", "Nom du message"), FieldFixedLength(6), FieldAlign(AlignMode.Right, '0')] public int MessageName { get; set; } = 920000; // 920000|920900|...

And String will generate this:

[EFactMetadata("204", "14N", "21-34", "Referentie bericht ziekenhuis", "Reference du message"), FieldFixedLength(14), FieldTrim(TrimMode.Both)] public string InputReference { get; set; } = new string('0', 14);

Of course the other datatypes generate their own versions.

Having this in place already reduces the number of hard-to-find errors in the C# code.

Z "200" (N 6) "Naam van het bericht" "Nom du message" Int "PropertyName" "920000" "920000|920900|..."

We also see a Length. The constructor (N 6) is composed of a length type and a length ("A" is alphabetic, "N" is numeric, "S" is numeric but prefixed with '+' or '-'). This will later be used in the code generation. Let's describe this:

type LengthType = A | N | S
type Length = { ltype: LengthType; length: int }

and we create 3 constructor functions:

let N x = { ltype= N; length= x }
let A x = { ltype= A; length= x }
let S x = { ltype= S; length= x }

N 6 will now return a new Length record with ltype = N, length = 6. Having these 3 little functions again allows the F# compiler to validate the code at compile time.

Recap of the definition of the zone so far:

type Datatypes = Bool | CRC | Byte | Short | Int | DateTime | Time | String | Money | AmbHos | Gender | Error 
type LengthType = A | N | S
type Length = { ltype: LengthType; length: int }
type Zone = { zone: string; length: Length; nl: string; fr: string; datatype: Datatypes; name: string; defaultvalue: string; comments: string}

let N x = { ltype= N; length= x }
let A x = { ltype= A; length= x }
let S x = { ltype= S; length= x }

let Z zone length nl fr dt name dft comments =
    { zone=zone; length = length; nl= nl; fr = fr; datatype=dt; name=name; defaultvalue=dft; comments=comments }

These 9 lines of code allow us to create zones in a concise and clear way. Let’s add semantics to this. Some zones are of the same type, and have a specific meaning. For example I defined the Recordtype function as

let Recordtype rectype zone = 
    let rt = rectype.ToString()
    Z zone (N 2) ("recordtype " + rt) ("enregistrement de type " + rt) Byte "Recordtype" rt ("Always " + rt);

Every eFact record has a specific record type, stored in the first 2 bytes of the record. These zones always have the same Dutch and French description, so I made a new function for them. The function is not there to save typing, but to give semantics to this field.

Recordtype 95 "400"

This indicates the record type zone (here for record type 95). We can also write this out in full as

Z "400" (N 2) "recordtype 95" "enregistrement de type 95" Byte "Recordtype" "95" "Always 95"

The full version is not much longer (copy/paste is your friend here), but the short version is a lot clearer about what it means. So in the same style I defined Mutuality:

let Mutuality zone =
    Z zone (N 3) "Nummer mutualiteit" "Numéro de Mutualité" Int "MutualityNumber" "" ""

And again

Mutuality "401"

indicates very clearly what we mean here. I made some more:

let Errorcode zone =
    let nzone = normalizeName zone
    Z zone (N 2) "Code fout" "Code érreur" Error ("Error" + nzone) "0" ""

let Reserved l zone =
    let nzone = normalizeName zone
    let dft = match l.ltype with
                | A -> sprintf "new string(' ', %d)" l.length 
                | N -> sprintf "new string('0', %d)" l.length 
                | S -> sprintf "'+' + new string('0', %d)" (l.length - 1)
    Z zone l "Reserve" "Reserve" String ("Reserved" + nzone) dft ""

As you can see, a reserved zone can have any of the length types A | N | S. For each case I defined the resulting default value. There is no more need to think about which attributes need to be generated, and it is clear that this zone is there as a filler, in case more zones would be needed in the future (remember, this is an archaic format).

This now gives us a (domain specific) language to describe the records, for example:

    Recordtype 95 "400"
    Errorcode "4001"
    Mutuality "401"
    Errorcode "4011" 
    Z "402" (N 12) "Nummer van verzamelfactuur" "Numéro de facture récapitulative" String "RecapInvoiceNumber" "" ""
    Errorcode "4021" 
    // ...
    Reserved (N 257) "413"

Now we have a way to describe the zones in the flat file that will be converted into properties in a C# class. Let’s extend the DSL to include classes. In the eFact documentation there are some predefined structures called segments. A segment has a name and is composed of 1 or more zones. These segments will be put together in a class. So a class is a named collection of segments, and a segment is a named collection of zones. A class can also inherit from another class, which saves some more typing. A namespace is a named collection of classes, and finally a program (I didn’t find a better name for this) is composed of namespaces, and has a filename.

Here are the definitions:

type Segment = { name: string; zones: Zone list }
type Interface = { name: string; lines: string list }
type Record = { name: string; inherits: Record option; implements: Interface list; segments: Segment list }
type Namespace = { name: string; records: Record list }
type Program = { filename: string; baseNamespace: string; namespaces: Namespace list }

Let’s define a small program

let segment200 = 
    { 
        name= "segment200"; 
        zones= 
        [
            Z "200" (N 6) "Naam van het bericht" "Nom du message" Int "MessageName" "920000" "920000|920900|..."
            Errorcode "2001" 
            Z "201" (N 2) "Versienummer formaat van het bericht" "N° version du format du message" Byte "MessageVersionNumber" "" "2"
            Errorcode "2011" 
            // ...
            Z "205" (N 14) "Referentie bericht VI" "Reference du message OA" String "ReferenceOA" "" ""
            Errorcode "2051" 
            Reserved (N 15) "206"
        ] 
    }

let segment300 = 
    { 
        name= "segment300"; 
        zones= 
        [
            Z "300a" (N 4) "Factureringsjaar" "Année de facturation" Int "YearBilled" "" ""
            Z "300b" (N 2) "Factureringsmaand" "Mois de facturation" Byte "MonthBilled" "" ""
            Errorcode "3001" 
            Z "301" (N 3) "Nummer van de verzendingen" "Numéro d'envoi" Int "RequestNr" "" ""
            Errorcode "3011" 
            Z "302" (N 8) "Datum opmaak factuur" "Date de création de facture" DateTime "Creationdate" "" ""
            Errorcode "3021" 
            // ...
            Z "309" (N 2) "Type facturering" "Type facturation" Byte "Invoicingtype" "" ""
            Errorcode "3091" 
        ] 
    }

let fileInfoBase =
   {
        name= "FileInfoBase"; 
        inherits = None;
        implements = [];
        segments= 
        [
           segment200 
           segment300
        ]
   }

let fileInfo =
    {
        name= "FileInfo"; 
        inherits = Some fileInfoBase;
        implements = [];
        segments= 
        [
            segment300a
        ]
    }

// ...
let namespaceRequests =
    {
        name="Requests";
        records=
        [
            fileInfoBase
            fileInfo
	     // ...
        ]
    }

let namespaceSettlement =
    {
        name="Settlement";
        records=
        [
	     // ...
        ]
    }

let prog = 
    { 
        filename="eFact.cs";
        baseNamespace="HdmpCloud.eHealth.eFact.Serializer.Recordformats.";
        namespaces = 
        [
            namespaceRequests
            namespaceSettlement
        ]
    }

As you can see, the definition of all the needed types takes about 10 lines, and is very readable:

type Datatypes = Bool | CRC | Byte | Short | Int | DateTime | Time | String | Money | AmbHos | Gender | Error 
type LengthType = A | N | S
type Length = { ltype: LengthType; length: int }
type Zone = { zone: string; length: Length; nl: string; fr: string; datatype: Datatypes; name: string; defaultvalue: string; comments: string}
type Segment = { name: string; zones: Zone list }
type Interface = { name: string; lines: string list }
type Record = { name: string; inherits: Record option; implements: Interface list; segments: Segment list }
type Namespace = { name: string; records: Record list }
type Program = { filename: string; baseNamespace: string; namespaces: Namespace list }

Then we defined some helper functions to make the definition of the zones a bit easier, and to give them semantic meaning. And now we have described the zones, segments, records, namespaces and the program. This is done in about 1000 lines of code.

Let’s generate some C#

Nice. We have described our language (DSL), and we have described what our C# classes should look like. We can compile this program, and if it succeeds we know that the program in our DSL is syntactically correct. Time to generate the code, so this becomes useful.

To start, let’s output a Zone. This will be output as a property in a C# class. Don’t mind the pos parameter yet.

let outputZone pos zone  =
    let (declaration, att3) = outputDeclaration zone
    let att1 = outputEFactMetadata zone pos
    let att2 = sprintf "FieldFixedLength(%d)" zone.length.length

    let attslist = [ att1; att2; att3 ]

    let atts = attslist |> List.reduce (fun a b -> a + ", " + b)
    let comment = if zone.comments.Length = 0 then "" else (C2 zone.comments)

    "[" + atts + "] " + declaration + (outputDefaultValue zone) + " " + comment 

As you can see, there are some helper functions here. I’ll discuss them below.

The outputZone function takes 2 parameters: pos and zone. The output is a string describing a C# property with the necessary attributes. This is the central function in the code generation. In the end the generated program will just be a list of such strings to be written into a file.

In F# a function can only be used if it was defined before the calling function. At first this is a pain, but it forces you to have a correct dependency structure. Typically this results in a list of small functions that are composed into more useful functions. Let’s look at some of the functions in “generator.fs”, which contains the code to generate the C# classes.

A very simple function to generate the string "5N" from the length value (N 5):

let outputLength (l: Length) =
    sprintf "%d%A" l.length l.ltype

Make the first character of a string uppercase:

    let captitalize (s:string) =
        if s.Length = 0 then ""
        else s.Substring(0,1).ToUpper() + s.Substring(1) 

Create the EFactMetadata attribute:

// EFactMetadata("312", "449", "352-800", "Reserve", "Reserve") 
let outputEFactMetadata zone pos =
    let nl = captitalize zone.nl
    let fr = captitalize zone.fr
    let rng = sprintf "%d-%d" pos (pos + zone.length.length - 1)
    sprintf "EFactMetadata(\"%s\", \"%s\", \"%s\", \"%s\", \"%s\")" zone.zone (outputLength zone.length) rng nl fr 

The function is straightforward thanks to the use of the small helpers.

// public string Reserve9 { get; set; } = new string(' ', 449);
let outputDeclaration zone =
    let (dt, att) = match zone.datatype with
                     | CRC -> ("byte", "FieldTrim(TrimMode.Both)")
                     | Int -> ("int", if (zone.length.ltype = LengthType.S )
                                         then sprintf "FieldConverter(typeof(SignedIntConverter), %d)" zone.length.length
                                         else "FieldAlign(AlignMode.Right, '0')")
                    // ...
                     | Gender -> ("Gender", "FieldConverter(typeof(EnumIntConverter),1)")

    (sprintf "public %s %s { get; set; }" dt zone.name, att)

// [EFactMetadata("312", "449", "352-800", "Reserve", "Reserve"), FieldFixedLength(450), FieldValueDiscarded] public string Reserve9 { get; set; } = new string(' ', 449);
let outputZone pos zone  =
    let (declaration, att3) = outputDeclaration zone
    let att1 = outputEFactMetadata zone pos
    let att2 = sprintf "FieldFixedLength(%d)" zone.length.length

    let attslist = [ att1; att2; att3 ]

    let atts = attslist |> List.reduce (fun a b -> a + ", " + b)
    let comment = if zone.comments.Length = 0 then "" else (C2 zone.comments)

    "[" + atts + "] " + declaration + (outputDefaultValue zone) + " " + comment

The first function with some logic in it: outputSegment

We want to output a segment, which is a number of zones. There will be a loop to cover all the zones, but in functional programming we avoid loops as much as possible. F# provides us with a lot of functions to handle collections.

The output we want is not just a line for each zone; given that eFact files are records with fixed-length fields, we also want to indicate the position of each field in the record. We saw before that each zone has a length, which allows us to calculate the positions. Here is some partial output of a segment:

        // Segment segment200
        [EFactMetadata("200", "6N", "1-6", "Naam van het bericht", "Nom du message"), FieldFixedLength(6), FieldAlign(AlignMode.Right, '0')] public int MessageName { get; set; } = 920000; // 920000|920900|...
        [EFactMetadata("2001", "2N", "7-8", "Code fout", "Code érreur"), FieldFixedLength(2), FieldAlign(AlignMode.Right, '0')] public byte Error2001 { get; set; } = 0;
        [EFactMetadata("201", "2N", "9-10", "Versienummer formaat van het bericht", "N° version du format du message"), FieldFixedLength(2), FieldAlign(AlignMode.Right, '0')] public byte MessageVersionNumber { get; set; } // 2
        [EFactMetadata("2011", "2N", "11-12", "Code fout", "Code érreur"), FieldFixedLength(2), FieldAlign(AlignMode.Right, '0')] public byte Error2011 { get; set; } = 0;

Notice the 3rd parameter of the EFactMetadata attribute ("1-6", "7-8", "9-10", "11-12", …). This is a running total, calculated from a start position and the lengths of the zones. Remember that the outputZone function takes a "pos" parameter; this explains why. Here is the function:

let outputSegment start (seg: Segment)  =
    let (endpos, lines) = 
        seg.zones |> List.fold (fun (pos, lines) z -> 
            let z2 = outputZone pos z
            (pos + z.length.length, z2::lines)
                                    ) (start, [])

    let zs2 = (C2 ("Segment " + seg.name)) :: (lines |> List.rev)

    (endpos, zs2)

A record is composed of one or more segments, so we take a start position and return the end position of this segment. Later the outputRecord function will use the same trick as we use here for the position in the EFactMetadata attribute.

The main loop is implemented in the List.fold function:

seg.zones |> List.fold (fun (pos, lines) z -> 
            let z2 = outputZone pos z
            (pos + z.length.length, z2::lines)
                       ) (start, [])

Taking the collection of zones as its input, List.fold will iterate over each zone and apply an accumulator function to it. The accumulator is the tuple (pos, lines), which indicates that we are accumulating 2 things at the same time: the position and the generated lines.

let z2 = outputZone pos z			// generates the line for the current position
(pos + z.length.length, z2::lines)		// returns pos plus the length of the zone and the generated line in front of all the lines that were already generated

The result is that we now have our lines with the position correctly filled, but in reverse order. This explains the following line:

let zs2 = (C2 ("Segment " + seg.name)) :: (lines |> List.rev)

If you like you can read the rest of the code on GitHub. Most of the code is straightforward from this point on.

More enhancements

One simple enhancement is this:

let C2 s = "// " + s

Now we can generate comments like C2 "Segment 200".

Errorcodes

In the efact format there are many Errorcode fields. They always look the same:

Z "2001" (N 2) "Code fout" "Code érreur" Error ("Error" + nzone) "0" ""

This is always a 2-digit field (N 2), so we can define a new function for this:

let Errorcode zone =
    let nzone = normalizeName zone
    Z zone (N 2) "Code fout" "Code érreur" Error ("Error" + nzone) "0" ""

Errorcode "2001" will now create a 2-digit zone in a descriptive way.

Reserved zones

There are also 2 types of reserved zones: numeric and alphabetic. Depending on their type they will be filled up with different values. They are the FILLERS in good old COBOL (and yes, this says something about my age).

To describe them we make another function:

let Reserved l zone =
    let nzone = normalizeName zone
    let dft = match l.ltype with
                | A -> sprintf "new string(' ', %d)" l.length 
                | N -> sprintf "new string('0', %d)" l.length 
                | S -> sprintf "'+' + new string('0', %d)" (l.length - 1)
    Z zone l "Reserve" "Reserve" String ("Reserved" + nzone) dft ""

Conclusion

Describing the data model for the classes to be generated takes about 15 lines of code. Then we defined a couple of small helper functions and some bigger functions to generate the code. The generator.fs file contains 163 lines of code. With this we can describe our program in a readable way. We also added some semantics to the code with constructor functions to describe fillers, error codes, a mutuality, … I think this is a nice demonstration of F# as a functional language.

References

F# for fun and profit (fsharpforfunandprofit.com)

FileHelpers Library

Lists – F# | Microsoft Docs


Sending notifications with Corona updates to thousands of doctors

During the Corona crisis, our (Belgian) government sends out regular updates for medical professionals. I created a small UWP app that will notify all the subscribed doctors when new information is available. The doctors can download and install this application from the Microsoft Store to see the updates pop up in a toast message.

Introduction

I am working for a company specialized in software for general practitioners. The practitioners use our software to keep data about patients, and to use a whole lot of (almost) mandatory services provided by the government. All of this is initiated from the client side.

Now we need to work in the other direction. When we receive an update (in this case typically about Corona), we must push it to all the connected doctors. This can be done using polling (every x minutes we check if there is something new), or using push notifications.

Architectural choice

For communication from the server to clients there are 2 main possibilities in Microsoft Azure: SignalR or Azure Notification Hubs. My first choice was SignalR, but it is mainly aimed at web applications (web sockets).

I decided to use Azure Notification Hubs.

  • In the free tier we can push 1 million messages per day to at most 500 active devices. Currently this is sufficient, but when more users install this app we'll upgrade to the basic tier. That allows for 200,000 devices and 10 million messages per day, for a whopping 8.43€ / month. That will do!
  • But before we can push messages to devices, we first need to create and register a UWP application. For this we have to create a developer account, and then register our application under this account. Once this is done, we can send messages from the notification hub to this application, and hence to the devices where the application is installed.

Steps to register the UWP application

These are the steps to take to set up a UWP application showing popups when a notification is sent. Below are more details for each step.

  1. If this is your first UWP application, register your company (or yourself) in the Windows Store.
  2. Register the app in the Windows Store, to obtain the necessary IDs.
  3. Create a notification hub in MS Azure, link it with the app IDs that we just registered.
  4. Create a UWP app to receive the notifications and show Toast messages to the users.
  5. Deploy this app in the Windows Store.

Registering the company in the Windows Store

  • Go to the Windows Dev Center and sign in with your Microsoft account.
  • Now click on “Windows / XBOX” and follow the steps to register yourself or your company.
  • The registration is finalized by entering your credit card data. At the moment of writing (March 2020) it costs 14€ for an individual developer, and 75€ for a company. There is a warning that registering a company can take some time (possibly weeks) because the account has to be verified. Creating a personal account seems to be immediate.

Create an app in the Windows Store

Now that we have a developer account we can create a new app in that same portal. Click on “Create a new app” to start the registration.

New app button

Enter a name for your app and click on “Check Availability”. If this is OK, you can proceed by clicking on “Reserve product name”. To be able to send notifications to this application, we need to retrieve some IDs.

 

  • In the menu on the left click on “Product Management” > “WNS/MPNS”. This opens the Push notifications page.
  • On the page there is a link "Live Services site". This link opens your Notifications Registration page in a new tab.
  • On this page you find the Application Secrets and the Package SID. Either keep this page open, or take a note of the values. We will need them when setting up the Notification hub.

On this page you can also add a logo if you want.

Create a Notification Hub

To create the notification hub, you will of course need an Azure account. If you don’t have an Azure account yet, follow the steps outlined here.

Log in to the portal to create the notification hub. The easiest way to do this is by clicking on the “hamburger menu” in the top left corner and then select “Create a resource”.

 

Type “Notification Hub” in the search box and press enter. You can review the overview and the plans. Click on “Create” to start the creation of the resource.

This takes us to the Basics tab:

The main things to fill out here are

  • Resource group. You can select an existing resource group or create a new one. I usually create a resource group per application, and per environment (ACC / PROD). I also apply naming conventions to keep everything nicely separated. The name of this resource group can be something like "myapp-rg-dev".
  • Notification Hub Namespace. Enter a unique name here. Also apply naming conventions.
  • The same for Notification Hub.
  • Don’t change the pricing tier. You can do that later, when the application outgrows the capacity.

You can now go to the “Tags” tab to enter some tags if you want, or click “Create” directly. This will create the Notification hub namespace and the Notification hub itself.

Link the UWP app to the Notification hub

Once the notification hub is created, you can go to the “Windows (WNS)” tab to enter the package SID and the Security key from the UWP app that we just registered. Don’t forget to click “Save”!

Create a UWP app to show notifications

We will keep the app very simple: the only thing it needs to do is wait for a notification to arrive, and then display that notification as a Windows toast. The nice thing is that once the app is installed via the Microsoft Store, it doesn't have to be running to receive the toasts. When the app is installed and opened, all we want to show is some static welcome screen.

  • In Visual Studio 2019 you need to have the UWP app-development tools installed. This can be done by running the VS2019 setup. When you create a new project, you can now pick the “Blank App (Universal Windows)” project template.
  • Give the application a name, and use the defaults for the other settings. 
  • Hit F5 and admire the empty application.

MainPage.xaml

Nothing fancy here, especially given that I am “graphically handicapped.”

<Page
    x:Class="****Notifications.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d"
    Width="Auto" Height="Auto">

    <Page.Background>
        <AcrylicBrush TintColor="#FF35B2A6"/>
    </Page.Background>

    <StackPanel>
        <TextBlock Text="**** Notifier" Foreground="#FCFFFFFF" FontSize="72" HorizontalAlignment="Center"/>
        <TextBlock Text="This application will show a message in the system tray when new COVID updates are available." Foreground="#FCFFFFFF" FontSize="18" HorizontalAlignment="Left" TextWrapping="WrapWholeWords" />
        <TextBlock Text="You can safely close this window, you will still receive the notifications." Foreground="#FCFFFFFF" FontSize="18" HorizontalAlignment="Left" TextWrapping="WrapWholeWords"/>
    </StackPanel>
</Page>

MainPage.xaml.cs is not changed at all.

App.xaml.cs

This is where the magic happens. But it is also disappointingly simple because UWP does the hard work.

I have created a new function that will hook the app to the events sent by the notification hub:

private async void InitNotificationsAsync()
{
    var channel = await PushNotificationChannelManager.CreatePushNotificationChannelForApplicationAsync();
    var hub = new NotificationHub("****-notifications-hub", "Endpoint=sb://****-notifications-ns.servicebus.windows.net/;SharedAccessKeyName=DefaultListenSharedAccessSignature;SharedAccessKey=3aqps-secret stuff=");
    var result = await hub.RegisterNativeAsync(channel.Uri);
}

Of course it would be wise to obtain the connection string from a config file; I didn't do this here for the sake of simplicity.
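If you do want to keep these values out of the source, one possible approach in a UWP app (a sketch of my own, not part of the original application) is to store them in the app's local settings and read them at startup:

using Windows.Storage;

// Hypothetical helper: reads the hub name and connection string from the
// application's local settings instead of hard-coding them. The values must of
// course have been stored there (or in another configuration source) beforehand.
internal static class HubConfig
{
    private static string Read(string key)
    {
        return ApplicationData.Current.LocalSettings.Values.TryGetValue(key, out object value)
            ? (string)value
            : string.Empty;
    }

    public static string HubName => Read("HubName");
    public static string HubConnectionString => Read("HubConnectionString");
}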

This function is called from the OnLaunched function, and that's it.

/// <summary>
/// Invoked when the application is launched normally by the end user. Other entry points
/// will be used such as when the application is launched to open a specific file.
/// </summary>
/// <param name="e">Details about the launch request and process.</param>
protected override void OnLaunched(LaunchActivatedEventArgs e)
{
    InitNotificationsAsync();
    Frame rootFrame = Window.Current.Content as Frame;
    // ...
}

Testing the application

Run your application (F5 or ctrl+F5). The main window will appear.

Now go to the notification hub and in the overview click on "Test Send". Set the parameters as in the screenshot and click on Send. If all goes well, you will see a toast appearing on your desktop.

This notification is only sent to 10 users (as it is only used to verify that your app is working properly).

 

 

Deploying the app to the Microsoft Store

There are some settings to be verified before deploying your app.

  • Right-click your project > Properties to open the properties window.
  • Click the “Package Manifest” button. This is where you can set all the information about your app to make it valid for the store. There is some work to be done here, but it is all straightforward.
  • Once this is all done, you can deploy your project by Build > Deploy.  
  • Go back to the application overview in the Windows Dev Center, open Products on the left and click on your application. In the “Application overview tab” you will see the new submission under (you guessed it right) “Submissions.”
    • Click on the “Update” button next to it and follow the instructions. You can enter the price, age ratings, … here and start making money!

Sending a notification from an application

We can send notifications from any type of application; it doesn't need to be a UWP app. Here is the Toaster class that does the work.

internal class Toaster
{
    private const string _secret = "hub secret";
    private const string _sid = "ms-app://app id";
    private const string _uri = "https:// hub uri";
    private const string _notificationType = "wns/toast"; // wns/toast | wns/tile | wns/badge | wns/raw
    private const string _contentType = "application/xml";

    public async Task PostToWns(string xml)
    {
        var client = NotificationHubClient.CreateClientFromConnectionString(
            "Endpoint=sb://hdmp-notifications-ns.servicebus.windows.net/;SharedAccessKeyName=DefaultFullSharedAccessSignature;SharedAccessKey=<your full access key>",
            "hdmp-notifications-hub");

        await client.SendWindowsNativeNotificationAsync(xml);
    }
}

This will generate a toast message from the XML that is passed. The format of the XML must be correct, of course. This format is documented here: https://docs.microsoft.com/en-us/windows/uwp/design/shell/tiles-and-notifications/toast-xml-schema.
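As an illustration only (this snippet is not taken from the original application), a minimal toast payload using the generic toast template could be pushed through the Toaster class like this:

// Hypothetical example: a simple two-line toast; adjust the texts to your own content.
var xml = @"<toast>
  <visual>
    <binding template=""ToastGeneric"">
      <text>COVID update</text>
      <text>New information for medical practitioners is available.</text>
    </binding>
  </visual>
</toast>";

var toaster = new Toaster();
await toaster.PostToWns(xml);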

In the store you can also find a great app called "Notifications Visualizer". This app allows you to edit notifications and test them locally, so you can rapidly compose and test your messages.

References

https://docs.microsoft.com/en-us/azure/notification-hubs/

https://docs.microsoft.com/en-us/azure/notification-hubs/notification-hubs-windows-store-dotnet-get-started-wns-push-notification 

https://docs.microsoft.com/en-us/windows/uwp/design/shell/tiles-and-notifications/toast-xml-schema

Publishing your app to the store

 


Obtaining a free MS Azure account

In many of my posts I talk about how to perform specific actions and tasks using MS Azure. If you don’t have an account yet, here is how you can create a test account easily:

On the Azure home page you can find the steps to create a free subscription that will be valid for 3 months. On that page you also get a list of all the free services that you will receive when signing up.

The "start free" button will take you to a login page. If you already have a Microsoft account, you can use it here, or you can create a new account. Go through the login procedure to get to the sign-up page. You will be asked for your credit card number during the subscription process, but it will never be charged. So you are safe 🙂

Enjoy your new MS Azure subscription!

 


How to use Microsoft Azure Key Vault

Introduction

In this post I will describe how to set up and use an Azure key vault to store your secret values.

Sometimes we see secrets like storage keys and connection strings written as literals in the code of a project, such as

public static class Secrets
{
  public const string ApiKey = "MyAppKey";
  // ...
}

This doesn’t seem too bad because

  • It is the fastest way to obtain a key
  • Probably the key won’t change too often in time

But there are some serious drawbacks to this way of working as well:

  • If the key does change, code needs to be adapted and redeployed.
  • The key is plain visible in the code.
  • The key is "forever" in the source control system, maybe even in a public repository.
  • When you change the environment (from DEV to ACC to PROD), the key will probably change as well. This becomes a problem with a hard-coded key.

It would be nice to store the key elsewhere, but what are the options?

  • The key can be stored in a configuration file. This is better already, but this file will still be readable by developers (and on the public repo).
  • The key can be stored in Azure. This is what we’re going to talk about in this article.

Prerequisites for this article

If you want to follow along with the examples, you'll need an Azure subscription. On the Azure home page you can find the steps to create a free subscription that will be valid for 3 months.

Introducing Azure Key Vault

We can store the following items in a Key Vault, for later use:

  • Secrets. A lot of types of data can be stored here, such as tokens, passwords, keys, …
  • Keys. Encryption keys can go here, and can be referenced later to encrypt / decrypt your data.
  • Certificates.

These items are stored securely in the vault; only users (or processes) with the right access rights will be able to retrieve them. Access is monitored, so you can see who accessed what, and how the Key Vault is performing.


Creating an Azure Key Vault

In the Microsoft Azure portal


  • Click on the “Create a resource” button at the top left.
  • In the blade that appears enter “Key Vault” in the search box and select “Key Vault” from the list below.


Click “Create” and fill in the necessary parameters:

  • Name: a unique name for the key vault
  • Subscription: the subscription that will contain your key vault
  • Resource group: here you can either select an existing resource group or create a new one. For this example, you may want to create a new resource group so you can clean up everything easily when you are done “playing”.
  • Location
  • Pricing tier: standard, unless you want HSM backed keys.
  • Access policies: by default the current user will be the owner of the key vault. You can add or remove permissions here.
  • Click on “Create” and the key vault will be created for you. This can take some time.

Inserting values in the Key Vault

  • Find your new key vault in Azure, and click on it. If your subscription contains a lot of objects, you may first select the resource group that the key vault is in.
  • You now see the overview page, with some useful information.

    • The most important piece of information here is the DNS Name (top right). You will need this to connect to the key vault from your code.
    • You can also see the number of requests, the average latency, and the success ratio.
    • Pro tip: make a note of the average latency as a baseline value for future requests.
  • On the left side click on “Secrets”. You will see all the currently stored secrets. If you just created the key vault, this will be empty.
  • Click on “Generate/Import” to create a new secret:
    • Upload options: manual
    • Name: Password   (for our example)
    • Value: My Secret
    • Content type: leave this empty
    • If you wish you can also set an activation date and an expiration date for this secret. We will leave this empty for our example.
    • Make sure that “enabled” is set to yes and click “Create”.

When you click on the “Secrets” button on the left again, you will now see an entry for this key.

If you prefer to do this by scripting, the next section is for you.

Setting up the key vault using Azure Cloud Shell

Using a script to create an Azure object makes it repeatable. If you have multiple tenants, you can compose a script that will create the necessary objects for each tenant. This will save you time because

  • obviously, executing a script is faster than creating each object by hand
  • consistency. If everything is scripted, you can be sure that all the objects are created the same for each tenant. This can save you hours of finding configuration bugs.
  • you can keep the scripts in source control, which allows you to version them as well.

Open Cloud Shell


At the top, click the “Cloud Shell” icon. If this is the first time that you open the cloud shell, a wizard will be shown to set up the shell. You can choose the scripting language to use (PowerShell or Linux Bash), and then Azure will create some storage for you. There is also a fair warning that the storage will cost you some money.

For this example I will use Linux Bash.

RESOURCE_GROUP='CodeProject'
LOCATION='WestEurope'
KEY_VAULT='CPKeyVault666'

az group create --name $RESOURCE_GROUP --location $LOCATION
az keyvault create --resource-group $RESOURCE_GROUP --name $KEY_VAULT
az keyvault list
az keyvault secret set --vault-name $KEY_VAULT --name Password --value 'My Secret'
az keyvault secret list --vault-name $KEY_VAULT
az keyvault secret show --vault-name $KEY_VAULT --name Password --query value --output tsv

Using Azure Key Vault in your .NET project

Project setup


Using Visual Studio 2019, create a new .NET Core Console App, name it ‘KeyVault’.

NuGet packages

To use Azure Key Vault, you’ll first need to add 2 NuGet packages to your project:

  • Microsoft.Azure.KeyVault
  • Microsoft.Azure.Services.AppAuthentication

Open the “Package Manager Console” (Tools > NuGet Package Manager > Package Manager Console…) and type the following statements:

install-package Microsoft.Azure.KeyVault
install-package Microsoft.Azure.Services.AppAuthentication

In your source file you will need the following using statements:

using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

Reading a string from the Key Vault

To separate the concerns in the application it is best to create a separate class for this, such as:

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.KeyVault.Models;
using Microsoft.Azure.Services.AppAuthentication;

namespace KeyVault
{
    public class KeyvaultUtilities : IKeyvaultUtilities
    {
        private readonly IKeyVaultClient _keyVaultClient;
        private readonly string _vaultBaseUrl;

        public KeyvaultUtilities(string keyvaultName)
        {
            _vaultBaseUrl = $"https://{keyvaultName}.vault.azure.net";
            AzureServiceTokenProvider azureServiceTokenProvider = new AzureServiceTokenProvider();
            _keyVaultClient = new KeyVaultClient(
                new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
        }

        /// <summary>
        /// Get the value for a secret from the key vault.
        /// </summary>
        /// <param name="keyname"></param>
        /// <returns></returns>
        /// <exception cref="KeyNotFoundException">When the key is not found, this exception is thrown.</exception>
        public async Task<string> GetSecretAsync(string keyname)
        {
            try
            {
                var secret = await _keyVaultClient.GetSecretAsync(_vaultBaseUrl, keyname)
                                                  .ConfigureAwait(false);
                return secret.Value;
            }
            catch (KeyVaultErrorException kvex)
            {
                throw new KeyNotFoundException($"Keyname '{keyname}' does not seem to exist in this key vault", kvex);
            }
        }
    }
}

The purpose is to read a secret from the key vault, so that is the only method that I have implemented. You can add other key vault related methods in the class when needed.
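The IKeyvaultUtilities interface that the class implements is not shown in this post; given the code above, it boils down to a single method:

using System.Threading.Tasks;

namespace KeyVault
{
    // Interface extracted from the KeyvaultUtilities class above; having an
    // interface also makes it easy to mock the key vault access in unit tests.
    public interface IKeyvaultUtilities
    {
        Task<string> GetSecretAsync(string keyname);
    }
}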

Using this class is easy. Instead of passing the key vault name as a hard-coded string, you may get it from a settings file. That will also allow you to move easily between your development environments.

Notice that we never created a secret with the name "xyz". Trying to retrieve this value will throw a KeyNotFoundException.

using System;
using System.Threading.Tasks;

namespace KeyVault
{
  class Program
  {
     static async Task Main(string[] args)
     {
        Console.WriteLine("Hello World!");
        IKeyvaultUtilities util = new KeyvaultUtilities("cpkeyvault666");

        string pwd = await util.GetSecretAsync("Password");
        Console.WriteLine("Password: " + pwd);
        string xyz = await util.GetSecretAsync("xyz");
        Console.WriteLine("xyz: " + pwd);
     }
  }
}

Cleanup in Azure

On the Azure Portal go back to the Cloud Shell. Delete the ‘CodeProject’ resource group:

RESOURCE_GROUP='CodeProject'
az group delete --name $RESOURCE_GROUP --yes

This will delete the 'CodeProject' resource group, with all of its contents. Don't worry if you don't perform this step; the key vault only costs you a whopping 3 cents per 10,000 operations. You can calculate your costs here: https://azure.microsoft.com/en-us/pricing/calculator/.

You can also delete the resource group through the Azure portal.

First retrieval of the secret can be (very) slow

Retrieving the first key can take several seconds. If you are not sure that you will always need a secret from the key vault you may consider using the class Lazy<T>.

The next retrievals are fast.

For this reason you may consider registering the KeyvaultUtilities class as a singleton and injecting it, instead of recreating it each time. How you do this will depend on the type of application that you are creating.
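A minimal sketch of the Lazy<T> idea, assuming the KeyvaultUtilities class from above and the example vault name, could look like this; in an application with dependency injection you would register a single instance in the container instead:

using System;
using System.Threading.Tasks;

namespace KeyVault
{
    // Hypothetical wrapper: the KeyvaultUtilities instance is only created when a
    // secret is actually requested, and the same instance is reused afterwards.
    public static class VaultAccess
    {
        private static readonly Lazy<IKeyvaultUtilities> _vault =
            new Lazy<IKeyvaultUtilities>(() => new KeyvaultUtilities("cpkeyvault666"));

        public static Task<string> GetSecretAsync(string keyname) =>
            _vault.Value.GetSecretAsync(keyname);
    }
}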

References

https://docs.microsoft.com/en-us/azure/key-vault/key-vault-overview

https://docs.microsoft.com/nl-be/azure/key-vault/quick-create-net

https://docs.microsoft.com/en-us/azure/key-vault/tutorial-net-create-vault-azure-web-app


Creating multiple identical VMs in Microsoft Azure

Introduction

I am preparing a course for 5 people. They will all need a virtual machine (VM) with Visual Studio 2017 and some files on it to perform the exercises. I could create the machines one by one and perform the same installation on each, but this would not be very productive, and it would be error-prone. Instead I want to create the virtual machines by creating a "master" image, from which I can easily create the other VMs.

Prerequisites

If you don't have an Azure account yet, there are some ways to get a free test account. You can surf to https://portal.azure.com, where you will be asked for your credentials. If you don't have an account yet, click on "Create One!" and Microsoft Azure will gladly guide you through creating a new account. This account will be free for the first 3 months and will provide you with a (limited) budget to allow you to test Microsoft Azure.

Creation of the first VM

I want to create a Windows VM image that will contain Visual Studio 2017, and all the necessary course files. I already took these steps to organize my Azure resources:

  • Created a new resource group called “courses”.
  • In that resource group created a new VM called “vs2017-2”.
  • Once the VM was running, installed all the needed software and downloaded the needed files.

This is not in the scope of this article, so I won't describe it here. It would make a boring blog post…

Preparing the VM to be used as a template image

Now that the VM is installed the way we like it, let’s destroy it …

We are going to prepare the image in a way that it can be deployed on multiple computers (or VMs in our case). These don’t necessarily have the same configuration, so we need a tool to prepare for this cloning process. Enter sysprep.exe.

Sysprep can strip the image to the minimum, allowing it to be used to create other VMs. Each VM that we will create using this image will have the same software installed, with the same data files, settings, …

You can find sysprep in this folder: %windir%\System32\Sysprep. On https://blogs.technet.microsoft.com/danstolts/2014/05/how-to-sysprep-sysprep-is-a-great-and-powerful-tool-and-easy-too-if-you-know-how-step-by-step/ the use of Sysprep is described very well.

Sysprep can be used with parameters (when you know what you are doing), or just without parameters, which will pop up a little form. In the screenshot, you can see the form with the right values filled in:

Out of Box Experience.

Generalize. This checkbox will change your image so that it can be run on a different computer. All the hardware-specific settings will be removed.

Shutdown. With the previous 2 settings you’re going to make a clean image of your computer which will only be useful to create other images from. So you don’t want to try to reboot this image.

Click OK, when you’re certain that all the security data can be wiped from this VM. Sysprep will clean up your VM, and then execute the generalize step. This can take several minutes to run. When it is ready, we can go back to the Azure portal to capture the image.

Capturing the VM in Azure

As said before, the next step is now to capture the VM, so that we can clone it later. This is done on the blade for the VM itself. To go there: open the “courses” resource group, then open the VM that you just created and generalized. On the top menu you’ll find the “Capture” button.

Clicking this button takes us to the "Create image" page, which will show some warnings to start with. Here you give your new image a name, assign it to a resource group, and you get the option to delete the VM that you are capturing. This makes sense because that VM will not be useful anymore. Below, I have created an image called "vs2017-image" in the resource group "courses" and decided to clean up (Automatically delete):

Clicking on the "Create" button …

  • Stops the VM. When you have shut down the VM from within Windows, it is still allocated in Azure (and still costs money). If you're not going to use a VM for a while, don't forget to also Stop it in the Azure portal. Warning: when you restart the VM in the portal, it will have another IP address. If you downloaded the RDP file to access this machine, you'll need to adapt the IP address in the RDP file, or download it again. For the course I will only use the 5 VMs for 4 days, and only between 8:00 and 17:00. Therefore I also create a policy on each VM that will make the VM stop at 18:00.
  • Generalizes the VM further.
  • Creates the image.
  • Deletes the VM, as requested.

Even though the VM is deleted, other elements are not automatically deleted, so this needs to be done manually. These items don’t cost a lot in MS Azure, but it is a good practice to remove what you don’t need anymore.

  • Public IP address. Click on the Public IP address to open its blade. Click on the “Dissociate” button to remove it from its network interface (and confirm). Now you can click the “Delete” button to make the final kill!
  • Network interface. Open the blade by clicking on the name, then click “Delete” (and confirm).
  • Network security group. Open the blade by clicking on the name, then click “Delete” (and confirm).
  • The Disk. Open the blade by clicking on the name, then click “Delete” (and confirm).

The order of deletion is important, because some resources depend on others.

Creating a new virtual machine from the image

Now comes the time to profit from the work we did before. In the Azure portal, click on the image that you created (in this example "vs2017-image"). In the blade that appears, click on "Create VM". This will take you through a wizard-like series of pages to enter all the necessary parameters. The important parameters are:

  • Resource group. You can select an existing resource group; or create a new one for the VM.
  • VM name. This must be a unique name for your VM.
  • Image. This will be pre-filled with the image name that you just created.
  • Size. The size for your new VM. This can be modified afterwards if needed.
  • Username / password.
  • Inbound port rules. If you want to access the VM over RDP, you need to add this here:

    You can specify these rules on the first page of the wizard, or on the network tab.
  • Most of the other fields will depend on your specific needs.

When you're done, click on the "Create" button. The VM is now created from the image. You can test the VM by starting it and connecting to it to verify that everything works correctly. In fact, when the VM is created it is already started, so you can connect to it immediately.

To create additional VMs, you don’t have to wait for the first VM to finish creation.

Conclusion

When you need one or two VMs it may not be worth setting up an image to clone the VMs from. But when you need more than that, you’ll save a lot of time using the sysprep / capture combo. In the end the steps to create an image are quite simple:

  • Create a VM that will serve as the master template. Install all the necessary software on it, together with all the data files that you may need. When everything works, remove the temporary files that you left behind during the testing of the VM. If needed, also remove MRU lists (e.g. in Visual Studio: recently used files and projects) and other user state.
  • Run the sysprep tool on this VM.
  • Once sysprep is done, and the VM is shut down, capture the VM in the Azure portal. It is not necessary to remove all the left-overs from the master VM, but it is good practice.
  • When the capture is done, you can create new VMs from the created image.



Why would you use Common Table Expressions?

Introduction

In this article I assume that you already have a good understanding of SQL. I will introduce some concepts very briefly before moving on to Common Table Expressions.

Below you can find the relevant database diagram of the database that I will use in this article:

(Database diagram, not reproduced here: it includes the HR.Employees, Sales.Orders and Sales.OrderDetails tables used below.)

How is SQL processed by SQL Server?

When we look at a basic SQL statement, the general structure looks like:

SELECT <field list>
FROM <table list>
WHERE <row predicates>
GROUP BY <group by list>
HAVING <aggregate predicates>
ORDER BY <field list>

As a mental picture we see the order of execution as:

First determine where the data will come from. This is indicated in the <table list>. This list can contain zero or more tables. When there are many tables, they can be joined using inner or outer join operators, and possibly also cross join operators. At this stage we consider the Cartesian product of all the rows in all the tables.

select count(*) from [HR].[Employees]                    -- 9
select count(*) from [Sales].[Orders]                    -- 831
select count(*) from [HR].[Employees], [Sales].[Orders]  -- 7479
select 9 * 831                                           -- 7479

In the third query we combine the tables, without a join operator. The result will be all the combinations of employees with orders, which explains the 7479 rows. This can escalate quickly.

As a side remark: this is valid SQL, but when I encounter this in a code review it will make me suspicious. One way to make clear that you really want all these combinations is the CROSS JOIN operator:

select count(*) from [HR].[Employees] cross join [Sales].[Orders]    -- 7479

This will be handled exactly the same as query 3, but now I know that this is on purpose.


Once we know which data we are talking about, we can filter using the <row predicates> in the where clause. This makes sure that the number of rows is limited early in the process. Most join operators also have a condition (inner join T1 on <join condition>) which would be applied here, again limiting the number of rows.

select count(*) 
from [HR].[Employees] E 
inner join [Sales].[Orders]    O on E.empid = O.empid    -- 831

The predicate E.empid = O.empid will make sure that only the relevant combinations are returned.

If there is a group by clause, that happens next, followed by the filtering on aggregated values.

Then finally SQL looks at the <field list> to determine which fields / expressions / aggregates to make available, and then the order by clause is applied.

Of course this is all just a mental picture

Imagine a join between 3 tables, each containing 1,000 rows. The resulting virtual table would contain 1,000,000,000 rows, from which SQL Server would then have to select the right ones. Through the use of indexes SQL Server will only obtain the relevant row combinations. Each DBMS (Database Management System) contains a query optimizer that will intelligently use indexes to obtain the rows in the <table list>, combined with the <row predicates> from the where condition, and so on. So, if the right indexes are created in the database, only the necessary data pages will be retrieved.

Inner queries

The table list can also contain the result of another SQL statement. The following is a useless example of this:

select count(*) 
from (select * from [HR].[Employees]) E

This example will first create a virtual table named E as the result of the inner query, and use this table to select from. We can now use E as a normal table that can be joined with other tables (or inner queries).

Tip: It is mandatory to give the inner select statement an alias, otherwise it will be impossible to work with it. Even if this is the only data source that you use, an alias is still needed.

As an example I want to know the details of the 3 orders that gave me the highest revenue. To start with, I first find those 3 orders:

select top 3 [orderid], [unitprice] * [qty] as LineTotal
from [Sales].[OrderDetails]
order by LineTotal desc

This gives us the 3 biggest orders:

orderid LineTotal
10865 15810,00
10981 15810,00
10353 10540,00

Now I can use these results in a query like

select *
from [Sales].[OrderDetails]
where orderid in (10865, 10981, 10353)

which will give the order details for these 3 orders, at this point in time. I can use the result of the previous query in the where condition to make the query work at any point in time:

select *
from [Sales].[OrderDetails]
where orderid in 
(
    select top 3 [orderid]
    from [Sales].[OrderDetails]
    order by [unitprice] * [qty] desc 
)

This query will give me the correct results. I just had to adapt some things from the initial query: the IN clause requires a list of values, so the inner query can only return 1 column ([orderid]), and the order by clause then needs to use the full expression. Don’t worry, no more calculations than needed will be done. Trust the optimizer!

To further evolve this query we can now use an inner join instead of WHERE … IN. The resulting execution plan will be the same again, and the results too.

select *
from [Sales].[OrderDetails] SOD
inner join (select top 3 [orderid]
    from [Sales].[OrderDetails]
    order by [unitprice] * [qty] desc) SO 
on SO.orderid = SOD.orderid

Common Table Expressions

With all this we have gently worked toward CTEs. A first use would be to separate the inner query from the outer query, making the SQL statement more readable. Let’s start with another senseless example to make the idea of CTEs clearer:

;with cte as
(
    select top 3 [orderid]
    from [Sales].[OrderDetails]
    order by [unitprice] * [qty] desc
)
select * from cte

What this does is to create a (virtual) table called cte, that can then be used in the following query as a normal data source.

Tip: the semicolon at the front of the statement is not needed if you just execute this statement. If the “with” statement follows another SQL statement then both must be separated by a semicolon. Putting the semicolon in front of the CTE makes sure you never have to search for this problem.

The CTE is NOT a temporary table that you can use. It is part of the statement that it belongs to, and it is local to that statement. So later in the script you can’t refer to the CTE table again. Given that the CTE is part of this statement, the optimizer will use the whole statement to make an efficient execution plan. SQL is a declarative language: you define WHAT you want, and the optimizer decides HOW to do this. The CTE will not necessarily be executed first; that depends on the query plan.

Let’s make this example more useful:

;with cte as
(
    select top 3 [orderid]
    from [Sales].[OrderDetails]
    order by [unitprice] * [qty] desc
)
select *
from [Sales].[OrderDetails] SOD 
inner join cte on SOD.orderid = cte.orderid

Now, for us humans, we have split the query into 2 parts: we first calculate the 3 best orders, then we use that result to select their order details. This way we show the intent of our query.

In this case we use the CTE only once, but if you used it multiple times in the query it would become even more useful.

Hierarchical queries

image

In this table we see a field empid, and a field mgrid. (Almost) every employee has a manager, who can have a manager, … So clearly we have a recursive structure.

This kind of structure often occurs with

  • Compositions
  • Categories with an unlimited level of subcategories
  • Folder structures
  • etc.

So let’s see how things are organized:

select [empid], [firstname], [title], [mgrid]
from [HR].[Employees]

Gives us the following 9 rows:

image

We can see here that Don Funk has Sara Davis as a manager.

If we want to make this more apparent, we can join the Employees table with itself to obtain the manager info (self-join):

select E.[empid], E.[lastname], E.[firstname], 
       E.[title], E.[mgrid],
       M.[empid], M.[lastname], M.[firstname]
from [HR].[Employees] E
left join [HR].[Employees] M on E.mgrid = M.empid

Notice that a LEFT join operator is needed because otherwise the CEO (who doesn’t have a manager) would be excluded.

image

We could continue this with another level until the end of the hierarchy. But if a new level is added, or a level is removed, this query wouldn’t be correct anymore. So let’s use a hierarchical CTE:

;with cte_Emp as
(
select [empid], [lastname], [firstname], [title], 
       [mgrid], 0 as [level]
from [HR].[Employees]
where [mgrid] is null

union all

select E.[empid], E.[lastname], E.[firstname], E.[title], 
       E.[mgrid], [level] + 1 
from [HR].[Employees] E 
inner join cte_Emp M on E.mgrid = M.empid
)
select *
from cte_Emp

I’ll first give the result before explaining what is going on:

image

As explained before we start with a semicolon, to avoid frustrations later.

We then obtain the highest level of the hierarchy

select [empid], [lastname], [firstname], [title], 
       [mgrid], 0 as [level]
from [HR].[Employees]
where [mgrid] is null

This is our starting point for the recursion. Using UNION ALL we now obtain all the employees that have Sara as a manager. This is added to our result set, and then for each row that is added, we do the same, effectively implementing the recursion.

To make this more visual I added the [level] field, so you can see how things are executed. Row 1 has level 0, because that is what the anchor part of the query sets (0 as [level]). Then for each pass in the recursive part, the level is incremented. This shows nicely how the query is executed.

Conclusion

Common Table Expressions are one of the more advanced query mechanisms in T-SQL. They can make your queries more readable, or perform queries that would otherwise be impossible, such as outputting a hierarchical list. In this case the real power is that a CTE can reference itself, making it possible to handle recursive structures.

Reference

https://docs.microsoft.com/en-us/sql/t-sql/queries/with-common-table-expression-transact-sql


Creating a Visio Add-In in VS2017

Problem statement

A good friend asked me the following question:

How can I, in Visio, change the color of a shape to a previously selected color just by selecting that shape?

Sounds simple enough, but there are some caveats, so here is my attempt to tackle this problem. The main caveats were:

  • hooking up the SelectionChanged event,
  • keeping and accessing the state in the Ribbon (for the default color),
  • setting the color of the selected shape.

The code for this project can be found at https://github.com/GVerelst/ActOnShapeSelection.

Visio Add-In – Preparation

I decided to use Visual Studio 2017 to create a Visio Add-In. To do this we need to install the “Office/SharePoint development” workload. Since Visual Studio 2017, the installer allows for modular installation, so we just need to add this workload to our installation.

Start Visual Studio Installer (Start -> type “Visual Studio Installer”). In the installer window select “More > Modify”:

image

After a little while this takes us to the workloads selection screen. Select Office/SharePoint development and then click “Modify”.

image

When you launch Visual Studio again you’ll find a bunch of new project templates.

Creating the Visio Add-in

In VS create a new project (File > New > Project…) like this:

image

As you can see there are new project templates for “Office/SharePoint”. I chose the Visio Add-in project and gave it an appropriate name: “ActOnShapeSelection”.

The result is a project with one C# file (ThisAddIn.cs). This is where we will initialize our events. As a test we show a message box when starting our add-in, and another one when closing it:

public partial class ThisAddIn
{
    private void ThisAddIn_Startup(object sender, System.EventArgs e)
    {
        MessageBox.Show("Startup ActOnShapeSelection");
    }

    private void ThisAddIn_Shutdown(object sender, System.EventArgs e)
    {
        MessageBox.Show("Shutdown ActOnShapeSelection");
    }

    // …
}

Remark: by default the namespace System.Windows.Forms is not included in the using list, so we will need to add it. An easy way to do this is by clicking on the red-underlined MessageBox and typing ctrl+; (control + semicolon that is). Now we can choose how to solve the “using problem”.

Starting the application (F5) will now start Visio and our first message box is indeed shown. Closing Visio shows the second message box. No rocket science so far, but this proves that our add-in is loaded properly into Visio.

Convincing Visio to do something when a shape is selected is just a bit harder.

Wiring the Selected event

public partial class ThisAddIn
{
    private void ThisAddIn_Startup(object sender, System.EventArgs e)
    {
        Application.SelectionChanged += Application_SelectionChanged;
    }

    private void Application_SelectionChanged(Visio.Window Window)
    {
        MessageBox.Show("SelectionChanged ActOnShapeSelection");
    }
// …
}

The SelectionChanged event must be wired in the startup event. Later we will do the same for the ShapeAdded event. Once you know this “trick” things become easy.

When running this code, each time we select something in Visio we see our fancy message box. So the event wiring works. Now we want to be able to only execute code when a shape is selected. Let’s investigate if we can find out something about the selected object(s) in the “Window” parameter:

image

As expected this is a dynamic object. Visio is accessed over COM. Luckily the debugger allows us to expand the dynamic view members. Looking through the members of this object we find a property “Selection”. This looks promising! Looking a bit further, “Selection” is an implementation of the interface IVSelection. And this interface inherits from IEnumerable.

So Selection is actually an enumerable collection of all the selected items, hence it can be handled using a standard foreach( ). Let’s try this:

private void Application_SelectionChanged(Visio.Window Window)
{
    //MessageBox.Show("SelectionChanged ActOnShapeSelection");
    Visio.Selection selection = Window.Selection;
    foreach (dynamic item in selection)
    {
        Visio.Shape shp = item as Visio.Shape;
        if (shp != null)
        {
            shp.Characters.Text = "selected";
        }
    }
}

We run the add-in again (F5) and add 2 shapes on the page. When we select the shapes, they get the text “selected”. So now we are capable of knowing which shapes are selected and doing something useful with them. Let’s add a new ribbon with actions to perform on our shapes. After all, that is the purpose of this exercise.

Adding a ribbon

This can easily be done by right-clicking on the project, New Item. Then select Office/SharePoint > Ribbon (Visual Designer).

image

Name this ribbon “ActionsRibbon.”

Opening the ribbon, we can now use the visual designer. Make sure that the toolbox window is visible (View > Toolbox).

Now we add 3 ToggleButtons on the design surface, named btnRed, btnGreen, and you guessed it: btnBlue.  For each of the buttons we add a Click event by double-clicking on the button. Using the GroupView Tasks we also add a DialogBoxLauncher. This will open a ColorDialog for selecting a custom color.

image

Double-click the group to implement the “DialogLauncherClick” event, which will handle this.

The ActionsRibbon will contain its own data, being the 3 components for a color (Red, Green, Blue):

public byte Red { get; private set; }
public byte Green { get; private set; }
public byte Blue { get; private set; }

Each of the toggle buttons will toggle its own color component:

private void btnRed_Click(object sender, RibbonControlEventArgs e)
{
    Red = (byte)(255 - Red);
}

private void btnGreen_Click(object sender, RibbonControlEventArgs e)
{
    Green = (byte)(255 - Green);
}

private void btnBlue_Click(object sender, RibbonControlEventArgs e)
{
    Blue = (byte)(255 - Blue);
}

Remark: This code works fine as long as no custom color can be chosen. When a custom color is selected, the values will become something other than 0 or 255 and will no longer correspond to the state of the toggle buttons. I leave it as an exercise to the reader to make a better implementation.

Choosing a custom color:

private void group1_DialogLauncherClick(object sender, RibbonControlEventArgs e)
{
    ColorDialog dlg = new ColorDialog { Color = Color.FromArgb(Red, Green, Blue) };

    if (dlg.ShowDialog() == DialogResult.OK)
    {
        Red = dlg.Color.R;
        Green = dlg.Color.G;
        Blue = dlg.Color.B;
    }
}

In the SelectionChanged event of our AddIn class we now need to refer to the RGB values from the Ribbon. Here is the full code for the event handler:

private void Application_SelectionChanged(Visio.Window Window)
{
    ActionsRibbon rib = Globals.Ribbons.ActionsRibbon;
    Visio.Selection selection = Window.Selection;
    foreach (dynamic item in selection)
    {
        Visio.Shape shp = item as Visio.Shape;
        if (shp != null)
        {
            shp.Characters.Text = "selected";
            shp.CellsSRC[(short)Visio.VisSectionIndices.visSectionObject,3, 0].FormulaU = $"THEMEGUARD(RGB({rib.Red}, {rib.Green}, {rib.Blue}))";
        }
    }
}

There is some Visio-fu involved in setting the color. Consider it a cookbook recipe: when you do this, you’ll get the desired result. Visio is not always straightforward, one could say.

Now we have a working Visio add-in, that does what was asked. BUT when we add a new shape, it will automatically receive the selected color. To solve this we add another event handler:

Application.ShapeAdded += Application_ShapeAdded;

We also add a boolean indicating if we are adding a shape or not.

bool _isAddingAShape = false;

In the ShapeAdded event we set its value to true:

private void Application_ShapeAdded(Visio.Shape Shape)
{
    _isAddingAShape = true;
}

And we modify the SelectionChanged event to do nothing when a shape was added. This event will be called when a shape is selected, AND when a shape is added (which indeed selects it). The code:

private void Application_SelectionChanged(Visio.Window Window)
{
    if (! _isAddingAShape)
    {
        ActionsRibbon rib = Globals.Ribbons.ActionsRibbon;
        Visio.Selection selection = Window.Selection;
        foreach (dynamic item in selection)
        {
            Visio.Shape shp = item as Visio.Shape;
            if (shp != null)
            {
                shp.Characters.Text = "selected";
                shp.CellsSRC[(short)Visio.VisSectionIndices.visSectionObject, 3, 0].FormulaU = $"THEMEGUARD(RGB({rib.Red}, {rib.Green}, {rib.Blue}))";
            }
        }
    }
    _isAddingAShape = false;
}

Conclusion

Using Visual Studio it took us about 100 lines of code to implement the desired behavior. It would not be hard to add more actions to the ribbon. But in that case it would be wise to move the code that performs the action into the ActionsRibbon class instead of using the values of this class in the AddIn class; a minimal sketch of that refactoring follows.
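A rough sketch of that idea, reusing the formula from the handler above and assuming we expose a method on the ribbon (the name ApplyColorTo is just an example):

// In ActionsRibbon.cs (requires: using Visio = Microsoft.Office.Interop.Visio;)
// The ribbon owns the color state, so let it apply that state to a shape as well.
public void ApplyColorTo(Visio.Shape shape)
{
    shape.CellsSRC[(short)Visio.VisSectionIndices.visSectionObject, 3, 0].FormulaU =
        $"THEMEGUARD(RGB({Red}, {Green}, {Blue}))";
}

The add-in’s SelectionChanged handler would then only need to call Globals.Ribbons.ActionsRibbon.ApplyColorTo(shp); for each selected shape.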

The hardest part was to find how to obtain the newly created Ribbon from within the AddIn class. All the rest is just a straightforward implementation.


Architecture of a Polyglot Azure Application

Introduction

I started working on a C# project that will communicate requests to several different partners, and receive feedback from them. Each partner will receive requests in their own way. This means that sending requests can (currently) be done by

  • calling into a REST service,
  • preparing a file and putting it on an FTP share,
  • sending a mail (SMTP).

Needless to say, the formats of these requests are never the same, which complicates things even more.

Receiving feedback can also be done in different ways:

  • receiving a file through FTP. These files can be CSV files, JSON files, XML files, each in their own format,
  • polling by calling a web service on a schedule.

So we need an open architecture that is able to send a request, and to store the feedback received for this request. This feedback consists of changes in the state of a request. I noticed that this is a stand-alone application that can easily be moved into the cloud. We use Microsoft Azure.

Here is a diagram for the current specifications:

Current specifications

First observations

When I analyzed this problem, I immediately noticed some things that could make our lives easier. And when I can make things simpler, I’m happy!

The current flow

Currently everything is handled in the same application, which is just a plain simple C# solution. In this solution a couple of the protocols are implemented. This is OK because currently there are only 2 partners. But this will be extended to 20 partners by the end of the year.

There are adapters that transform the request into the right format for the corresponding partner, and then send it through a REST service. So we already have a common format to begin with. If the “PlaceOrder” block can receive this common format we know at least what comes in. And we know what we can store in the “Feedback Store” as well; this will be a subset of the “PlaceOrder request.”

PlaceOrder then will have to switch per partner to know in which data format to transform the request, and send it to that partner.

On the feedback side, we know that feedback comes in several formats, over several channel types. So in the feedback handler we need to normalize this data so that we can work with it in a uniform way. Also, some feedback will come as a file (SFTP) containing several feedback records, and some will come one record at a time (for example when polling). This needs to be handled as well.

So now we can think about some more building blocks. The green parts are new:

image

  • The “Initiator Service” will receive a request from the application (and in the future from multiple applications). All it will do is transform the request into a standard format and put it on the “Request Queue”. Some common validations can be done here already. Creating a separate service allows future applications to use this functionality as well (a minimal sketch follows this list).
  • We introduce the “Request Queue”, which will receive the standardized request.
  • And now we can create the “PlaceOrder queue handler” which will wake up when a request arrives on the queue, and then handles all the messages on the queue.
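To make the role of the “Initiator Service” a bit more concrete, here is a minimal sketch of how it could standardize a request and put it on the “Request Queue”. It assumes Azure Storage Queues (the WindowsAzure.Storage and Newtonsoft.Json packages), an assumed queue name “requests” and a hypothetical StandardRequest class; the real service will of course differ.

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;
using Newtonsoft.Json;

public class InitiatorService
{
    private readonly CloudQueue _requestQueue;

    public InitiatorService(string storageConnectionString)
    {
        // Connect to the storage account and make sure the queue exists.
        var account = CloudStorageAccount.Parse(storageConnectionString);
        _requestQueue = account.CreateCloudQueueClient().GetQueueReference("requests");
        _requestQueue.CreateIfNotExists();
    }

    public void PlaceOrder(StandardRequest request)
    {
        // Common validations could be performed here before queueing.
        string json = JsonConvert.SerializeObject(request);
        _requestQueue.AddMessage(new CloudQueueMessage(json));
    }
}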

Advantages of adding queues

  • Separation. A nice (and simple) separation between the caller (Application -> “Initiator Service“) and the callee (the “PlaceOrder Queue Handler“).
  • Synchronization. In the queue handler we only need to worry about 1 request at a time. Everything is nicely synchronized for us.
  • Elasticity. When needed we can add more Queue Handlers. Azure can handle this automatically for us, depending on the current queue depth.
  • Big loads will never slow down the calling applications, because all they have to do is to put a message on the queue. So each side of the queue can work at its own pace.
  • Testing. Initiating the Queue Handler means putting a message on the queue. This can be done using tools such as the Storage Explorer. This makes testing a lot easier.
    • Testing the “Initiator Service“: call the service with the right parameters, and verify if the message on the Request Queue is correct.
    • Testing the “Queue Handler“: put in some way (ex: storage explorer) a request in the correct format on the queue and take it from there.
    • Both are effectively decoupled now.

We can do the same for the feedback handler. Each partner application can receive feedback in its own way, and then send the feedback records one by one to the Feedback Queue in a standard format. This takes away a lot of the complexity again. The feedback programs just need to interpret the feedback from their partner and put it in the right format on the Feedback Queue. The Feedback Queue Handler just needs to handle these messages one by one.

To retrieve the feedback status we’ll need a REST service to answer all the queries. You’ll never guess the name of this service: “Feedback Service”. I left this out of scope for this post. In the end it is just a REST service that will talk to the data store via the “Repository Service.”

I also don’t want to access the database directly, so a repository service is created as well. Again, this is a very simple service to write.

But there is still a lot of complexity

image

The “Place Order Queue Handler” handles each request by formatting the message and sending it to the specific partner. Having this all in 1 application doesn’t seem wise because

  • This application will be quite complex and large
  • When a new partner needs to receive calls we need to update (and test, deploy) this application again.
  • This is actually what we do currently, so there would be little or no advantage in putting all this effort into it if we stopped here.

So it would be nice to find a way to extend the application by just adding some assemblies in a folder. The first idea was to use MEF for this. Using MEF we can dynamically load the modules and use them, effectively splitting out the complexity per module. Again, each module has only 1 responsibility (formatting & sending the request).

The same would apply (more or less) for the feedback part.

But thinking a bit further, I realized that this is actually nothing but a workflow application (comparable to BizTalk). And Azure provides us with Logic Apps, which are created to handle workflows. So let’s see how we can use this in our architecture…

image

I left out the calling applications from this diagram. A couple of things have been modified:

  • DLQ. For each queue I also added a Dead Letter Queue (DLQ). This is sometimes also called a poison queue. The “Initiator Service” puts a request on the queue to be handled. But if the Queue Handler has a problem (for example, the Partner web service sends back a non-recoverable error code), we can’t let the Initiator Service know that. So we’ll put those failed messages on the DLQ to be handled by another application. A possible handling could be to send an e-mail to a dedicated address to resolve the problem manually.
  • Logic App. The “Request Q Handler” now is a Logic App. This is a workflow that can be executed automatically by Azure when a trigger is fired. In our case the trigger is that one or more requests are waiting on the “Request Queue.” In this post I won’t go into detail about the contents of this Logic App, but this is the main logic:
    • Parse the content of the request message as JSON
    • Store the relevant parts of the message in the database with a “Received” status.
    • Execute the partner specific logic using Logic App building blocks, and custom made blocks.
    • Update the status of the request in the database to “Sent”
    • When something goes wrong put the request on the DLQ.
  • Configuration. The nice thing is that this all can be done using available building blocks in the Logic App, so no “real” programming is needed – only configuration. Adding a new partner requires just adding a new branch in the switch and implementing the partner logic.
  • The database is accessed by a REST service, and there are Logic actions that can communicate with a REST service. So accessing the database can be done in a very standard way.

The feedback part is also a bit simpler now

  • One Logic App will poll every hour for those partners who work like that. This means that this App will have a block per “polling partner” which will retrieve the states for the open requests, transform them into a standard format and put them in the Feedback Queue. So the trigger for this Logic App is just a schedule.
  • Some partners communicate their feedback by putting a file on an FTP location. This is the trigger and the handling is a bit different:
    • Interpret the file contents and transform them into JSON.
    • For each row in the JSON collection execute the same logic as before.
    • Delete the file.
    • Again, these are all existing blocks in a Logic App, so no “real” programming!

The “Feedback Q handler” is again simple. Because the FTP Logic Apps (Notice the plural!) make sure that the feedback records are stored one by one on the “Feedback Queue“, all we have to do is to update the status in the database, and possibly execute a callback web service.

Conclusion

Thanks to MS Azure I was able to easily split the application into several small blocks that are easy to implement and to test. In the end we reduced a programming problem to a configuration problem. Of course some programming remains to be done, for example the “Repository Service” and possibly some other services to cover more exotic cases.


Areas in ASP.NET Core

Introduction

In a default MVC application everything is organized by Controllers and Views. The controller name determines the first part of your URL, and the controller method the second part. By default the view that will be rendered has the same name as the method in the controller, although this isn’t required.

So when you create a HomeController in your application, with a method About( ) you have defined the URL Home/About for your application. Easy. For small web applications this is sufficient, but when things start to get bigger you may want to add another level in your URL.

Areas

image

This is done by creating separate areas in the application. You can create as many areas as you like, and you can consider each area as a separate part of your application, with its own controllers, views and models. So now you can make an “Admin” area for user management and other “admin stuff.” The nice thing is that now your URL will have an additional level, such as Admin/Users/Create.

This allows organizing your project in a logical way, but don’t exaggerate. I have seen projects where areas only contain 1 controller. In that case the advantage of using an area is gone, and worse yet: you haven’t simplified your application, but added an extra layer for nothing. The KISS principle is still one of the most important principles in software engineering!

The problem

In the old ASP.NET MVC all you had to do was:

  1. Right-click on the project level, select “Add area”
  2. Enter the name of the area
  3. Everything is done for you: you get a nice solution folder for your area, routing is set up, …

Looking for this menu item in ASP.NET Core was disappointing: it is not there anymore. I couldn’t imagine that areas would have disappeared, so I consulted my friend Google. This led me to this page in Microsoft Docs: https://docs.microsoft.com/en-us/aspnet/core/mvc/controllers/areas.

So how does it work in ASP.NET Core?

I want to create a new area in my application, called “Reports”. We already know that right-clicking doesn’t work anymore, so here are the steps.

Create a folder “Areas”

image

Right-click on the project > Add > Folder, enter “Areas”.

MVC will by default search for /Areas/… which is why you need this folder. If you want to give it a different name you also need to configure your RazorViewEngineOptions. More info on https://docs.microsoft.com/en-us/aspnet/core/mvc/controllers/areas.
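For completeness, a hedged sketch of what that configuration could look like in Startup.ConfigureServices, assuming the folder is renamed to “MyAreas” (this follows the pattern from the docs page above; {2} stands for the area, {1} for the controller and {0} for the action; RazorViewEngineOptions lives in Microsoft.AspNetCore.Mvc.Razor):

services.Configure<RazorViewEngineOptions>(options =>
{
    // Only needed when NOT using the default "Areas" folder name.
    options.AreaViewLocationFormats.Clear();
    options.AreaViewLocationFormats.Add("/MyAreas/{2}/Views/{1}/{0}.cshtml");
    options.AreaViewLocationFormats.Add("/MyAreas/{2}/Views/Shared/{0}.cshtml");
    options.AreaViewLocationFormats.Add("/Views/Shared/{0}.cshtml");
});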

Now right-click the Areas folder and add a new folder called “Reports”. And under Reports, create the 3 folders “Controllers”, “Models” and “Views”.

Caveat

The views under your area don’t share the _ViewImports.cshtml and _ViewStart.cshtml. This means that your site layout will not be automatically applied to your area’s views. The solution is simple: copy both files under the corresponding Views folder.

The standard _ViewStart.cshtml looks like this:

@{
    Layout = "_Layout";
}

If you want to use the same layout in your areas you should change the copied file to

@{
    Layout = "~/Views/Shared/_Layout.cshtml";
}

Of course, if you want your area to have a different layout you don’t have to do this. You can then create a “Shared” folder under the Views folder and create a new _Layout.cshtml there.

We’re ready to add some code now.

Create a HomeController in the Reports Area

Right-click on the newly created Controllers folder > Add > Controller. This takes you to the known “Add Scaffold” dialog box; we choose to add an empty controller.

image

Name the controller “HomeController” and let VS2017 do its scaffolding work for you. We now have a controller with the Index( ) method already implemented. This controller is created under the areas folder structure, but for ASP.NET Core this isn’t enough. We need to indicate which area it belongs to. This is easy:

image

I added line 11, which does the trick. This means that areas and folder structure are now decoupled.

As you notice I also changed the return type to string on line 12, and on line 14 I return … a string. This string will be returned literally to the browser when this page is requested. Of course we could have gone through the trouble of creating a view, but let’s keep things simple in this demo.
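Since the screenshot doesn’t survive in text form, here is a sketch of roughly what the controller looks like after these changes (the namespace is just an example; the [Area] attribute is the line that ties the controller to the area):

using Microsoft.AspNetCore.Mvc;

namespace MyWebApp.Areas.Reports.Controllers   // example namespace
{
    [Area("Reports")]   // couples this controller to the Reports area
    public class HomeController : Controller
    {
        // Returning a string keeps the demo simple: it is sent to the browser as-is, no view needed.
        public string Index()
        {
            return "Hello from the Reports area!";
        }
    }
}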

Inform ASP.NET Core that areas are involved

MVC determines which controller to instantiate, and which method in the controller to call by means of routing. Routing is implemented by templated routing tables, as you can see below. By default there is 1 route template defined:

routes.MapRoute(
    name: "default",
    template: "{controller=Home}/{action=Index}/{id?}");

In the template we see {controller=Home}, which will interpret the first part of the URL (ex: http://localhost:12345/Test/Index): here Test is used to determine that the class TestController must be instantiated. The second part is easy to explain too: the method Index( ) will be called. And that is basically how routing works.

When we start the site we don’t want (our users) to type http://localhost:12345/Home/Index, which is why default values are provided: when we just type http://localhost:12345 the default HomeController will be instantiated, and the default Index( ) method will be called.

URLs are mapped against the routes in the routing table, and the first match will be used. This means that the “areaRoute” (in yellow below) should come first in the routing table. This is all defined in the Startup.cs file in the project folder. Go to the Configure method and find where the routes are mapped. Add the lines in yellow:

image
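The screenshot shows the Configure method in Startup.cs; the added “areaRoute” most likely looks like this sketch, registered before the default route:

app.UseMvc(routes =>
{
    // The area route must come first, otherwise /Reports/... would be
    // matched by the default template.
    routes.MapRoute(
        name: "areaRoute",
        template: "{area:exists}/{controller=Home}/{action=Index}/{id?}");

    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}");
});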

Now we can try if our work has paid off:

  1. Start the application (ctrl + F5). This will show the default home page (no areas involved).
  2. Browse to http://localhost:12345/Reports/Home/Index. Of course 12345 depends on your configuration. We now see the string that we returned from the area controller. And of course http://localhost:12345/Reports/Home/ and http://localhost:12345/Reports/ return the same, because Home and Index are indicated as default values in the route mapping (lines 54 and 55).

Generating a link to the Reports/Home controller

Somewhere in the application we’ll want to refer to the newly created controller. This is typically done from the _Layout.cshtml view, which serves as a template for all your pages. By default a top menu is created for easy navigation between your pages.

We don’t want to hard-code links, because then part of the advantage of using the MVC framework disappears (and we have to construct the links ourselves, which always leaves room for error). In the navigation we find links like this:

<ul class="nav navbar-nav">
    <li><a asp-area="" asp-controller="Home" asp-action="Index">
		Home
	</a>
    </li>
    <!--   other links  -->
</ul>

The TagHelpers clearly indicate the intent of this line: a link to Home/Index is created.

So for our Reports home page we just need to fill in the area:

<li><a asp-area="Reports" asp-controller="Home" asp-action="Index">
	Reports
    </a>
</li>

This will do the trick. We have created a new (top-level) menu that will open our great Reports page. The link will be http://localhost:12345/Reports. The /Home/Index part is left out because MVC knows from its routing tables that these are default values.

Conclusion

Adding an area is slightly more complex now, but the documentation was quite clear. I will need to do this more than once, hence this post.

References

https://docs.microsoft.com/en-us/aspnet/core/mvc/controllers/areas

https://docs.microsoft.com/en-us/aspnet/core/mvc/views/tag-helpers/intro

https://docs.microsoft.com/en-us/aspnet/core/mvc/views/layout


Automating the creation of NuGet packages with different .NET versions

Introduction

I created a couple of straightforward libraries to be used in almost every project. So evidently these libraries are a good candidate for NuGet. This will decouple the libraries from the projects that they are used in. It also forces the Single Responsibility principle because every NuGet package can be used on its own, with only dependencies on (very few) other NuGet packages.

Creating the packages for one version of .NET is quite straightforward, and well-documented. But the next request was: “can you upgrade all our projects from .NET 4.5.2 to .NET 4.6.1, and later to .NET  4.7?”.

The plan

We have over 200 projects that should preferably all be compiled against the same .NET version. So opening each project, changing its version, and compiling it is not really an option…

<PropertyGroup>
  <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
  <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
  <ProjectGuid>{21E95439-7A66-4C75-ACC5-1B9A5FF4A32D}</ProjectGuid>
  <OutputType>Library</OutputType>
  <AppDesignerFolder>Properties</AppDesignerFolder>
  <RootNamespace>MyProject.Clients</RootNamespace>
  <AssemblyName>MyProject.Clients</AssemblyName>
  <TargetFrameworkVersion>v4.6.1</TargetFrameworkVersion>
  <FileAlignment>512</FileAlignment>
  <TargetFrameworkProfile />
</PropertyGroup>

image

Investigating the .csproj files I noticed that there is 1 instance of the <TargetFrameworkVersion> element that contains the .NET version. When I change it, the .NET version property in Visual Studio changes accordingly. So this is easy: using Notepad++ I replace this in all *.csproj files and recompile everything. This works but …
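If you prefer not to do this by hand, a throwaway console program can do the same bulk replacement. This is just a sketch; it assumes all projects live under C:\_Projects and that the element literally contains v4.5.2:

using System.IO;

class RetargetProjects
{
    static void Main()
    {
        // Replace the TargetFrameworkVersion in every .csproj file,
        // the same thing the "replace in files" action in Notepad++ does.
        foreach (var csproj in Directory.EnumerateFiles(@"C:\_Projects", "*.csproj", SearchOption.AllDirectories))
        {
            string text = File.ReadAllText(csproj);
            File.WriteAllText(csproj, text.Replace(
                "<TargetFrameworkVersion>v4.5.2</TargetFrameworkVersion>",
                "<TargetFrameworkVersion>v4.6.1</TargetFrameworkVersion>"));
        }
    }
}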

What about the NuGet packages?

The packages that I created work for .NET 4.5.2, but now we’re at .NET 4.6.1. So this is not optimal, and it may not even link together properly. So I want to update the NuGet packages to contain both versions. That way developers whose solutions are still at 4.5.2 will automatically use that version, and developers at 4.6.1 will get theirs. Problem solved. But how …

Can this be automated?

Creating the basic NuGet package

This is explained quite well on the nuget.org website. These are the basic steps:

Technical prerequisites

Download the latest version of nuget.exe from nuget.org/downloads, saving it to a location of your choice. Then add that location to your PATH environment variable if it isn’t already.
Note:  nuget.exe is the CLI tool itself, not an installer, so be sure to save the downloaded file from your browser instead of running it.

I copied this file to C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Tools, which is already in my PATH variable (Developer command prompt for VS2015). So now I have access to the CLI from everywhere, provided that I use the Dev command prompt of course.

So now we can use the NuGet CLI, described here.

Nice to have

https://github.com/NuGetPackageExplorer/NuGetPackageExplorer

From their website:

NuGet Package Explorer is a ClickOnce & Chocolatey application that makes it easy to create and explore NuGet packages. After installing it, you can double click on a .nupkg file to view the package content, or you can load packages directly from nuget feeds like nuget.org, or your own Nuget server.

This tool will prove invaluable when you are trying some more exotic stuff with NuGet.

It is also possible to change a NuGet package using the package explorer. You can change the package metadata, and also add content (such as binaries, readme files, …).

image

Prerequisites for a good package

An assembly (or a set of assemblies) is a good candidate to be a package when the package has as few dependencies as possible. For example, a logging package would only do logging, and nothing more. That way NuGet packages can be used everywhere, without special conditions. When dependencies are necessary, they are preferably on other NuGet packages.

Creating the package

In Visual Studio, create a project of your choice. Make sure that it compiles well.

Now open the DEV command prompt and enter

nuget spec

in the folder containing your project file. This will generate a template .nuspec file that you can use as a starting point. This is an example .nuspec file:

<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2013/05/nuspec.xsd">
  <metadata>
    <!-- The identifier that must be unique within the hosting gallery -->
    <id>Diagnostics.Logging</id>

    <!-- The package version number that is used when resolving dependencies -->
    <version>1.1.0</version>

    <!-- Authors contain text that appears directly on the gallery -->
    <authors>Gaston</authors>

    <!-- Owners are typically nuget.org identities that allow gallery
         users to early find other packages by the same owners.  -->
    <owners>Gaston</owners>

    <!-- License and project URLs provide links for the gallery -->
<!--
    <licenseUrl>http://opensource.org/licenses/MS-PL</licenseUrl>
    <projectUrl>http://github.com/contoso/UsefulStuff</projectUrl>
-->
    <!-- The icon is used in Visual Studio's package manager UI -->
<!--
    <iconUrl>http://github.com/contoso/UsefulStuff/nuget_icon.png</iconUrl>
-->
    <!-- If true, this value prompts the user to accept the license when
         installing the package. -->
    <requireLicenseAcceptance>false</requireLicenseAcceptance>

    <!-- Any details about this particular release -->
    <releaseNotes>Added binaries for .NET 4.6.1</releaseNotes>

    <!-- The description can be used in package manager UI. Note that the
         nuget.org gallery uses information you add in the portal. -->
    <description>Logging base class </description>
    <!-- Copyright information -->
    <copyright>Copyright ©2017</copyright>

    <!-- Tags appear in the gallery and can be used for tag searches -->
    <tags>diagnostics logging</tags>

    <!-- Dependencies are automatically installed when the package is installed -->
    <dependencies>
      <!--<dependency id="EntityFramework" version="6.1.3" />-->
    </dependencies>
  </metadata>

  <!-- A readme.txt will be displayed when the package is installed -->
  <!--
  <files>
    <file src="readme.txt" target="" />
  </files>
  -->
</package>

Now run

nuget pack

in your project folder, and a Nuget package will be generated for you.

Verifying the package

If you want to know if the contents of your package are correct, use Nuget Package Explorer to open your package.

image

Here you see a package that I created. It contains some metadata on the left side, and the package in 2 versions on the right side. You can use this tool to add more folders and to change the metadata. This is all good and nice, but not very automated. For example, how can we create a NuGet package like this one, containing 2 .NET versions of the libraries?

Folder organization

I wanted to separate the creation of the package from the rest of the build process. So I created a NuGet folder in my project folder.

I moved the .nuspec file into this folder to have a starting point, and then I created a batch file that solves the following problems:

  1. Create the necessary folders
  2. Build the binaries for .NET 4.5.2
  3. Build the binaries for .NET 4.6.1
  4. Pack both sets of binaries in a NuGet package

I also wanted this package to be easily configurable, so I used some variables.

The script

Initializing the variables

set ProjectLocation=C:\_Projects\Diagnostics.Logging
set Project=Diagnostics.Logging

set NugetLocation=%ProjectLocation%\NuGet\lib
set ProjectName=%Project%.csproj
set ProjectDll=%Project%.dll
set ProjectNuspec=%Project%.nuspec
set BuildOutputLocation=%ProjectLocation%\NuGet\temp

set msbuild="C:\Program Files (x86)\MSBuild\14.0\bin\msbuild.exe"
set nuget="C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Tools\nuget.exe"

The first 2 variables are the real parameters. All the other variables are built from these 2.

The %msbuild% and %nuget% variables allow running the commands easily without changing the path. Thanks to these 2 lines this script will run in any “DOS prompt”, not just in the Visual Studio Command Prompt.

Setting up the folder structure

cd /d %ProjectLocation%\NuGet
md temp
md lib\lib\net452
md lib\lib\net461
copy /Y %ProjectNuspec% lib
copy /Y readme.txt lib

image

In my batch file I don’t want to rely on the existence of a specific folder structure, so I create it anyway. I know that I can first test if a folder exists before trying to create it, but the end result will be the same.

Notice that I created Lib\Lib. The first level contains the necessary “housekeeping” files to create the package, the second level will contain the actual content that goes into the package file. The 2 copy statements copy the “housekeeping” files.

Building the project in the right .NET versions

%msbuild% "%ProjectLocation%\%ProjectName%" /t:Clean;Build /nr:false /p:OutputPath="%BuildOutputLocation%";Configuration="Release";Platform="Any CPU";TargetFrameworkVersion=v4.5.2
copy /Y "%BuildOutputLocation%"\%ProjectDll% "%NugetLocation%"\lib\net452\%ProjectDll%

%msbuild% "%ProjectLocation%\%ProjectName%" /t:Clean;Build /nr:false /p:OutputPath="%BuildOutputLocation%";Configuration="Release";Platform="Any CPU";TargetFrameworkVersion=v4.6.1
copy /Y "%BuildOutputLocation%"\%ProjectDll% "%NugetLocation%"\lib\net461\%ProjectDll%

The secret is in the /p switch

When we look at a .csproj file we see that there are <PropertyGroup> elements with a lot of child elements in them. Here is an extract:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="12.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props" Condition="Exists('$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props')" />
  <PropertyGroup>
    <!--  …   -->
    <OutputType>Library</OutputType>
    <!--  …   -->
    <TargetFrameworkVersion>v4.6.1</TargetFrameworkVersion>
    <!--  …   -->
  </PropertyGroup>

Each element under the <PropertyGroup> element is a property that can be set, typically in Visual Studio (Project settings). So compiling for another .NET version is as simple as changing the <TargetFrameworkVersion> element and executing the build.

But the /p flag makes this even easier:

%msbuild% "%ProjectLocation%\%ProjectName%" 
          /t:Clean;Build /nr:false 
          /p:OutputPath="%BuildOutputLocation%";Configuration="Release";Platform="Any CPU";TargetFrameworkVersion=v4.5.2

In this line MSBuild is executed, and the properties OutputPath, Configuration, Platform and TargetFrameworkVersion are set using the /p switch. This makes building for different .NET versions easy. You can find more information about the MSBuild switches here.

So now we are able to script the compilation of our project in different locations for different .NET versions. Once this is done we just need to package and we’re done!

cd /d "%NugetLocation%"
%nuget% pack %ProjectNuspec%

Conclusion

We automated the creation of NuGet packages with an extensible script. In the script as much as possible is parameterized so it can be used for other packages as well.

It is possible to add this to your build scripts, but be careful not to build and deploy your NuGet packages when nothing has changed in them. This is why I like to keep the script handy and run it when the packages are modified (and only then).

References

MSBuild Reference

NuGet CLI Reference
