The Kitchen sink project

So I’m not quite sure where this little blog series is going to go, but I’m going to try to develop a semi-serious distributed .NET Core microservices solution. One thing about the series: it will be the ‘kitchen sink’ project – any technology I read about, hear about or otherwise stumble across will be thrown at it, primarily for my own learning purposes but perhaps to aid other readers as well.

I’m proposing to base it on the layout of a University’s architecture, so students and their learning might be a key focus. I’ve chosen this domain as (a) it’s one I know relatively well and (b) it should be a relatively simple domain for others to understand, allowing the focus to stay on the technology.

All of the below is subject to change, but a broad outline of the kind of blog posts I intend to publish is below (I’ll change these to link to the blog posts as they go live). The series will start with a focus on developing the ecosystem and development environment – getting these in place early should hopefully lead to faster subsequent development.

  • The Start – A .NET Core Web Site (A viewer of data) and a .NET Core Web API project (The Student Record System as a repository of data?)
  • Using Source Control with Git and Github
  • .NET Core Health Checks and a Health Check Microservice to view status
  • Logging Concepts
    • Serilog
    • ELK Stack exploration
  • Testing
    • Unit Testing with MSTest
    • Selenium Testing for User Interfaces
  • Containers with Docker on Windows
    • Images and Containers
    • Multi-stage images – build and run steps
    • Specifying environment variables
    • Mounting volumes & persistent storage
    • Docker compose
  • Continuous Integration & Continuous Deployment
    • TeamCity?
    • Azure?
  • Developing the Student Record System Web API:
    • Data
      • Student Data
      • Module Data
      • Programme Data
      • Associated Student Enrolments
    • Hardcoding the data above
    • Persistent storage to SQL Express with Entity Framework Core
    • Docker and SQL Express?
  • Raising Events from the Student Record System
    • RabbitMQ (via Docker?)
    • Sending events from SRS to RabbitMQ
  • Processing Events in (new) Virtual Learning Environment API Application

Other topics that will be introduced at some point include the likes of Polly and Bundler &amp; Minifier.

Using Selenium to test across multiple browsers

When you have a set of Selenium tests, one nice thing to be able to do is run those same tests against multiple browsers. The nicest solution I’ve discovered for this is to use the [TestFixture] attribute (note: only available in NUnit, not in MSTest).

This is a 2-step process:
1. Introduce a base factory class to initialise the WebDriver with the specific browser.
2. Update the test classes to derive from the base class and add the [TestFixture] attributes.

Step 1:

So, for example, to test against Chrome and Internet Explorer 11, introduce the factory base class:

    public class WebDriverFactory
    {
        public IWebDriver _driver;

        protected WebDriverFactory(BrowserType type)
        {
            _driver = WebDriver(type);
        }

        public enum BrowserType
        {
            Chrome,
            IE11
        }

        public void Close()
        {
            // Quit ends the browser session and disposes of the driver.
            _driver.Quit();
        }

        public static IWebDriver WebDriver(BrowserType type)
        {
            IWebDriver driver = null;

            switch (type)
            {
                case BrowserType.Chrome:
                    driver = ChromeDriver();
                    break;
                case BrowserType.IE11:
                    driver = IeDriver();
                    break;
            }

            return driver;
        }

        private static IWebDriver IeDriver()
        {
            InternetExplorerOptions options = new InternetExplorerOptions();
            options.EnsureCleanSession = true;
            options.IgnoreZoomLevel = true;
            return new InternetExplorerDriver(options);
        }

        private static IWebDriver ChromeDriver()
        {
            ChromeOptions options = new ChromeOptions();
            return new ChromeDriver(options);
        }
    }
Step 2 – Update Test Cases

With this in place, alter the test classes as follows:

    [TestFixture(BrowserType.Chrome)]
    [TestFixture(BrowserType.IE11)]
    public class NUnitTests : WebDriverFactory
    {
        public NUnitTests(BrowserType browser) : base(browser)
        {
        }

        [Test]
        public void Test1()
        {
            Assert.AreEqual("", _driver.Url);
        }
    }


Stack Overflow: Testing framework on multiple browsers using selenium and NUnit

ApiController Validation in .Net Core 2.1/2.2 – Unit Testing

One of the additions in .NET Core 2.1 was the introduction of the [ApiController] attribute, which can be applied at controller level to validate the model state automatically, removing a lot of boilerplate code from the system. (NB: .NET Core 2.2 introduced the ability to set this at assembly level, e.g.

[assembly: ApiController]
namespace WebApp

This can be applied to any class, but I’ve chosen Startup.cs as a fairly obvious place for the configuration of most things.)

Before .NET Core 2.1, a lot of code would have looked like the following:

public async Task&lt;IActionResult&gt; ActionToPerform(InputModel inputModel)
{
   if (!ModelState.IsValid)
   {
      return BadRequest();
   }

   // ... the action's real work ...
}

This boilerplate code is likely to be scattered throughout the code base.

With this code in place, the invalid state of the model can be tested using unit tests as follows (NB: the error must be set manually, as model binding won’t happen without the middleware pipeline):

var controller = new StudentController();
var model = new InputModel();

// Force the model state to be invalid.
controller.ModelState.AddModelError(string.Empty, "Test Error");

Assert.IsFalse(controller.ModelState.IsValid, "Model state has remained valid.");

// Test for BadRequest being returned.
var result = await controller.ActionToPerform(model);
BadRequestResult badRequestResult = result as BadRequestResult;

Assert.IsNotNull(badRequestResult, "Wrong type returned.");

With this test in place, and hopefully passing, now comes the opportunity to introduce the [ApiController] attribute. In theory that means the boilerplate code can be removed, and the test will continue to pass.

Unfortunately that’s not the case. As with model binding, I’m assuming the automatic validation happens elsewhere in the pipeline, so it isn’t triggered in a unit test. Indeed, it can be proved to be working outside of the unit tests by using Postman to trigger a request and seeing a BadRequest (400) returned – provided an invalid object is supplied.

As such, the unit test needs to change to check for the presence of [ApiController] – either on the controller itself, or at assembly level, depending on where it’s been declared. Here is the test for assembly level, on the Startup class:

var customAttributes = typeof(Startup).Assembly.GetCustomAttributes(true);

var entry = customAttributes.FirstOrDefault(a => a.GetType().Name == "ApiControllerAttribute");

Assert.IsNotNull(entry, "ApiControllerAttribute not found at assembly level.");
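For completeness, the controller-level equivalent is much the same – here’s a sketch using the StudentController from the earlier test (the assertion message is my own):

```
var attribute = typeof(StudentController)
    .GetCustomAttributes(true)
    .FirstOrDefault(a => a.GetType().Name == "ApiControllerAttribute");

Assert.IsNotNull(attribute, "ApiControllerAttribute not found on the controller.");
```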



Connecting to an Oracle database from Microsoft .NET

Work has changed for me slightly over the past few years, and I’ve moved into the world of project management. However, I’ve now got a chance to get my hands ‘dirty’ again with some .NET coding – it’s been a few years since I’ve done any in anger – so I’m trying to refresh myself on the current platform and its technology. Expect a few .NET 101 posts to follow this one.

We’re looking seriously at adopting .NET Core as we’ve got a great greenfield project on the horizon. However, much of our infrastructure is based on Oracle, and without a supported .NET Core route I think we might have to fall back to .NET Framework 4.6.X, at least in the short term.

This blog post is to look at how .NET apps can connect to, specifically, Oracle Databases.


You need to install Oracle Data Access Components (ODAC) with Developer Tools for Visual Studio (from here). Extract the zip, run setup.exe and follow the defaults.

1. Connecting via a Windows Console App (.NET Framework 4.6.1)

  • Open Visual Studio 2015, and click File->New->Project.
  • Then select Visual C#->Windows and Console Application, confirming that .NET Framework 4.6.1 is selected from the drop-down menu.


  • Once the project has been created, add 2 new packages via NuGet (Tools->NuGet Package Manager->Manage NuGet Packages for Solution…)
  • Click Browse from the top menu bar, and then search for EntityFramework by Microsoft (it’s likely to be listed near the top anyway). Select and install.
  • Then search for ODP and install the ‘Oracle.ManagedDataAccess.EntityFramework’ by Oracle.
  • Next, close down the NuGet Package Manager, right click on the project and select Add New Item…
  • From here select Data and ADO.NET Entity Data Model
  • Then select ‘Code First from Database’
  • To create a new connection, I’ve found adding the details from the installed tnsnames.ora file to be the best way – I’m not in a position to share mine, but there should be examples out there on the Internet.
  • Select the appropriate connection, enter the username and password and click Next.
  • I then select all the tables that are available in the database.
  • Click Finish. This will generate a number of files that should relate primarily to the tables you’ve selected.
  • Then enter the following code into your program (note: the name of the database context may differ depending on what you entered in the ADO.NET wizard – I think it defaults to Model; my example below uses GLJ). The syntax for the query you’re running may also differ depending on your data columns.
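The listing itself went roughly as follows (a sketch from memory: GLJ is the context name from my ADO.NET model, while MODULES and MODULE_CODE are hypothetical stand-ins for the entities the wizard generated – yours will depend on the tables you selected):

```
using System;
using System.Linq;

namespace OracleConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            // GLJ is the DbContext generated by the 'Code First from
            // Database' wizard.
            using (var db = new GLJ())
            {
                // MODULES maps to one of the tables selected in the
                // wizard; adjust the query to your own schema.
                var modules = db.MODULES
                                .OrderBy(m => m.MODULE_CODE)
                                .ToList();

                foreach (var m in modules)
                {
                    Console.WriteLine(m.MODULE_CODE);
                }
            }

            Console.ReadLine();
        }
    }
}
```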

Running this provides a list of modules that we have stored in a test database from Oracle.

2. Connecting to Oracle from a .NET Core Application

I’m really not expecting this to work – I’m not aware of any announcement about Oracle and .NET Core support at this time; certainly my Google searches haven’t been that fruitful. The closest I’ve got is this discussion on the OTN, and looking back over the @OracleDotNet Twitter feed I can’t see anything having been announced.

Anyway, back to Visual Studio 2015 and this time New->Project. And then Visual C#->.NET Core and Console Application (.NET Core).

Then I used NuGet to add the 2 packages as per the previous guide for a standard Console App – but even trying to add the Microsoft EntityFramework package at this point produces a compiler error. However, I can search for ‘Microsoft.EntityFrameworkCore’ and get a library that seems to be fine. Trying to install the Oracle ODP Entity Framework package, though, gives the expected compiler issue.

At this point it’s probably worth abandoning this attempt.

3. Connecting to Oracle from a .NET Core App on .NET Framework

So for this attempt I’m going to try a web application, because something that interests me (but I haven’t yet got my head round) is the third option listed for ASP.NET Core Web Application (.NET Framework).

I selected Web Application to get something running quickly. I then added the Microsoft Entity Framework Core library via NuGet, as mentioned in Step 2, and the ODP Oracle Entity Framework (as mentioned in Step 1 – this time without issue).

However, on clicking Add New Item, the Data option used previously isn’t on show.

4. Connecting to Oracle from an ASP.NET MVC App on .NET Framework

Switching to create a new ASP.NET MVC application on the .NET Framework and following the steps above (but using Entity Framework rather than the .NET Core version) does produce the option to add a Data object – and following the steps detailed in Stage 1 seems to work.

The next step, I guess, would be to try the .NET Core options with a MS SQL Server database and see what the support is like. For now, though, it looks like we’ll still be on .NET 4.6.X until some Oracle DB support comes to .NET Core.


Java 8 and Lambdas

A little bit late to the party with this one. I’ve been keeping up to date with the Java 8 (and 9) changes, but our enterprise servers tend not to be updated early on (understandably).

However, Oracle ran a Java 8 MOOC recently (they’ve since made the videos available via YouTube). I’m particularly interested in the parallel streams as we have a long running process that may benefit from the new syntax.

An unscientific test on my own laptop, comparing the standard Java loop (an example from the MOOC):

double highestScore = 0d;
for (Student s : students) {
   if (s.getGradYear() == 2011) {
      if (s.getScore() > highestScore) {
         highestScore = s.getScore();
      }
   }
}
Takes approximately 469 ms to run.

However, using the new Lambda syntax:

highestScore = students.parallelStream()
                  .filter(s -> s.getGradYear() == 2011)
                  .mapToDouble(s -> s.getScore())
                  .max()
                  .getAsDouble();
This comes down to 272 ms.
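For anyone wanting to try the comparison themselves, here’s a self-contained sketch of both versions (the Student class and sample data are my own stand-ins for the MOOC’s, and orElse guards against the no-match case):

```java
import java.util.Arrays;
import java.util.List;

public class HighestScore {

    // Stand-in for the MOOC's Student class.
    static class Student {
        private final int gradYear;
        private final double score;

        Student(int gradYear, double score) {
            this.gradYear = gradYear;
            this.score = score;
        }

        int getGradYear() { return gradYear; }
        double getScore() { return score; }
    }

    // Classic loop version.
    static double loopHighest(List<Student> students) {
        double highestScore = 0d;
        for (Student s : students) {
            if (s.getGradYear() == 2011) {
                if (s.getScore() > highestScore) {
                    highestScore = s.getScore();
                }
            }
        }
        return highestScore;
    }

    // Parallel stream version; orElse(0d) covers an empty match.
    static double streamHighest(List<Student> students) {
        return students.parallelStream()
                .filter(s -> s.getGradYear() == 2011)
                .mapToDouble(Student::getScore)
                .max()
                .orElse(0d);
    }

    static List<Student> sampleStudents() {
        return Arrays.asList(
                new Student(2010, 55.0),
                new Student(2011, 72.5),
                new Student(2011, 91.0),
                new Student(2012, 98.0));
    }

    public static void main(String[] args) {
        List<Student> students = sampleStudents();
        System.out.println(loopHighest(students));   // 91.0
        System.out.println(streamHighest(students)); // 91.0
    }
}
```

Both versions should agree on the result; only the timing differs with a large enough data set.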

As I say, unscientific and a very simple use case. It’ll be interesting to see what happens in the real world though.

Oracle SQL Loader investigations

Today I’ve been exposed to the world of SQL Loader. There is a process that runs on a daily basis that does the following:

1. Parse a text file and copy to a remote server
2. Use SQL Loader to append the contents of this file to a database table
3. Run a final process to concatenate 2 fields in the database after the insert has completed.

Step 3 is an unnecessary overhead that was introduced a while back; steps 1 or 2 are better places for the fix to be implemented. Having never used SQL Loader, it looked the more interesting learning experience, and here are some quick findings.

To read a value into a temporary store, use the BOUNDFILLER keyword (apparently supported in Oracle 9+). However, with our implementation, you then continue to process the columns in order, before finally coming back to insert the temp values into their relevant columns.

An example:

The text file looks roughly like this:


And the database table has the following columns


The SQL Loader syntax to concatenate file entry [0] and file entry [3] is as follows:

ID “:tempID”,
CODE “:tempCode||:tempID”

I found plenty of examples on the Internet, but few made reference to the fields needing to be the final ones inserted.
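Pulling the pieces together, a complete control file might look something like the following (a sketch: the file name, table and column names are hypothetical stand-ins since I can’t share ours, and TRAILING NULLCOLS is my own addition to let the derived columns sit past the end of the data):

```
-- Hypothetical control file.  The two BOUNDFILLER entries capture
-- file entries [0] and [3]; the derived columns sit at the end of
-- the column list, as the expressions only worked for me when these
-- were the final fields.
LOAD DATA
INFILE 'daily_feed.txt'
APPEND
INTO TABLE STUDENT_CODES
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(
  tempCode   BOUNDFILLER,          -- file entry [0], held but not loaded directly
  NAME       CHAR,                 -- file entry [1]
  DEPT       CHAR,                 -- file entry [2]
  tempID     BOUNDFILLER,          -- file entry [3]
  ID         ":tempID",            -- temp values inserted last
  CODE       ":tempCode||:tempID"
)
```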

Unfinished Business – Part 1

Baldock Beast. It’s a local half marathon, virtually on my doorstep. The 3rd running of the event happened last weekend (15/2), and I recorded my third DNS (Did Not Start).

  • 2013 – My wife was pregnant, but it was prior to scans so no-one knew. My running buddy had just become a dad, and his son had a bout of measles. Measles and pregnant ladies don’t mix. I couldn’t think of a way to explain not wanting to run with my mate, and with a slight hint of a cold decided to pull the ‘flu flag’. DNS 1.
  • 2014 – I headed into London for a Cage The Elephant / Foals gig. By 6pm the beers and shots were flowing, and I knew I’d be in no fit state the next day. I wasn’t. I may have sobered up by the Monday. Just. DNS 2.
  • 2015 – I’ve been slightly limping for a year or so. My right knee. It hasn’t interfered with training, but 2 people this year have asked if I was injured – when I felt fine (I think I’d got used to the limp). On the Thursday before this year’s race I heard those dreaded words from a physio – ‘I’d advise you don’t run’. DNS 3.

I’ll be entering again next year. Let’s hope I make the start line this time round.

PS. The knee is nothing too much to worry about. A tight tendon that needs loosening off, a slight alteration to my running style and perhaps a change of trainers. The latest diagnosis was for about 6-8 weeks off. In better news, I can continue to cycle and swim. No CrossFit for the time being, though.