In this first article I’m going to use a lot of scaffolding code from Microsoft to get up and running very quickly. I’m going to create a Web Site application (based upon MVC) and a separate Web API application. I’m going to keep them in a single solution for now, just to further simplify the learning curve. They may be broken out at a later stage once we start to consider automated testing techniques.

I’m going to use Visual Studio (the 2019 version in particular) and .NET Core 2.2. I’ll hopefully migrate to .NET Core 3.0 as soon as possible after its release. However, there is no reason why these projects can’t be created using the .NET Core command line tools, and I’ll link to relevant articles showing how to do this at the end of this post. I’m also not aware of anything here that is specific to .NET Core 2.2, so things may be okay with earlier versions (but please let me know if you find anything and I’ll amend).

The Web Application

At this stage I’m not sure what role this Web Application is going to fulfill in the ecosystem, so I intend to use the default scaffolded MVC website project from Microsoft. This should allow flexibility going forward; alternatively, it may get fully replaced in the future. For now its sole purpose is to display data from the other microservices that will be introduced.

1. Start Visual Studio and select ‘Create New Project’.
2. Select ‘ASP.NET Core Web Application’ and click ‘Next’.

[Screenshot: createnewproject1-2]

3. Select a suitable name (I’ve gone for ViewerWebApplication) and I’ve placed it under a folder called UniversityExample.
4. Then select Web Application (Model-View-Controller) and click Create.

[Screenshot: createnewproject1-4]

Once Visual Studio has created the application, hit F5 (or the Green play button with IIS Express) and check that the website runs. It should look similar to below:

[Screenshot: createnewproject1-5]

We’ll explore some of the code that has been generated as we start to bring in new functionality later in the series.

The Web API Application

Next, in the same solution, we’re going to add the out-of-the-box Web API application.

1. Right click on the solution and click ‘Add’ and ‘New Project’.
2. Again select ASP.NET Core Web Application and click Next.
3. For project name, call it ‘StudentRecordSystem’ and click Create.
4. On the next screen click ‘API’ and then click ‘Create’.

When launching applications, Visual Studio will assign them random ports. To pin the port for the Web API application just created, expand ‘Properties’ and click on the ‘launchSettings.json’ file. Change the bottom section to specify a port for the https:// address and remove the http:// entry altogether. Your new file should look similar to the following:

{
  "$schema": "http://json.schemastore.org/launchsettings.json",
  "iisSettings": {
    "windowsAuthentication": false, 
    "anonymousAuthentication": true, 
    "iisExpress": {
      "applicationUrl": "http://localhost:62421",
      "sslPort": 44382
    }
  },
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "launchUrl": "api/values",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "StudentRecordSystem": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "api/values",
      "applicationUrl": "https://localhost:5100",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}

Finally, rather than using Visual Studio to run this app, we’ll use the command line to run it. Right click on the project ‘StudentRecordSystem’ and find the entry ‘Open Folder in File Explorer’. Once Windows Explorer opens, type cmd in the address bar and press Enter. This will launch a command line window at the directory specified.

On the command line type ‘dotnet run’ and wait for some output telling you the app is running, similar to below:

[Screenshot: createnewproject2-1]

Now head to a browser and enter the address shown in the output, e.g. https://localhost:5100/api/values, and you should get the following output:

[Screenshot: createnewproject2-2]

Don’t worry about any security certificate warnings just for now.

Nothing too exciting just yet, but next we’ll look to introduce a new API controller which will provide some hardcoded student information, which we’ll then display on the Web Site application, creating a connection between the 2 applications.

The Kitchen sink project

So I’m not quite sure where this little blog series is going to go, but I’m going to try to develop a semi-serious distributed .NET Core microservices solution. One thing about the series is that it will be the ‘kitchen sink’ project: any technology I read about, hear about or otherwise encounter will be thrown at it, primarily for my own learning purposes, but perhaps to aid other readers as well.

I’m proposing to base it on the architecture of a University, so students and their learning might be a key focus. I’ve chosen this domain as (a) it’s one I know relatively well and (b) for others it should be a relatively simple domain to understand, allowing the focus to stay on the technology.

The below is all subject to change, but a broad outline of the kind of blog posts I intend to publish follows (and I’ll change these to link to the blog posts as they go live). The series will start with a focus on developing the ecosystem and development environment; by getting these in place early, this should hopefully lead to faster subsequent development.

  • The Start – A .NET Core Web Site (A viewer of data) and a .NET Core Web API project (The Student Record System as a repository of data?)
  • Using Source Control with Git and Github
  • .NET Core Health Checks and a Health Check Microservice to view status
  • Logging Concepts
    • Serilog
    • ELK Stack exploration
  • Testing
    • Unit Testing with MSTest
    • Selenium Testing for User Interfaces
  • Containers with Docker on Windows
    • Images and Containers
    • Multi-stage images – build and run steps
    • Specifying environment variables
    • Mounting volumes & persistent storage
    • Docker compose
  • Continuous Integration & Continuous Deployment
    • TeamCity?
    • Azure?
  • Developing the Student Record System Web API:
    • Data
      • Student Data
      • Module Data
      • Programme Data
      • Associated Student Enrolments
    • Hardcoding the data above
    • Persistent storage to SQL Express with Entity Framework Core
    • Docker and SQL Express?
  • Raising Events from the Student Record System
    • RabbitMQ (via Docker?)
    • Sending events from SRS to RabbitMQ
  • Processing Events in (new) Virtual Learning Environment API Application

Other topics that will be introduced at some point include the likes of Polly and Bundler & Minifier.

Using Selenium to test across multiple browsers

When you have a set of Selenium tests, one nice thing to be able to do is to run those same tests against multiple browsers. The nicest solution I’ve discovered for this is to use the [TestFixture] attribute (note: only available in NUnit, not in MSTest).

This is a 2-step process:
1. Introduce a base factory class to initialise the web driver with the specific browser
2. Update the test classes to derive from the base class and add the TestFixture attributes.

Step 1:

So, for example, to test against Chrome and Internet Explorer 11, introduce the Factory Base Class:

    public class WebDriverFactory
    {
        protected IWebDriver _driver;

        protected WebDriverFactory(BrowserType type)
        {
            _driver = WebDriver(type);
        }

        public enum BrowserType
        {
            IE11,
            Chrome
        }

        [TearDown]
        public void Close()
        {
            // Quit (rather than Close) ends the session and disposes the driver process.
            _driver.Quit();
        }

        public static IWebDriver WebDriver(BrowserType type)
        {
            IWebDriver driver = null;

            switch (type)
            {
                case BrowserType.Chrome:
                    driver = ChromeDriver();
                    break;

                case BrowserType.IE11:
                    driver = IeDriver();
                    break;
            }

            return driver;
        }

        private static IWebDriver IeDriver()
        {
            InternetExplorerOptions options = new InternetExplorerOptions();
            options.EnsureCleanSession = true;
            options.IgnoreZoomLevel = true;
            IWebDriver driver = new InternetExplorerDriver(options);
            return driver;
        }

        private static IWebDriver ChromeDriver()
        {
            ChromeOptions options = new ChromeOptions();
            IWebDriver driver = new ChromeDriver(options);
            return driver;
        }
    }

Step 2 – Update Test Cases

With this in place, alter the test classes to:


    [TestFixture(WebDriverFactory.BrowserType.Chrome)]
    [TestFixture(WebDriverFactory.BrowserType.IE11)]
    public class NUnitTests : WebDriverFactory
    {
        public NUnitTests(BrowserType browser) : base(browser)
        {

        }

        [Test]
        public void Test1()
        {
            _driver.Navigate().GoToUrl("https://www.google.co.uk/");
            Assert.AreEqual("https://www.google.co.uk/", _driver.Url);
        }
    }

References

Stack Overflow: Testing framework on multiple browsers using selenium and NUnit

ApiController Validation in .Net Core 2.1/2.2 – Unit Testing

One of the additions in .NET Core 2.1 was the [ApiController] attribute, which can be applied at controller level to validate the Model State automatically, removing a lot of boilerplate code from the system. NB: .NET Core 2.2 introduced the ability to set this at assembly level, e.g.


[assembly: ApiController]
namespace WebApp

This can be applied to any class, but I’ve chosen Startup.cs as a fairly obvious place for the configuration of most things.

Before .NET Core 2.1, a lot of code would have looked like the following:


public async Task<IActionResult> ActionToPerform(InputModel inputModel)
{
   if (!ModelState.IsValid)
   {
      return BadRequest();
   }

   // ... perform the action and return a success result ...
   return Ok();
}

This boilerplate code is likely to be scattered across the code base.

With this code in place, the invalid state of the model can be tested using unit tests as follows (NB: the error must be set manually, as model binding won’t happen without the middleware pipeline):


// Inside an async test method; 'model' is an (invalid) instance of InputModel.
var controller = new StudentController();

// Force the model state to be invalid.
controller.ModelState.AddModelError(string.Empty, "Test Error");

Assert.IsFalse(controller.ModelState.IsValid, "Model state has remained valid.");

// Test for BadRequest being returned.
var result = await controller.ActionToPerform(model);
var badRequestResult = result as BadRequestResult;

Assert.IsNotNull(badRequestResult, "Wrong type returned.");

With this test in place, and hopefully passing, now comes the opportunity to introduce the ApiController attribute. In theory that means the boilerplate code can be removed and the test will continue to pass.

Unfortunately that’s not the case. As with the model binding, I’m assuming the automatic validation happens elsewhere in the pipeline. Indeed, it can be proved to be working outside of the unit tests by using Postman to trigger a request with an invalid object and seeing a BadRequest (400) returned.

As such, the unit test needs to change to check for the presence of ApiController, either on the controller itself or at assembly level, depending on where it’s been declared. Here is the test for assembly level, on the Startup class:

var customAttributes = typeof(Startup).Assembly.GetCustomAttributes(true);

var entry = customAttributes.FirstOrDefault(a => a.GetType().Name == "ApiControllerAttribute");

Assert.IsNotNull(entry);

References:

https://alenjalex.github.io/dev/dev/Asp.Net-Core-ModelState-Validation-Using-UnitTest/

https://docs.microsoft.com/en-us/aspnet/core/mvc/controllers/testing?view=aspnetcore-2.2

https://docs.microsoft.com/en-us/aspnet/core/release-notes/aspnetcore-2.1?view=aspnetcore-2.2

https://docs.microsoft.com/en-us/aspnet/core/web-api/index?view=aspnetcore-2.2

Connecting to an Oracle database from Microsoft .NET

Work has changed for me slightly over the past few years, and I’ve moved into the world of Project Management. However, I’ve now got a chance to get my hands ‘dirty’ with some .NET coding again. It’s been a few years since I’ve done any in anger, so I’m trying to refresh myself on the current platform and its technology. Expect a few .NET 101 posts to follow this one.

We’re looking seriously at adopting .NET Core, as we’ve got a great greenfield project on the horizon. However, much of our infrastructure is based on Oracle, and without a supported .NET Core route I think we might have to fall back to .NET Framework 4.6.X, at least in the short term.

This blog post is to look at how .NET apps can connect to, specifically, Oracle Databases.

Pre-Requisites

You need to install Oracle Data Access Components (ODAC) with Developer Tools for Visual Studio (from here). Extract the zip, and click setup.exe and follow the defaults.

1. Connecting via a Windows Console App (.NET Framework 4.6.1)

  • Open Visual Studio 2015, and click File->New->Project.
  • Then select Visual C#->Windows and Console Application, confirming that .NET Framework 4.6.1 is selected from the drop-down menu.

    [Screenshot]

  • Once the project has been created, add 2 new packages via NuGet (Tools->NuGet Package Manager->Manage NuGet Packages for Solution…)
    [Screenshot]
  • Click Browse from the top menu bar, and then search for EntityFramework by Microsoft (it’s likely to be listed near the top anyway). Select and install.
  • Then search for ODP and install the ‘Oracle.ManagedDataAccess.EntityFramework’ by Oracle.
  • Next, close down NuGet Package Manager and right click on the project and Add New Item…
  • From here select Data and ADO.NET Entity Data Model
    [Screenshot]
  • Then select ‘Code First from Database’
  • To create a new Connection, I’ve found adding the details from the installed tnsnames.ora file to be the best way. I’m not in a position to share mine, but there should be examples out there on the Internet.
  • Select the appropriate connection, enter the username and password and click Next.
  • I then select all the tables that are available in the database.
  • Click Finish. This will generate a number of files, that should relate primarily to the tables you’ve selected.
  • Then enter the following code into your program (note: the name of the database context may alter depending on what you enter in the ADO.NET wizard; I think it defaults to Model, and my example below uses GLJ). The syntax for the query you’re running may also alter depending on your data columns.
    [Screenshot: example query code]

Running this provides a list of modules that we have stored in a test database from Oracle.

2. Connecting to Oracle from a .NET Core Application

I’m really not expecting this to work – I’m not aware of any announcement about Oracle and .NET Core support at this time – certainly my Google searches haven’t been that fruitful. The closest I’ve got is this discussion on the OTN. And looking back over the @OracleDotNet twitter feed I can’t see anything having been announced.

Anyway, back to Visual Studio 2015, and this time New->Project, then Visual C#->.NET Core and Console Application (.NET Core).

Then I used NuGet to try to add the 2 packages as per the previous guide for a standard Console App; even trying to add the Microsoft EntityFramework package at this point produces an error. However, I can search for ‘Microsoft.EntityFrameworkCore’ and get a library that seems to be fine. Trying to install the Oracle ODP Entity Framework package, though, gives the expected compatibility error.

At this point it’s probably worth abandoning this attempt.

3. Connecting to Oracle from a .NET Core App on .NET Framework

So for this attempt I’m going to try a web application, because something that interests me (but I haven’t yet got my head round) is the third option listed for ASP.NET Core Web Application (.NET Framework).
[Screenshot]

I select Web Application to get something running quickly. I then added the Microsoft Entity Framework Core Library, via NuGet as mentioned in Step 2. I then also added the ODP Oracle Entity Framework (as mentioned in Step 1, this time without issue).

However, on clicking Add New Item, the Data option previously used isn’t on show.

4. Connecting to Oracle from ASP.NET MVC on .NET Framework

Switching to create a new ASP.NET MVC application on the .NET Framework and following the steps above (but using Entity Framework rather than the .NET Core version) does produce the option to add a Data object, and following the steps detailed in Stage 1 seems to work.

The next step I guess for this would be to try the dotnet core options with a MS SQL Server database and see what the support is like. However, it looks like we’ll still be on .NET 4.6.X until some Oracle DB support comes to dotnet core.

Related Reading:

https://csharp.today/entity-framework-6-database-first-with-oracle/

https://docs.efproject.net/en/latest/platforms/aspnetcore/existing-db.html

https://community.oracle.com/thread/3903545

Java 8 and lambdas

A little bit late to the party with this one. I’ve been keeping up to date with the Java 8 (and 9) changes, but our enterprise servers tend not to be updated early on (understandably).

However, Oracle ran a Java 8 MOOC recently (they’ve since made the videos available via YouTube). I’m particularly interested in the parallel streams as we have a long running process that may benefit from the new syntax.

An unscientific test on my own laptop comparing the standard Java loop (an example from the MOOC):

for (Student s : students) {
   if (s.getGradYear() == 2011) {
   if (s.getScore() > highestScore) {
         highestScore = s.getScore();
      }
   }
}

Takes approximately 469 ms to run.

However, using the new Lambda syntax:

highestScore = students.parallelStream()
                  .filter(s -> s.getGradYear() == 2011)
                  .mapToDouble(s -> s.getScore())
                  .max().getAsDouble();

This comes down to 272 ms.

As I say, it’s unscientific and a very simple use case. It will be interesting to see what happens in the real world, though.
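If you want to try the comparison yourself, here’s a self-contained sketch of both versions. The Student class below is a stand-in I’ve made up to match the MOOC snippet, and the years and scores are invented sample data, so treat it as an illustration rather than the MOOC’s actual benchmark:

```java
import java.util.ArrayList;
import java.util.List;

public class HighestScore {

    // Minimal stand-in for the MOOC's Student class.
    static class Student {
        private final int gradYear;
        private final double score;

        Student(int gradYear, double score) {
            this.gradYear = gradYear;
            this.score = score;
        }

        int getGradYear() { return gradYear; }
        double getScore() { return score; }
    }

    public static void main(String[] args) {
        List<Student> students = new ArrayList<>();
        students.add(new Student(2010, 55.0));
        students.add(new Student(2011, 72.5));
        students.add(new Student(2011, 91.0));
        students.add(new Student(2012, 98.0)); // ignored: wrong graduation year

        // Classic loop version, as in the first snippet.
        double highestScore = Double.NEGATIVE_INFINITY;
        for (Student s : students) {
            if (s.getGradYear() == 2011) {
                if (s.getScore() > highestScore) {
                    highestScore = s.getScore();
                }
            }
        }
        System.out.println("Loop: " + highestScore);

        // Parallel stream version, as in the second snippet.
        double streamHighest = students.parallelStream()
                .filter(s -> s.getGradYear() == 2011)
                .mapToDouble(Student::getScore)
                .max()
                .getAsDouble();
        System.out.println("Stream: " + streamHighest);
    }
}
```

Both versions print 91.0 here; wrap each in System.nanoTime() calls over a much larger list to reproduce a rough timing comparison.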

Oracle SQL Loader investigations

Today I’ve been exposed to the world of SQL Loader. There is a process that runs on a daily basis that does the following:

1. Parse a text file and copy it to a remote server
2. Use SQL Loader to append the contents of this file to a database table
3. A final process concatenates 2 fields in the database after the insert has completed

Step 3 is an unnecessary overhead that was introduced a while back; steps 1 and 2 are the better areas for the fix to be implemented. Having never used SQL Loader, it looked the more interesting learning experience, and here are some quick findings.

To read a value into a temporary store, use the BOUNDFILLER keyword (apparently supported in Oracle 9+). With our implementation, though, the fields must continue to be processed in order, before finally coming back to insert the temp values into their relevant columns.

An example:

The text file looks roughly like this:

1[TAB]A[TAB]MODULE[TAB]0LTD0023[TAB]25/03/15[TAB]Username

And the database table has the following columns

ID
SEMESTER
TYPE
CODE
ACCESS_DATE
USERNAME

The SQL Loader field list to concatenate File Entry [0] and File Entry [3] is as follows:

(
tempID BOUNDFILLER,
SEMESTER,
TYPE,
tempCode BOUNDFILLER,
ACCESS_DATE,
USERNAME,
ID ":tempID",
CODE ":tempCode||:tempID"
)
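To make the intended transformation concrete, here’s the same parse-and-concatenate logic as a quick sketch (Java used purely for illustration; the sample line is the one from above, and Oracle’s || operator is string concatenation):

```java
public class ConcatDemo {
    public static void main(String[] args) {
        // The sample line from the text file, tab-separated as described above.
        String line = "1\tA\tMODULE\t0LTD0023\t25/03/15\tUsername";
        String[] fields = line.split("\t");

        String tempId = fields[0];   // held by BOUNDFILLER tempID
        String tempCode = fields[3]; // held by BOUNDFILLER tempCode

        // CODE ":tempCode||:tempID" - concatenate the code with the id.
        String code = tempCode + tempId;

        System.out.println("ID=" + tempId);
        System.out.println("CODE=" + code);
    }
}
```

So for the sample row, ID is loaded as 1 and CODE becomes 0LTD00231.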

I found plenty of examples on the Internet, but few made reference to these derived fields needing to be the final ones in the list.
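For completeness, a full control file built around the field list above might look something like the following sketch. The file name, table name and date mask here are my own invented stand-ins (the mask matches the 25/03/15 format of the sample line), so adjust them to your own schema:

```
LOAD DATA
INFILE 'access_log.txt'
APPEND
INTO TABLE module_access
FIELDS TERMINATED BY X'09'
TRAILING NULLCOLS
(
  tempID   BOUNDFILLER,
  SEMESTER,
  TYPE,
  tempCode BOUNDFILLER,
  ACCESS_DATE DATE "DD/MM/RR",
  USERNAME,
  ID   ":tempID",
  CODE ":tempCode||:tempID"
)
```

Note again that the ID and CODE expressions sit at the end of the field list, after all the positional fields have been consumed.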