Performance Testing with JMeter 2.9 Book Review

Disclaimer: I was provided with a copy of the book free of charge in exchange for a review. The views expressed below are mine.


I’ve blogged previously about my love for the simplicity of JMeter, but also acknowledged my lack of in-depth know-how, and it’s something I should really get to grips with. I hoped that this book would take me to the next level with JMeter, and indeed the book arrived at a very timely stage for a project I’m currently working on.

The book begins with a very good introduction into why you’d want to use JMeter (or similar tools) in the first place, and presents a scenario where the benefits of its use can be seen. It then proceeds to guide you through the installation of JMeter. All good so far. Unfortunately, it then starts to go into what I’d refer to as more advanced topics before using the application for the first time. These topics deserve a mention, but perhaps appear in the wrong place in the book. However, glossing over them and making a mental note of where they exist for future reference seems to suffice.

The book did, however, present a few nuggets of information for me, broadly split into the following categories:

  • How to record a test plan to simulate proper use of a website
  • The session / authorisation components
  • The distributed testing model to overcome local machine performance issues
  • The ability to directly test a database connection

For these reasons alone it has proved a very worthwhile read. The book also goes into detail on a number of other topics / areas, many unfortunately not relevant to me at this moment, but interesting nonetheless, and they re-confirmed my initial impression that JMeter is a very powerful tool with the flexibility to accommodate many situations. I’ve made a mental note of the topics covered and will no doubt refer back to this book to exploit them should the need arise on future projects.

Whilst the book is very good overall, there are a couple of weaknesses that need to be highlighted. I’ve bought a number of titles in the past from PacktPub, and they’ve lacked the final polish that other publishers appear to have. This is also apparent in this book – indeed the first step-by-step exercise could be set out so much more clearly with just a little thought into the layout and numbering of the steps. At one point in another exercise I found myself referring to videos on YouTube for clarity in order to overcome the problem I was facing. However, whilst this could be classed as a weakness, some (including myself) may see it as an advantage – if everything was plain sailing, would we really be engaging our minds and learning?

So, all-in-all, I thought it a very good book. I’ve certainly learnt a lot about a very useful tool, and it’s encouraged me to make more use of it on current and future projects. I now feel in a position to exploit its usefulness even further than the simple tests I had been running previously. Indeed, this week I think I’ve used JMeter more times and to better effect than I have since my original JMeter blog post back in 2010.


Going Paperless

Last year I thought about making a New Year’s resolution to go paperless. The time it took to write that previous sentence is probably longer than the effort I dedicated to that task last year.

But this year I’m going to give it a shot. Not really as a resolution, but more because I used Evernote to handle my ToDo List and attempted a new way of doing things (FWIW I used my own take on GTD, then tried to be more expressive with TSW, which didn’t really work for me), which left everything in a bit of chaos. Whilst tidying Evernote up, I decided to Google around and re-read some material about going paperless.

So as part of my tidy up of my To Do List, I’ve decided to look at how I use Evernote and attempt to basically put everything in there from now on.

Setting Up Evernote

1. !Inbox default notebook. Everything will go here. Any email requiring more than 5 minutes’ work will end up here for re-evaluation. Any photos I take from my phone. Any notes I take on my phone.

2. !Journal – This is my record of what happens each day. I start with the 3 things I want to accomplish that day (hat tip to: Getting Results the Agile Way). I also do my best to summarise what happened, and record my daily exercise routine.

3. Then 3 Notebook Stacks:

  • Personal Life (contains any personal documentation, car stuff, exercise articles etc…)
  • Masters Degree (with a separate notebook for each module and my dissertation)
  • Work (separate notebook for each project, conferences and a general folder)

4. Finally a ToDo notebook, with an individual note for the next action. I also use tags here to essentially break tasks into those relating to Home, Work or my Masters.

Getting ‘Stuff’ In


As per the GTD guide, any email to any of my accounts gets the following treatment:

  • Resolve in less than 5 minutes, now – Do It Now (and then decide whether to delete or file. If I may require it in future, I email it to Evernote).
  • Defer It – Send it straight on to someone else.
  • Don’t Do It – Delete it.
  • Do It Later – Send it to Evernote !Inbox notebook to classify and sort out.

There are 2 ways I do this. Evernote has an excellent MS Outlook plug-in – so any work emails get a click of the plug-in and they get transferred to Evernote. Other accounts are web-based, and I simply forward them to my Evernote address (Note: create a contact called Evernote). For a couple of my less frequently used accounts, I’m considering setting up auto-forwarding.


Any paper I receive through the day (and to be fair, it’s not much) goes through a similar process to email. Bin it (well, recycle it), do it, or send it to the Evernote Inbox for later. To help with this, I’ve got 2 applications on my Android phone. CamScanner works really well at capturing high-quality, readable images – so anything important I capture via that route. It integrates nicely with Evernote.

If the quality isn’t required, then my Evernote app comes with a widget to quickly capture an image and send to Inbox.


The Evernote widget also has a quick link to create a text note or an audio note. These notes also capture the GPS location, so (should I need to) I’d be able to figure out where I was when capturing the note.

All That Old Paper

Well, nope. I haven’t really got time to go through it all. I also don’t own a scanner (yet!). My plan at this point is to capture stuff as it arrives; if I go hunting for anything old, I’ll capture the located document at that point.

That’s It

So here we are. I’m not really sure where this will lead, but I’ll give it a shot. It’s probably best to review how things are going in 6 months’ time. No doubt there’ll be plenty of lessons along the way.

Soft Systems Methodology–My Understanding?

Disclaimer – I’m learning about this subject area on a module this semester, so my understanding, and the example given below, may be incorrect. I’m doing this post to gain feedback from the wider community, my tutor and fellow students.

Underlying Principle

Identify a problem. Then step back from that problem and look at the overall structure. Ask simple questions like: why are we doing this? What are we trying to achieve? How *should* we be doing this?

A Simple Example

My world of trying to make everything complex simple leads me to bring this subject area back to football. I won’t be going in-depth, so an understanding of the sport won’t be required!

Step 1 – Identify the problem (ignore the solution)

The Problem: We’re not winning football matches / trophies. It’s the manager’s fault – sack the manager!

Step 2 – Rich Picture

Draw a rich picture showing the entire solution area – to understand connections between various stakeholders and the wider environment.

Step 3 – Step Away from the Problem:

What Are We Trying to Do: Win trophies. Be successful.

How could we do this: Ideally, we’d have the best goalkeeper, best defenders, best midfielders and best forwards in the league, if not the world. We’d also have an excellent (although not necessarily the best) squad to cover any injuries and suspensions. We’d also have a very good manager to get the best out of our players, and to apply tactics that limit the opposition’s strengths and exploit their weaknesses.

Step 4 – Identify the ideal model/system to achieve this

Step | Description
1 | Best goalkeeper
2 | Best defenders
3 | Best midfielders
4 | Best attackers
5 | Very good squad to cover suspensions and injuries
6 | Good manager to make the best of our strengths and exploit other teams’ weaknesses

Alongside the ideal model (conceptual model) we need to consider constraints. Examples in this scenario could be money, or the expectation of success sooner rather than later.

We would also need to consider how to monitor progress – we could use statistics like the league table compared to this point last season, individual player stats, perhaps fitness levels.

Step 5 – Gap Analysis (Compare the Ideal model to What is Currently in Place)

Step | Description | Currently Have? | Do Well? | Further Discussion
1 | Best goalkeeper | Yes | |
2 | Best defenders | Yes | |
3 | Best midfielders | Maybe | | Still the best, but some getting old.
4 | Best strikers | No | | Discuss further
5 | Best squad | No | | Discuss further
6 | Best manager | Yes | |

Step 6 – Hold a Meeting to Work Through the Discussion Points

Here we can see these will likely revolve around steps 3, 4 & 5 in the model. Discussion around constraints will happen here – an immediate example would be to buy 2 new forwards, but that could cost £100m. Is that the best investment of the money, and is the money even available?

#lak13 (Late) Assignment 1–Logic and Structure Assignment

Ok, this was due by the 4th of March, so I’ve missed the deadline by some distance. But hey, it’s not graded and I quite liked the idea behind it. With a couple of deadlines passed on my actual University course, I may now have more time to dedicate to this MOOC.

1. What do you want to do / understand better / solve?

I’ve got some grand ideas. But I’ve decided to start with something relatively simple to at least get started. In essence I’m looking to identify the ‘At Risk’ student – ultimately so support mechanisms can ‘kick in’ and help reduce the odds of the student dropping out.

2. People involved? Social Implications?

It would be great to have a team of various people from around the University to help develop the idea, but I’m not sure quite how I’d go about that. They’d have a much greater input into any missing elements / systems.

The only other major implication is the identification of the right person / set of people who could start the wheels for the support mechanism. I would imagine this would be a faculty / school admin person in the first instance, before probably being the course / module administrator for identified students.

3. Brainstorming – How could this be solved?

My initial train of thought has been around identifying those students we haven’t ‘seen’ for a period of time (6 weeks by default – this coincides nicely with the HESA return process). By ‘seen’, I mean they haven’t accessed various systems, starting with:

  • Have swiped the attendance system, but not within the last 6 weeks.
  • Not logged into the VLE and / or accessed any teaching resources in the last 6 weeks.
  • Not submitted an assignment in the last 6 weeks.

Other mechanisms around library book usage could be identified.
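The checks above could be sketched in code. Here is a minimal, hypothetical sketch – the 6-week threshold and the three signals come from the list above, but the class and method names are my own invention, and a real version would pull the 'last seen' dates from the actual systems:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Hypothetical sketch: flag a student as 'at risk' when none of the tracked
// systems (attendance swipes, VLE logins, assignment submissions) has seen
// them within the threshold window.
public class AtRiskDetector {

    static final int THRESHOLD_DAYS = 42; // 6 weeks, matching the HESA window

    // Each argument is the date the student was last 'seen' by that system;
    // null means no record at all. At risk = stale in every system.
    public static boolean isAtRisk(LocalDate today, LocalDate lastSwipe,
                                   LocalDate lastVleLogin, LocalDate lastSubmission) {
        return stale(today, lastSwipe)
            && stale(today, lastVleLogin)
            && stale(today, lastSubmission);
    }

    private static boolean stale(LocalDate today, LocalDate lastSeen) {
        return lastSeen == null
            || ChronoUnit.DAYS.between(lastSeen, today) > THRESHOLD_DAYS;
    }
}
```

Library book usage (or any other signal) would just become another `stale` check in the conjunction.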

4. Potential data sources – any issues? How ‘clean’ is the data?

  • Would need the list of ‘current’ students. Held centrally, a number of primary keys including the Student Number.
  • Attendance Monitoring System – Primary ID is the Student Number
  • VLE Monitoring Logs – Primary ID is the Username. But a username-to-SRN resolver is in place.
  • Assignments System – Number of primary keys, including Username and Student Number

#lak13–Getting Data In (and Getting Data Back Out!)

So the previous blog post talked more about the high-level view of how data transfer could be performed within an institution. Now we’ve (potentially) sussed out how systems can link, we can turn our attention to the actual movement of data.

When integrating systems, the top-level action in any meeting is getting the blasted data into the system. After all, I guess that is the goal of any integration project. However, I prefer to take a step backwards and look at how to get the data out. This is 3-fold really:

  1. It’s nearly always easier to read data than to write it. (Prove the simplest thing works.)
  2. Having the ability to read allows for automated testing of writing.
  3. It allows for long-term monitoring and much more functionality to enhance the student experience (but more on this later).

Using Web Services

So in front of every system I’ve integrated, I attempt to get the standard CRUD services in place – CRUD being Create, Read, Update, Delete. (After the request for Read to be done first, people normally start thinking I’m a real idiot for asking for Delete, especially if data should never be deleted. But my thinking is: you’re in there learning now, so put the delete in place whilst testing, put it under source control and then remove the functionality – rather than needing to come back to it in 5 years’ time when everything is forgotten.)

Read Providing Mechanism for Analytics

So whilst Create and Update get the data in, from an analytics point of view it is the Read that bears the most interest. Using the library system as an example, we could probably place a read service in front of it that takes the unique user ID and returns the user’s current status, along with their history.
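As a rough illustration, such a read service might look something like the sketch below. All the class names, fields and the stubbed data are my own assumptions – a real implementation would sit behind a web service and query the library system rather than returning hard-coded values:

```java
import java.util.Arrays;
import java.util.List;

// A hedged sketch of the read side of a library CRUD service: given a
// unique user ID, return the user's current status plus their loan history.
public class LibraryReadService {

    public static class Loan {
        public final String isbn;
        public final boolean returned;
        public Loan(String isbn, boolean returned) {
            this.isbn = isbn;
            this.returned = returned;
        }
    }

    public static class LibraryStatus {
        public final String userId;
        public final int penceOwed;       // outstanding late fines, in pence
        public final List<Loan> history;  // full borrowing history
        public LibraryStatus(String userId, int penceOwed, List<Loan> history) {
            this.userId = userId;
            this.penceOwed = penceOwed;
            this.history = history;
        }
        // Analytics helpers: loan counts feed the comparisons discussed below.
        public int totalLoans() { return history.size(); }
        public boolean hasEverBorrowed() { return !history.isEmpty(); }
    }

    // In a real system this would query the library database or its web
    // service; here a single user is stubbed so the sketch stands alone.
    public static LibraryStatus read(String userId) {
        if ("S1234567".equals(userId)) {
            return new LibraryStatus(userId, 150,
                Arrays.asList(new Loan("978-0131103627", true)));
        }
        return new LibraryStatus(userId, 0, Arrays.<Loan>asList());
    }
}
```

With a service shaped like this, the analytics questions below (books borrowed per module, students who have never used the library) become simple queries over `read` results.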

From an analytics point of view, we could look at a number of students, comparing and contrasting grades against the number of books they’d borrowed from the library (and/or how much they owed in late fines). Did all the students who did very well on a single module borrow the same book? If so, does the library have enough copies of this book for everyone on the course? (Of course, it gets difficult to determine if others bought the book rather than borrowed it, or used an eBook instead.) But there could be patterns here.

Could we also detect the student who hasn’t used the library? Are they at risk of dropping out? Is there a training need about how to locate books and check them out? Of course, like above, they may have bought their own copies or alternatively just used reference-only materials in the library. But then we could use the swipe system to determine if they’ve ever swiped in whilst in the library / LRC.

Achievements System

This read structure could of course fit nicely with a rewards / achievements structure. A number of ‘first’ achievements could be devised by the University, such as the first time you’ve taken out a book – a badge appears on your VLE page (I’m not a 4Square user, but I think they use this concept for different types of check-ins). This could of course further feed the analytics questions above – if you’ve not borrowed a book but everyone on your module / course has, are you at risk of dropping out? Can we intervene? As well as displaying the badges you have been awarded, we could display something like ‘a lot of your classmates have been awarded the First Book Check-Out badge – want to find out how to win yours? Click here’, linking off to videos and resources explaining how to locate books, check them out, and where to go for further support.
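The nudge decision described above could be sketched as follows. Note the 50% cohort threshold is purely my assumption for illustration – in practice the University would tune when a ‘your classmates have this badge’ prompt is worth showing:

```java
// Illustrative sketch: show a 'want to find out how?' prompt when a student
// lacks a badge that a large share of their cohort has already earned.
public class BadgeNudge {

    // Assumed threshold: nudge once more than half the cohort has the badge.
    static final double COHORT_THRESHOLD = 0.5;

    public static boolean shouldNudge(boolean studentHasBadge,
                                      int cohortWithBadge, int cohortSize) {
        if (studentHasBadge || cohortSize == 0) {
            return false; // nothing to nudge about
        }
        return (double) cohortWithBadge / cohortSize > COHORT_THRESHOLD;
    }
}
```

The same check doubles as an analytics signal: students repeatedly nudged but never earning the badge may be exactly the ‘at risk’ group discussed earlier.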

Enhancing Student Experience

Provided the achievements are well thought through, they really could contribute to the student experience. Firstly, some services offered by the University are not always known to students. This achievements page could highlight other awards to research – showing the range of services available, along with links for how to use them (and ultimately win an achievement). (It would also serve as a reference page to help when they forget the procedure for, say, booking a Group Study Room from one year to the next.)

Finally though, the read services in front of these satellite systems would also contribute to the availability of a student portal – letting students know what they need to know, now. When a student logs into the VLE, a read request could be made to the library system. They could be notified of the amount of money they owe (with a link to pay), and when their next book is due back – the sooner it is, the more prominent this message becomes.

And boringly, they aid support of systems

I should put this in small print. But by having read services available, tools can be developed to help diagnose student statuses across various systems and find out where faults have occurred – a diagnostic tool, if you will. If your Helpdesk could see your status across all systems just by entering your ID number / username, how much easier would their life be?

Oh, and the read service would indicate if a system was up or down – helping report to users that the system is unavailable, and allowing support staff to be proactive about faults, rather than reactive.

The next blog post might actually get round to crunching some data and seeing some real stuff happen – but that’ll need to keep until tomorrow.

#lak13 Analytics MOOC

Well, the course has been running for 3 weeks, and I’ve generally kept up, reading quite a few blog articles and then catching up with the actual course content on a Friday when time allows. Admittedly this is my first MOOC and I can see how a lot of people feel overwhelmed and lost (I’m currently feeling this!).

But anyway, I digress. I’m not 100% sure where this blog post will go, but initially I’m thinking this will form part of a series; otherwise it’ll turn into a full-blown essay – and neither the reader nor I want that.

Enterprise Solution to Aid Analytics

A lot of the talk that I have seen in the discussion forums has been about how to get hold of data sources. I’m fortunate in that, working within the IT department of my University and contributing to the integration of various systems, I tend to have access to large datasets wherever I look. And if we can think of a benefit of having some data, then I generally know who to ask.

But I’d like to take a step back from this point and look at the system architecture design that would really help institutions perform data analytics. I’ve been fortunate enough to be a fundamental part of the development of a Data Exchange System at a previous institution, which has stood me in good stead for designing a system architecture that avoids duplicates and resolves the need to ‘cleanse’ data across multiple systems.

A University Data Exchange System

Whilst the real world is never quite this simple, for the purposes of this article (which won’t explore every eventuality) a University could be seen as having 2 primary source systems:

  • HR System (for staff details)
  • Student Record System (for the Student and Curriculum data)

This data would then flow to a number of satellite systems, for example:

  • VLE/MLE (Blackboard, Moodle, etc…)
  • Library System
  • Swipe Card / Attendance Monitoring system.

There are of course plenty of others, but these are possibly the most relevant to all institutions.


Now these systems evidently need to be linked up. Firstly, to create the initial data in the various satellite systems, and then to periodically update it. One (poor) approach could be to create direct links from the source system to each satellite system like below:


I say poor for a number of reasons. A primary reason is that the next time the Student Record System is replaced, every system needs to have its link regenerated from scratch. There is also the issue of duplicate accounts entering the system (if no preventative action is in place in the source systems); they quickly end up in all satellite systems, leading to a loss of man-hours dedicated to cleaning up duplicate data – let alone the impact on student experience.

My preferred solution is to have a central piece of the jigsaw which processes the changes. This central piece can perform a number of functions but primarily:

  • It can attempt to detect duplicates and prevent them from entering the satellite systems – flagging these issues at source.
  • It can, for example, issue each student a unique identifier such as a GUID, which can be used to send to all satellite systems (and this is where LAK comes in – making it easier to query an entity across multiple systems)
  • It can also hold the logic required to determine if a change needs to be sent to a satellite system. For example, a staff member changing their address probably doesn’t need to be sent to the VLE. It would, however, need to go to the Library System.
  • It can also have the benefit of merging staff and student accounts into one – where a member of staff is also a student. That would save having two logins, two ID cards, etc.

It also addresses the concern highlighted above: only one link needs to be re-worked should a system be replaced. There are further benefits too, such as the ability to queue up changes in the event of a system being down for essential maintenance (or broken!), along with potentially storing the business logic for the processing rules centrally.
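To make the central piece a little more concrete, here is a minimal sketch of two of its functions – issuing each person a stable institution-wide GUID, and deciding which satellite systems a given change should flow to. The system names, change types and routing rules are all illustrative, not taken from any real institution:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

// Sketch of the central exchange hub: stable GUIDs plus change routing.
public class DataExchangeHub {

    // Maps a source-system key (e.g. "HR:123") to a GUID issued on first sight.
    private final Map<String, String> guidBySourceId = new HashMap<>();

    // Issue (or look up) the institution-wide GUID for a source-system key,
    // so every satellite system can refer to the same entity consistently.
    public String guidFor(String sourceId) {
        return guidBySourceId.computeIfAbsent(sourceId,
            id -> UUID.randomUUID().toString());
    }

    // Which satellites care about this kind of change? E.g. a staff address
    // change matters to the Library (postal notices) but not the VLE.
    public static Set<String> routesFor(String changeType) {
        switch (changeType) {
            case "STAFF_ADDRESS_CHANGE":
                return new HashSet<>(Arrays.asList("LIBRARY"));
            case "NEW_STUDENT":
                return new HashSet<>(Arrays.asList("VLE", "LIBRARY", "SWIPE"));
            default:
                return Collections.emptySet();
        }
    }
}
```

Duplicate detection would slot in before `guidFor` (matching on name, date of birth, etc., and flagging suspects back at source rather than issuing a second GUID), and a queue per satellite would sit behind `routesFor` to survive downtime.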


So, I’ll leave it there for now. But with the introduction of a unique identifier, centrally stored, we can now start to perform analytics from every system within this architecture. There may of course be links between student performance (or drop-out) and the amount of ‘churn’ through the system, and provided we introduced some kind of logging to this system we’d be able to perform some checks on this and identify patterns.

The next post in the series may come soon (I’m waiting for a long-running process to complete) and will look at a potential solution to integrate the systems to make retrieval and analysis of data easier (and lend a hand to other mechanisms to deliver on student expectations).

A New Year on the Horizon

And I guess a look forward to some new challenges. One thing that has become apparent to me is a lack of focus on developing my IT skills, and hence a lack of blog postings here. This has primarily been due to my Masters in Project Management taking a lot of my spare time, and this year I also trained for (and completed) an Ironman competition.

Fingers crossed, my Masters now has just 2 modules and a dissertation left to run, and I’m hoping to start turning my attention back to learning some IT skills to complement any developments next year. A practice from previous years is to have a quick look at job boards and search on a key term. This year I’ve chosen to look at Java. I really want to keep up with .NET, but in my current environment it’s easier to get the latest tools for Java development. I’ll perhaps review the situation later in 2013 and see what provision is available for keeping up to date in the Microsoft world.

By pinching out the key words from the job descriptions, a number of recurring themes appear, namely:

  • Spring
  • Struts
  • Web Services
  • Automated Development and Test Driven Development
  • Hibernate / JPA
  • GlassFish / Tomcat
  • MVC
  • JMS
  • JMX
  • Oracle / MySQL
  • Multi-Threading Expertise

Some of these I know reasonably well; others I’ve only heard of in passing. I’m going to attempt to learn all of the above from scratch (to remove any bad habits), on a month-by-month basis. This won’t give me as good a grounding as possible, but hopefully if I develop the knowledge, the application will come. The things I know better than others should give me a month of slight slack, whereas the other months might be a bit more intense.

The month of learning will constitute some kind of blog post – either an overarching summary of the technology, or a series of posts exploring the journey I go along.

Is there a key technology (part of Java) that I’m really missing? The only thing on my mind is Android development, which should probably get thrown into the mix at some point.