Thursday, 9 July 2015

How to rate internal product quality during systems development

There are many ways to define and rate product quality. This product quality rating is based on typical product quality assessments for software products and in particular online software.
This post was written for dev teams who do not have a dedicated QA manager or the like.


So why should you do a product quality rating? It gives an indication of how well the product is engineered from a more holistic perspective. Many will judge the quality of a product based on what end users say about it. While end user feedback is a very important indicator of quality or value, the product may still have many issues under the covers that are just waiting to blow up and become a big problem for the user or other parts of the business.

This product quality review helps to get a complete overview of the product and its strengths and weaknesses. It helps to plan ahead and reduce risk.


The rating can be conducted by the product development team, perhaps in collaboration with others such as internal users and other stakeholders. Since several of these categories are only visible to the dev team, the rating cannot be done by customers or other external stakeholders.

A score from 0 to 10 can be given, where 10 is the score that the best product on the market would receive in that category.


Extensibility

Is the product considered to be extensible? Can it be enhanced with new capabilities or functionality without hampering its existing functions?


Installability / readiness

Is the product ready for immediate use? Does it require a lot of configuration, including technical setup? Is it difficult to get started with the product? Are there long start-up times?


Features / completeness

Features are the “bells and whistles” of products. How complete is the product? Compared to competitors, how would you rate the feature set in terms of user value? Are you missing important features? Are there many half-finished features?


Performance

Refers to throughput and latency. Can the system handle the user load? Is the system perceived as slow at times?


Accessibility

Is the system accessible for users with disabilities?


Reliability

Does the product perform flawlessly under stated conditions for a long period of time?

Correctness / Conformance

Is it free from faults/bugs? Is it aligned with specifications? Does it conform to standards?


Efficiency

Does the product perform effectively without wasting resources?


Maintainability

Is the code easy to maintain?


Understandability

Is it easy to understand and grasp what the product is for? Does it seem too complex?


Usability

Is it easy to use and learn?


Supportability

Is it well documented? Is it easy to support?


Scalability

Does the product scale well, and can it handle controlled increases in load? Is it efficient when new resources are added?


Robustness

Is the product able to withstand harsh conditions and rough handling? Can it endure erratic variations in stress with minimal damage or alteration to its functionality? Does it handle bad or corrupt data?


Security

Is there any likelihood of security breaches due to poor coding practices, architecture etc.? Have you done penetration tests or security audits? Do you have logging, exception handling etc.?


Style

Is the product stylish and good looking, giving a good first impression?
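To make the overall rating concrete, here is a minimal Python sketch of the 0-to-10 scoring scheme described above. The category names, weights and scores are made-up examples for illustration, not part of any standard:

```python
def quality_rating(scores, weights=None):
    """Aggregate 0-10 category scores into one overall rating.

    scores:  dict of category -> score (0-10)
    weights: optional dict of category -> relative importance
    """
    if weights is None:
        weights = {category: 1.0 for category in scores}
    total = sum(weights[category] * score for category, score in scores.items())
    return total / sum(weights[category] for category in scores)

# Hypothetical scores from a team's rating session
scores = {"Extensibility": 6, "Performance": 8, "Security": 4, "Usability": 7}
print(round(quality_rating(scores), 2))  # prints 6.25 (unweighted mean)
```

Weighting lets the team emphasize the categories that matter most for their product, e.g. giving Security a higher weight for an online product.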


Sunday, 8 March 2015

Why testing tasks should be part of the task board when you don't have experienced testers

Many Scrum teams feel there's something not quite right about testing and their use of a task board. In this blog post I'll go in-depth on whether testing-related tasks belong on the task board. I have found that this question comes up often, especially in beginner Scrum teams.

Example Scrum team scenario

To keep this blog post relatively short I will focus on the following scenario:
  1. Automated testing code coverage is medium to low so there is an extra need for manual testing.
  2. The product is relatively complicated with lots of intricate scenarios/settings to test.
  3. The product is a SaaS with thousands of users.
  4. You do not have dedicated testers but you have plenty of access to people outside the team that can help to carry out testing. 
  5. Testers are not experienced in testing. They know the product to be tested but are not super users. They prefer not to use developer oriented testing tools because they are not highly technical persons. 
  6. Testers outside the team come and go and they are quite busy so they need a simple way to carry out the tests without having to log in to a complicated testing suite. Because of the turnover you do not want to provide a lot of training for testers. 
  7. You do not have a test manager, QA lead etc.
  8. Developers are also inexperienced testers and have limited knowledge about QA.
  9. Developers are writing the test scripts because the testers are not qualified to write high quality test cases. 
This scenario may describe a team with a medium Agile maturity level so other more fundamental actions could also be needed such as QA coaching, adding QA related metrics etc. but we will focus on the question of testing tasks on the board.

What is testing anyway?

You should never roll out a feature/User Story/Product Backlog Item (PBI) without testing. Someone has to test it. This might take the form of unit testing, acceptance testing, security/penetration testing, exploratory testing, regression testing, performance testing, load testing, code review, integration testing or web tests (e.g. Selenium), in addition to local testing during development. Maybe you want to add UX user testing, and maybe you even need to do some regulatory testing.

The purpose of the task board

Before we discuss whether testing tasks should be on the board, we should first recap what a task board is for. The task board, as the name implies, is for tasks. Its purpose is to keep track of tasks and create visibility for the team and other stakeholders. This helps the team make sure the right things are completed at the right time [4]. The three pillars of the empirical process that Scrum and Agile are based upon are visibility (transparency), inspection and adaptation. Tasks on the task board enable visibility and transparency.

Now, some teams try to cram more information onto the board by adding lanes for server environments or lanes such as "Ready for testing", "In testing", "Ready for staging" etc.
The starter task board with the 3 lanes: "To do", "In progress/WIP/Doing" and "Done" is still the recommended basic setup [1],[2],[3],[4],[5],[6],[9]. Remember that the task board is primarily for tracking tasks.

A task can be in progress, done etc. To say that a task is in QA/staging or in testing does not make sense: a user story/feature can be in testing, but a task is a piece of work to be done. You don't test a testing task. This inconsistency lies at the root of disagreements about how to visualize testing on the board.
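The distinction can be sketched in a few lines of Python. In this illustrative model (the class and task names are hypothetical), testing work is simply another task under a user story, moving through the same three lanes as dev tasks rather than living in a lane of its own:

```python
LANES = ("To do", "In progress", "Done")

class Task:
    """A piece of work on the board; it moves through lanes but is never 'in testing'."""
    def __init__(self, name):
        self.name = name
        self.lane = "To do"

    def move(self, lane):
        if lane not in LANES:
            raise ValueError(f"unknown lane: {lane}")
        self.lane = lane

# A user story holds dev tasks and testing tasks side by side
login_story = [Task("Implement login form"),
               Task("Write test script"),
               Task("Coordinate testers")]

login_story[0].move("Done")
login_story[1].move("In progress")
for task in login_story:
    print(f"{task.name}: {task.lane}")
```

A story is "in testing" when its remaining open tasks are testing tasks; the tasks themselves only ever sit in one of the three lanes.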

Why not have explicit testing tasks on the board

Typical arguments for not having testing tasks for each story:
  • It's extra work to add test-related tasks to each user story. Each user story will often have the same duplicated testing tasks. Seems like unnecessary extra work to add them for each story. 
  • Testing is an integral part of development so we don't need tasks for it. It is part of the programming work.  
  • More tasks means there will be more overhead needed to pass all the tasks through the lanes of the task board.
  • The board gets messy when we add tasks that are to be partially carried out by others not in the core team. We like to have full control of what all the tasks are and not have things that relate to other persons in there.
  • Testing is something that is done after we have deployed to the QA servers. Seems out of place and not logical to have them on the board. Testing is sort of another phase of the process and it seems illogical to have it alongside programming tasks. 
Some of these arguments include faulty assumptions.
  • "The board gets messy when we add tasks that are to be carried out by others not in the core team". This claim assumes that the team doesn't need to do anything related to testing. Someone has to write the test cases, make sure the test environment is testable, coordinate testing, make sure testing gets done, provide support for testers etc. There are also tests that only developers can do, like performance testing.
  • "Testing is sort of another phase". According to Agile practices this is just wrong [7],[8],[10],[11]. This thinking leads some teams to add a testing lane to their board which again strengthens a view that testing is a phase in the sprint and so test-related tasks are not needed on the board.
    Example task board with lane for testing
    This can also have the effect that epics are not broken down into small enough user stories and tasks. Since it does not always make sense to have a task "in testing", you may end up with tasks that are actually user stories. When a task on the board is actually a user story, it does make sense to have the "task" in testing.

Testing is not a phase, but a way of life
   - Elisabeth Hendrickson

Ideally testing should be a collaborative effort between developers and testers going on in parallel [8],[10],[13],[14]. In our scenario this is difficult since there are no skilled testers so developers have to step up and write test cases, provide testing support and coordinate testing [11].

In Scrum and Agile development we strive to complete user stories one by one. Testing is of course part of that. Stories are completed one by one to reduce risk and to be able to deliver value in increments.

Alternatives to testing tasks on the task board

Based on our scenario I have identified a few alternatives to having testing tasks.
Alternative 1: Developers write test specifications at the start of the sprint based on the requirements specifications, dev tasks, DoD and acceptance criteria.
  • Implications for writing the test script: You don't yet have a working feature, so you will not be able to try it out and come up with all the test cases at the start. There will be changes underway, so the test script will become outdated. To avoid rewriting the test cases, or having to come back to them later, the developer will delay writing them and may eventually forget to do it.
  • Implications for carrying out the tests: Poor test coverage and outdated test cases that cannot be carried out. A rush at the end of the sprint to get the test script together.

Alternative 2: Developers update the test specification continuously as they go along.
  • Implications for writing the test script: It will be forgotten, because the developer is in coding mode and the team does not yet have a strong quality focus.
  • Implications for carrying out the tests: The developer may forget testing tasks such as performance tests, integration tests, code reviews, security/penetration tests etc. A rush at the end of the sprint to get the test script together.

Alternative 3: Developers write the tests when all dev tasks of a story are done.
  • Implications for writing the test script: Who is responsible? Several developers have worked on the user story, and it is not clear who should write the test cases, so it doesn't get done.
  • Implications for carrying out the tests: A rush at the end of the sprint to get the test script together.

Alternative 4: Developers write test cases for each dev task.
  • Implications for writing the test script: Many tasks are not testable [12]. Often there are no relevant manual test cases to write, and different tasks need different types of testing, so the developer would constantly need to think about testing. Not all developers care that much about quality. The test script also needs to be organized into a readable, non-overlapping set of test cases; you may not end up with a complete and easy-to-use test script just by piecing together tests from the individual dev tasks. There is no reminder or test task to check off as completed, so the developer may forget about it or defer writing the test cases.
  • Implications for carrying out the tests: Missing test cases, tests that are hard to understand and overlapping tests. A rush at the end of the sprint to get the test script together. A testing lane does not mean that tests will automatically be carried out: because some tasks are testable and others are not, you never get into a systematic process of writing test cases, and eventually things are forgotten or delayed. There might also be no clear mapping between a dev task and its test cases, so others will not know whether the test cases have been written.

Alternative 5: Developers write the tests at day x into the sprint.
  • Implications for writing the test script: The developer will always want to complete as many features as possible, to look good or because he doesn't like writing test specs. Quality is jeopardized because the developer rushes to complete the test specification in order to finish testing before the sprint ends. This is not Agile.
  • Implications for carrying out the tests: Testing is postponed and gets chunked up towards the end of the sprint. You may not be able to complete any stories at all because testing was halted for some reason. The feedback cycle lags behind, so critical bugs found during testing cannot be fixed before the sprint ends, or the team has to work overtime, again.

Alternative 6: Developers test by trying to break each other's code and find bugs.
  • Implications: In this case there is less need to write a test script. Although this may sound fun to developers, this alternative ultimately depends on the QA maturity of the developers and the process. If developers are poor at testing they will find fewer bugs. Without specific tasks with a time slot, developers may do a sloppy testing job, especially if there are no guidelines from managers on expected quality levels or other incentives for finding bugs.

Some readers might have noticed how central the test script has become in our scenario. Maybe you would like to suggest dropping test scripts altogether and just end the discussion right here: developers should be end-to-end developers with the skills needed to test everything along the way until the story is done [14]. In our scenario, with a complicated product, this is simply not an option, because a test script is absolutely needed for QA to make sure edge cases are both identified and tested. Without a test script you get into testing procrastination: you don't know where to even start testing or what has been tested. Systematic testing is needed in our scenario.

Typical problems in immature Scrum teams when not having testing tasks for each story

Developers usually don't like to do or prepare testing other than technical unit testing and performance tests, so you may end up with poor tests and test scripts that are ready too late. Developers love to code and build things, and often find test-related work boring.

Typically the team will end up postponing testing as a bulk job to be done towards the end of the sprint [7]. As you don't have explicit tasks for writing test specs and the task of writing them is usually seen as boring work for developers they will tend to focus on development and postpone writing test scripts as long as they can. This means that they will usually start to work on another story before the first one has gone through testing. You are now in a situation where developers try to have all stories ready to be tested some time before the sprint ends so testers can take over and begin their work. This is what we would call mini-waterfall (not Agile).

Another problem is responsibility. Who is responsible for writing the manual test scripts, the acceptance tests etc.? Who is responsible for coordinating or doing the actual testing? If there's no assigned person, you can bet it will be deferred or not done at all when the QA maturity level among developers is low.

Another side-effect of not having testing tasks on the board is that some person will, voluntarily or not, take on the role of writing the test specifications because that person knows it is the only way it will get done. This person might be good at it and might even like doing it. The problem is that it creates a dependency: what happens if this person goes on vacation or is busy with other things for a period of time? Testing grinds to a halt, or you see a severe drop in test coverage.

Why have testing tasks as part of the task board

Example of Scrum board with testing tasks

Better estimates

Time will go into writing test specs, so estimates for it should be registered somewhere. Time will be spent setting up the test environment, doing initial integration tests, regression tests etc. Time will go into fixing the bugs found during testing.

By using test-related tasks you can, during sprint planning, have a more conscious discussion about what kind of testing is needed, how much time it will take and how much extra time should be added for fixing bugs found during testing. With explicit testing tasks you will be able to plan better and come up with better estimates.

Increased Quality

Some tests can only be done by developers, e.g. performance testing or testing that requires specific tools, such as testing infrastructure-related functionality. By defining these tests as explicit tasks you don't forget to do them, and you have accountability: everyone sees who is responsible and whether it has been done or not.

If you add test-related tasks during sprint planning you will have a slot to think about testing. You will be able to kick-off some thoughts about what needs to be tested, how thoroughly etc.

If there's no defined task for writing the test script, you might be pressured to hurry the job of writing the test cases because the burn-down chart will look bad if you spend too much time on it. When there are no testing-related tasks on the board there are no estimates to burn. The developer might feel he looks less productive than other developers who are working on defined dev tasks, so he may want to just get the test cases done as quickly as possible. Poorly thought-through test cases = poor testing.

If you have testing-related tasks on the board you will be reminded if there are some other types of testing missing.

Who is responsible for updating the test scripts when a bug is found and there was no test to cover it? Without a person in charge of overseeing the testing of a user story this may very well be forgotten.

Reporting and status visibility

If you don't have a task for testing the user story you can't display the status of the testing work on the board. There is no way to see if the testing has started and how many hours remain etc. Testing-related work as any other task can have impediments. If you have test-related work on the board it's easier to have discussions about status and impediments.


Coordination and testing support

In our scenario someone in the team has to coordinate or connect developers with testers. Testers will have questions about how to carry out some test task: is it a bug or a known issue? A developer has to be available to answer questions such as why a test task is not testable; the correct conditions, for example the right data, might not be present to carry out the test. Without an explicit task for the responsibility of coordinating and supporting the testers, testing might very well stop without anyone knowing. You might have unclear communication lines and extra noise in the team because people don't know who is responsible for overseeing the testing.

Avoid duplicated work: if the developers test each other's work, it is still useful to have testing tasks so you know which developer is testing whose work.

Continuous improvement

Reducing waste is central to the Agile movement. If everything is visible you get a better overview of everything going on and how things depend on other things. With testing tasks you can more easily improve the entire process.

Without tasks and responsibilities you don't have accountability. If the test script is bad and there are bugs because of poor tests, we know who wrote the test spec, and we can improve the process by talking to the people responsible.


Monday, 2 February 2015

Fluid, Responsive and Adaptive vs Fixed layout

In this blog post I present a way to decide between Fluid, Responsive and Adaptive layouts vs a Fixed layout. I have chosen to group Fluid, Responsive and Adaptive together because they are similar in that they adjust to the screen size, while a fixed layout does not. Fluid, Responsive and Adaptive layouts are all different, but for brevity I will call them Responsive below.

Most old websites have a fixed layout, so many developers who are giving a legacy website a facelift have to decide between Responsive and Fixed layout.

The rise of Bootstrap, an extremely easy-to-use responsive layout framework, has certainly contributed to many developers dropping their fixed layouts.

I have observed many web developers jump on the bandwagon and adopt Bootstrap because it is just so darn easy. As an engineer I feel that many blindly go for Bootstrap or a similar framework without much thought.
So below I give a more nuanced look at what to consider before redesigning an existing web application. The factors below have been picked especially for rich, interactive, social web applications rather than simple informational public homepages. I also assume you don't have unlimited resources, so you are, like the rest of us, constrained to limited development time.

Factors to consider

A - Esthetics.
B - Speed of user interaction.
C - Accessibility/ease of use.
D - Development time.
E - Screen size of users.
F - User browser maximized or not.
G - Mobile support.

All projects are different so you have to decide which of these factors are more important to you.

A: Do you want your application to look perfect at less cost?
It is easier to make a site look good with a fixed layout, because layouts often break with fluid layout: images that users add to content areas don’t wrap as they expect on other screens, headings added by users don’t wrap as intended on other computers, background images can look weird etc.
If you have few resources and you want the app to look really good (on PC), you might want to steer towards a fixed layout.

B: Do you have super users?
How are users using the app? Are they mostly highly trained users? Is it a line-of-business app where the business will measure the time users spend doing their work in the app? Do users have to be able to complete their tasks really quickly?
By using more screen real estate, users can perform actions and find information faster without having to scroll, page or navigate.
If you have requirements for doing lots of actions fast, you may want to steer towards responsive layout, since you are able to use more of the screen width.

C: Do you have to be particularly user friendly?
Using more screen real estate will often lead to more “things” being visible on the screen. More things = higher cognitive load. For super users this is fine, but for users who do not use the solution that often, and for senior users, this will reduce the user experience because they are simply overwhelmed by the visible content and options.
So who are you targeting: elderly people, beginners, non-computer-literate users?
Responsive design does not have to increase the amount of stuff on the screen, but it usually does. A fixed layout is easier to learn because it always looks the same. If you by any chance did not maximize the browser window as you usually do, you might get a different view than you are used to; for computer-illiterate and infrequent users this can pose an issue. With a mobile-first design approach, a max width and an adaptive layout, the UI can still look clean, but this is not compatible with B above.
If you have requirements for being very user friendly, you may want to steer towards a fixed layout.

D: How much time do you have?
My initial premise for this blog post was that you are deciding whether to change from fixed to responsive layout, so naturally you are in for more work if you decide to change. Responsive layout also requires more development and testing to check that it looks good on different screen sizes.
If you have little time, you may want to steer towards a fixed layout.

E: What's the typical screen size of your users?
If a significant percentage of users have a screen smaller than the fixed layout width, or they are using phones and tablets, then you should definitely switch to responsive. If users have a screen smaller than your fixed layout, it will cause horizontal scroll bars or zooming that makes it very cumbersome to hit buttons etc.

F: Do users have their browser maximized?
Users may keep multiple windows side by side, for example when they need to copy content from another window into the app.
In this case a responsive layout may work better.

G: Do you have to support mobile?
This is usually the single biggest concern that will decide it all. Is it a strategic goal that the app works well on mobile? Do you have plans to develop a separate mobile web app or even a native mobile app? For many it is just too resource-demanding to develop a separate mobile/native app, so in that case you would go with responsive layout.

What are others using?

(Note that it is usually not a black-or-white choice. Most sites use a mix of techniques; for example, Facebook is fixed but uses some adaptive techniques, such as the chat and contacts feature on the right side.)

Fluid/Adaptive/Responsive layout:

  • SharePoint
  • Jira (Confluence)

Fixed layout:

  • LinkedIn
  • Facebook
  • Yammer
  • Podio

Side note

Why does Wikipedia have fluid layout? Because all you do there is read articles. Ease of reading is extremely important there. With fluid layout the user has full control of the readability (line length etc) by resizing the browser window and adjusting browser font size.
