Archive for February 2008
When it comes to software testing, a number of terms are used. You hear the simple term testing; some talk about QA, and others call for QE.
In common language, a test refers to the act of trying something out, checking on knowledge or behavior, and other forms of comparing expected results or behavior to what can be observed. In the case of software testing, you expect certain results or behavior from a software system. So you enter some values, perform a number of activities in the user interface, and then compare what the system shows or provides you with what you expected. If it matches, the software passes the test. If it doesn’t match, the software is deemed faulty and you write a report about the problem. Those reports are usually called defect or bug reports.
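At its core, every test is this expected-versus-observed comparison. A minimal sketch in Java (the Calculator class and its values are hypothetical, made up just for illustration):

```java
// Hypothetical code under test; stands in for any piece of software.
class Calculator {
    static int add(int a, int b) {
        return a + b;
    }
}

class CalculatorTest {
    public static void main(String[] args) {
        int expected = 5;                     // what we expect the software to produce
        int observed = Calculator.add(2, 3);  // what the software actually produces
        if (expected == observed) {
            System.out.println("PASS");
        } else {
            System.out.println("FAIL: expected " + expected + " but got " + observed);
        }
    }
}
```

If expected and observed match, the test passes; if not, the mismatch becomes the content of a defect report.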
I looked up the terms QA and QE on Wikipedia and for QA I found:
Quality assurance – Wikipedia.org:
Quality assurance, or QA for short, is the activity of providing evidence needed to establish quality in work, and that activities that require good quality are being performed effectively. All those planned or systematic actions necessary to provide enough confidence that a product or service will satisfy the given requirements for quality.
For Quality Engineering (QE) it has been a bit hard to find a good definition. Wikipedia thinks it’s just a synonym for Quality Assurance (QA) and if you use Google, you will find links to people calling themselves Quality Engineer as well as companies offering courses in Quality Engineering.
So I’d like to look closer at the terms and try to define them myself for the purpose of this blog post.
Quality Assurance (QA) and Quality Engineering (QE) carry two different notions. Both terms refer to quality: in the case of QA people want to assure quality, and in the case of QE quality is being engineered. That prompts the question of how we can assure or engineer quality in software. But before we can answer that question, we need to understand what quality itself is. Here are a few quotes from an article about quality on Wikipedia:
Quality in everyday life and business, engineering and manufacturing has a pragmatic interpretation as the non-inferiority, superiority or usefulness of something. This is the most common interpretation of the term.
One key distinction to make is there are two common applications of the term Quality as form of activity or function within a business. One is Quality Assurance which is the “prevention of defects”, such as the deployment of a Quality Management System and preventative activities like FMEA. The other is Quality Control which is the “detection of defects”, most commonly associated with testing which takes place within a Quality Management System typically referred to as Verification and Validation.
So there we have it. Quality means that something is fit for a specific purpose and is not bad or faulty. And we can have people ensuring that a product doesn’t get shipped with defects (Quality Assurance) while others control the level of quality by detecting and counting defects in shipped products.
Those definitions apply very well to manufactured goods. In a factory you can take every nth product from the assembly line and check it for defects. Then you create some statistics about the defects you find, and in the end you may use those numbers to prompt some action if the quality drops below a level you have defined as unacceptable. Then some people in the quality assurance department are asked to come up with solutions to prevent these manufacturing defects. They may talk to product engineers and change something in the product to make production easier in some way. My knowledge about manufacturing processes is close to non-existent, but apparently this works well in that industry. Where does that leave us with software?
A lot of software development practices are inspired by other industries. There are a lot of people who perceive the act of creating software as a kind of engineering and call programmers software engineers, probably to distinguish what they do from what a programmer is perceived to do. An engineer takes on more complex tasks and creates something new or enhances a machine or building, while a programmer writes a sequence of instructions to tell a machine what to do in which order. That also implies that a programmer usually does not come up with something on her own but instead gets told what the machine should do.
Now, if you have software engineers who create software, then you may want quality engineers to work on the quality of the software before it gets used by people outside of your engineering organization. One team of quality engineers may control the quality of the product before it gets shipped. Another team of quality engineers may assure that fewer defects make it into the product being shipped. Just as in manufacturing – isn’t it?
Ways of controlling and assuring software quality
Black box testing and white box testing describe two distinct approaches to controlling software quality. A black box doesn’t reveal many details to the observer. You can tell its size, maybe its weight, and then there is the fact that the box is black. That’s about it. Software without access to the source code is quite similar to such a mysterious black box. A white box, in contrast, is one that reveals its inner workings to the observer.
When we do black box testing we explore the functionality of the software and verify its behavior and results by comparing them to a description of what should happen. Basically, the quality engineer executes the software and does what a regular user would do. Whatever goes wrong, he reports as a defect. When black box testing a piece of software, the only way of performing tests is by means of the user interface the software provides. If the software uses a database, the quality engineer can compare before and after values in the DB. But most of the time, the only way of knowing whether the software does what was expected is by looking at the user interface again. Does the report show the correct values? Is the selected element highlighted? Does it print, and does the output look as expected?
That requires a lot of manual work and is per se error-prone. A smart quality engineer wants to automate tests to create a number of baseline tests. When he gets a new version of the software he’s testing, he simply runs his test suite and can concentrate on what’s new instead of manually testing the same things over and over again. There are a number of tools – commercial and open-source – available to programmatically drive the UI of Windows, Java and web applications. Quite famous is the open-source tool Selenium for web applications. On OpenQA, a place for open-source QA tools, you can also find tools to automate Java Swing or Windows GUI testing. For more tools, including commercial ones, look here.
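The value of such a baseline suite is that the same scripted steps run unchanged against every new build. A rough sketch of the idea, using a hypothetical in-memory form as a stand-in for the real UI (a real suite would drive the actual interface with a tool such as Selenium; all names here are made up):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a UI under test; in practice a driver
// tool like Selenium would manipulate the real interface instead.
class LoginForm {
    private final Map<String, String> fields = new HashMap<>();
    void type(String field, String value) { fields.put(field, value); }
    String submit() {
        // Simulates the application's response to the form submission.
        return "admin".equals(fields.get("user")) ? "Welcome" : "Access denied";
    }
}

class BaselineSuite {
    // One baseline test: same steps, same expected result, every build.
    static boolean loginShowsWelcome() {
        LoginForm form = new LoginForm();
        form.type("user", "admin");
        form.type("password", "secret");
        return "Welcome".equals(form.submit());
    }

    public static void main(String[] args) {
        System.out.println(loginShowsWelcome() ? "PASS" : "FAIL");
    }
}
```

When a new version of the software arrives, rerunning this suite immediately tells the quality engineer whether previously working behavior has regressed.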
My personal opinion about black box testing is that it appears to be a good way to control the quality of a finished software product and provides information to decide whether it is safe to ship. But it is extremely important to keep in mind that this kind of testing is based on the user interface, and you still might miss hundreds of defects just because you don’t execute the software in the way that makes them show up. Astonishingly, a lot of organizations view this kind of testing as the most important form of testing. Probably because most people perceive software as the user interface: what they can see and touch must be the thing they call software.
In the beginning of this post I was talking about quality assurance and quality control. So far we have identified a tool to control the quality of the user interface – I think it’s safe to put it that way. What about controlling or even assuring the quality of the software’s internal functions? To achieve that we need to look inside.
White box testing requires us to work with the source code, which allows us to perform tests on internal mechanisms instead of just the user interface. The keywords that come to mind are TDD (Test-Driven Development), unit tests and tools like those from the JUnit family. This is a complex topic, as it is not only about controlling quality by measuring things. To assure quality you have to start with the process from the very beginning, which is when you analyze the problem you want to solve with software. It touches the way you develop (your development process) and is essentially no longer solely a matter of the QA/QE team. I think it makes sense to cover white box testing in a second post.
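To make the contrast concrete, here is what a white box test might look like: it calls an internal method directly, with no user interface involved at all. The ShoppingCart class is a hypothetical example, and the assertion is written by hand here for brevity; in practice such a test would live in a JUnit test class.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical internal class; a white box test exercises it directly,
// bypassing any user interface.
class ShoppingCart {
    private final List<Integer> pricesInCents = new ArrayList<>();
    void addItem(int priceInCents) { pricesInCents.add(priceInCents); }
    int totalInCents() {
        int total = 0;
        for (int p : pricesInCents) total += p;
        return total;
    }
}

class ShoppingCartTest {
    public static void main(String[] args) {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem(250);
        cart.addItem(175);
        // In JUnit this would be assertEquals(425, cart.totalInCents());
        if (cart.totalInCents() != 425) {
            throw new AssertionError("wrong total: " + cart.totalInCents());
        }
        System.out.println("PASS");
    }
}
```

Because the test sits next to the code, it can cover edge cases (an empty cart, a single item) that would be tedious or impossible to reach through the UI alone.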
When I posted Thoughts on the ideal project team a few days ago I received a comment to which I’d like to respond in more detail here. I really appreciate that kind of comment. Whoever xcosyns is: Thank you!
Commenter xcosyns in Thoughts on the ideal project team at my blog:
Often you will need someone handling the infrastructure, uat, itt should match the real production environment. And from experience this is not always the case, and not that easy to achieve when multiple applications have to interact, one way or the other, together? And we are not yet talking about backups, db replication, san drives, authentification systems, load balancing…
Development infrastructure should be as simple as makes sense, and it’s the developers who should have control over it. In our case, for example, I mostly do that, and once you have some experience it’s not that hard. I encourage my fellow team members not to limit themselves to the “code monkey” role, but to learn how to administer their own systems and the team’s systems. My belief is that every software developer needs to be able to do sysadmin duties – at the very least for those systems he works with on a daily basis.
By that I don’t mean to say that SAN drives and load balancing in a production environment should be maintained by the developers of a system. That’s the job of an operations group. But developers who can do this for their little development environment have a much better understanding of these things and know what to do and what not to do when creating an application for such an environment.
Also, developers can really benefit of a DB expert, often java developers really sucks when it comes to db performance, and most developers are rarely aware about what their db can really do. A good DB expert can speed up legacy applications with no or minor changes.
You are right. Often Java developers really do suck when it comes to DB performance. Why is that? Probably because their education is too shallow. It is one thing to merely understand the language and a few common libraries/frameworks, and another to understand software development in a holistic way. The latter is the result of experience and natural curiosity. In my own case, for example, I have been forced to learn Unix system administration and networking (up to managing a larger network for an ISP I co-founded; think of the whole zoo of routers, access concentrators and the various protocols such as BGP, OSPF, etc.), plus several high- and low-level programming languages including, literally, zillions of libraries and frameworks for those. It took a while, but I consider this the difference between an apprentice and a master. The ideal project team practicing agile development should be comprised of masters. There is room for apprentices, though: you can easily assign an apprentice to each master and delegate some tasks to them.
Some documentation should be written, the business analysts can do the functional part, developers or the architect can do the technical part. But when the project/projects/teams scales up it becomes a time-consuming task and it can partially be delegated.
Agreed. If you have a need for more extensive documentation, then you certainly can add a tech writer. One question remains, though. If you make the tech writer part of the team and follow the very good practice of “done done” for your user stories, a story would only be “done done” when the documentation related to it is completed as well. In this case you add another constraint besides the programmers. Whether that makes sense probably depends on the type of application and the target market.
Also someone needs to follow up the development process, a project manager or a team leader. Someone that can prioritize the tasks in function of the business needs and business gains, someone that has a global view of the project and the company. And managing the budget, logistics, recruiting, etc…
For that there is the role of the Product Owner. His job is to prioritize stories based on business value. The customer is the only one who can really know the business value of stories, so he should be the only one providing that indicator. Budgeting, logistics and recruiting are unrelated to the work the team is supposed to perform. You form the team before you get started – obviously ;-) – and then that’s your team. I would not move people in and out of a team, because that negatively affects the accumulated domain knowledge and slows the team down.
The original post:
As I’m preparing some material for an upcoming event, I thought I could as well share this little piece here on the blog. Basically it is about the ideal software development team, the roles people play and the skills each team member needs to possess to be able to make meaningful contributions.
The customer is not that company or that guy who pays. The customer actually drives the project and is part of the team. It is important to engage the customer in a constant dialog or at the very least to give him a means to collaborate and respond to questions easily.
Programmers write the code and are supported by Testing Programmers. The programmers are the constraint on the team, and their number needs to grow to handle a larger project. I share the common opinion that when you add two programmers, you should add one testing programmer as well. It seems to be a good idea to add programmers in pairs, because that allows them to pair on difficult tasks.
Skills required: Expert knowledge of the chosen programming language, tools and other technologies used for the project.
The Testing Programmer is just an ordinary programmer, but in that role he looks after test coverage, performs manual and automated tests of the integrated software system, and checks for completeness of the implemented solution based on user stories, in cooperation with the Business Analyst acting as Product Owner.
Skills required: Expert knowledge of the chosen programming language, tools and other technologies used for the project, plus strong testing skills and expert knowledge of test automation.
Almost every system has some kind of user interface. The Information Architect designs the user interface, creates wireframe models for communication purposes, works with the Business Analyst and the Customer, and helps the programmers when they implement the user interface. He also creates graphical elements for the user interface.
Skills required: A feeling for good user interface design, graphic design skills, and communication abilities.
The Business Analyst acts as Scrum Product Owner for the team and maintains a constant dialog with the customer. His job is to understand the customer’s goals and expectations for the project. He collaborates with the customer to create user stories.
Skills required: Strong analytical skills and the ability to gain expert knowledge of the customer’s business in a short time. Very good oral and written communication skills. Expert in writing user stories.
This is a re-post of something I wrote earlier on the old blog of my company. I just read a discussion about the topic Outsourcing for start-ups on LinkedIn and although the text is written to attract and convince prospective clients, I think it may spark some interesting comments.
What seems to be the prevailing point of view seen in the LinkedIn discussion is that, if the start-up is a technology – read software – company, you should not outsource your R&D. That’s kind of obvious. ;-)
On the other hand, some seem to think about outsourcing in terms of offshoring to some cheap-labor country and fear the loss of IP rights. Isn’t there a difference between offshoring and outsourcing? You can outsource to a team located just down the street, and that may make a lot of sense.
I really like the comment Peter Nguyen gives:
I teach strategic business design to entrepreneurs, and one thing I stress is to be clear on what your business model is. Unless your startup does software development, it’s a good idea to outsource. However, keep in mind that IT is such a central part of any business organization that, as I mentioned before, it’s important to outsource to the best IT companies, regardless of where on the planet they come from.
Here is the re-post about New Product Development:
Traditional Software Development does not work
Software development is not an easy task. In over 20 years many projects have failed, and a lot of money and opportunities have been lost due to wrong expectations and bad project management. Contrary to common belief, development of a software system is not an exact science. Although the term “software engineering” is commonly used, it’s more a union of art and engineering. Good software engineers have developed a feeling for good systems design through long-term experience. What we do is more comparable to the art of playing a violin than to the work of an engineer who can leverage norms, standards and mathematical models. Such clear rules do not exist in software development to the same extent. Furthermore, teams frequently struggle to deal with a great number of unknown factors, such as unclear specifications, changing requirements, and simply unforeseeable requirements, while being expected to deliver functionality on time, on budget, and with high quality.
Agile Development is a dialog with the client
In recent years a new idea has been adopted by more and more forward-thinking software developers and companies. Instead of following the failed “Waterfall” development model, which made us believe that we can design a huge system upfront and then have programmers write the code according to the specification, “Agile” teaches that it’s better to develop in short iterations and embrace change with each iteration completed. Instead of big upfront design, we design enough to produce working code for a limited set of functionality and have the user see and try it to give us more guidance. Instead of working out of sight, we enter into a process of continuous conversation with our users and learn more and more about their business while we build software for them.
Calculating development costs
Clients want and need to know when the new software will be ready and how much it is going to cost. Agile development allows us to answer both questions more honestly. We can only estimate something we know and understand. So instead of promising everything to our client, we calculate the price per product iteration. Neither the client nor we can possibly know how many iterations will be needed to develop the new product the client is looking for. Instead of exposing the client to a huge risk or accepting the full risk ourselves, we lower the total risk and allow for corrective measures from the beginning.
Caimito uses the Scrum methodology for all projects. Development in Scrum is done in sprints of a fixed length. The duration of a sprint can be one week or up to four weeks. Each sprint will deliver a new product increment, which is running code that could be used. Before a sprint starts the development team plans the work it wants to do and commits to the goal it defines for the sprint. After the sprint the team conducts a meeting with the client to demo the new product increment and gathers feedback.
Before a sprint starts, team and client know exactly what the product increment will be. There will be no surprises in terms of unexpected results, missing features or rising costs.
Depending on the size of the team and the complexity of the project, we suggest short sprints and small team sizes to further lower the risk. A misconception can easily be handled when detected after one week, while the same problem becomes costly after four weeks.
We understand that our clients are busy with their own business and frequently can’t afford to dedicate too many resources to the much-needed dialog with the development team. Usually agile practices ask for a client on-site who works with the team each day. Unfortunately that doesn’t work for all clients and we’ve come up with an alternative.
In our adaptation of Scrum there is the client, a client representative and the team. Facing the team, the client representative acts as Scrum Product Owner and is responsible for administering the product backlog, which is the list of requirements for the product in the form of user stories, change requests and bug reports. Facing the client, the same person represents the team, communicates the achievements of a sprint, and gathers new requirements.
The skill set of this person includes that of a business analyst with experience in the client’s industry, or at least the ability to learn quickly, but also language skills, as clients do not necessarily speak the same language as the development team.
Currently Caimito supports clients in English, German and Spanish.
Domain Driven Design and Unit Tests
Software developers are experts in software engineering, but can’t be experts in the client’s domain. The challenge we face is to create a working solution that optimizes processes in the client’s business without fully understanding a business foreign to us. Domain-Driven Design (DDD) is a relatively new technique that allows us to model the important parts of a client’s business as objects in code and shape the solution by adding functionality incrementally. Backed by automated tests, we can assure that things that worked before keep working while we add or modify code to build or extend the system.
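As a rough illustration of what “modeling the client’s business as objects in code” means, assume a hypothetical client in invoicing (Invoice and LineItem are invented names for this sketch, not part of any real project):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical domain model fragment: the vocabulary of the client's
// business (Invoice, LineItem) appears directly as types in the code.
class LineItem {
    final String description;
    final int amountInCents;
    LineItem(String description, int amountInCents) {
        this.description = description;
        this.amountInCents = amountInCents;
    }
}

class Invoice {
    private final List<LineItem> items = new ArrayList<>();
    void add(LineItem item) { items.add(item); }
    int totalInCents() {
        int total = 0;
        for (LineItem item : items) total += item.amountInCents;
        return total;
    }
}
```

Because the model uses the client’s own terms, a conversation about a wrong invoice total points straight at the code that computes it, and an automated test on `totalInCents` keeps that behavior stable as the model grows.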
Continuous Integration, Integration Tests and test coverage
Some programmers crank out code at high speed and leave testing to a QA (quality assurance) department. We don’t believe in this approach. Instead we perform continuous integration using an automated build and test environment where all tests ever written for the project are executed with each hourly build of the entire code base. That way we know immediately if new code breaks something and can fix it before more valuable time is wasted following the wrong track.
Our integration tests are end-to-end tests that span from the presentation layer down to the infrastructure layer, and further down to a real SQL database if such a data storage technology is part of the solution. Tests on the presentation layer are performed with automated testing tools that simulate users working with the application’s user interface, and they are part of the automated tests run with each hourly build.
Tests do not serve any purpose if they don’t cover enough scenarios. As part of the automated build we run a tool that reports test coverage per package, per class and per method. We make sure that we have more than 90% coverage at the per-method level.
A code base with very good test coverage, an easy-to-run automated build process and good integration tests is easy to maintain and extend in the future. Even a new development team can work successfully with little ramp-up time. The investment of extra time to improve test coverage or keep it high pays off fast in the form of reduced costs and greater stability of the overall solution.