We ensure development quality through a set of documented procedures. These procedures follow the international standard for IT service management (ITIL). We use these standards to describe the implementation and service of all of our products, including Inception CRM.
Quality Assurance requires a lot of documentation. But it’s not documentation for the sake of documentation. Good documentation helps manage expectations and brings clarity to the budget. It also helps us define the development approach best suited to the project.
For each project, we precisely describe requirements and their acceptance criteria. Risk Management is an integral part of this, as is validation. To make sure a product works properly, we test every component against its requirement, and we test the entire product against any identified risks.
We also follow best practices for software development. We use international programming standards and review all of our code before releasing it. And we use professional tools for source control, versioning, testing and deployment to make sure there are no hiccups along the way.
As most of our products are based on the Microsoft .NET platform, we follow Microsoft .NET Framework Guidelines and Best Practices for writing source code.
The basic development tool is Microsoft Visual Studio, an integrated development environment that enables developers to design, develop, and debug applications, write unit tests, prepare automated load tests, and analyze runtime problems.
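Although our tooling is .NET-based, the underlying unit-testing idea is language-agnostic: each function is exercised against its requirement in isolation. As an illustrative sketch only (the function and values are hypothetical, not Inception CRM code), here is what such a test looks like in Python:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business function: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    """Each test maps to one acceptance criterion of the requirement."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_discount_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)
```

A test runner executes every such class during the build, so a failed requirement check blocks the release.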
We perform code reviews either during development, prior to submitting code to source control, or after submission, by evaluating the changes against the previous version of the source code.
Code review is performed for any code that affects system-wide functionality, or that has an impact on availability, integrity, confidentiality, performance, memory consumption, the user experience, the API, or security.
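One simple way to enforce such a policy is to flag changesets that touch sensitive areas so the review cannot be skipped. The sketch below illustrates the idea; the path prefixes are hypothetical examples, not our actual repository layout:

```python
# Areas whose changes always require mandatory review (hypothetical paths).
SENSITIVE_PREFIXES = ("src/auth/", "src/api/", "deploy/")

def needs_mandatory_review(changed_files):
    """Return True if any changed file touches a security-, API-,
    or deployment-sensitive area of the codebase."""
    return any(f.startswith(SENSITIVE_PREFIXES) for f in changed_files)
```

For example, `needs_mandatory_review(["src/auth/login.cs", "README.md"])` returns `True`, while a purely documentation-only changeset would not trigger the gate.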
Requirements specifications include the customer’s requirements as well as the features of the product we’re delivering.
The User Requirements Specification (or “URS”) contains all of a customer’s high-level and specific needs. It describes both functional and non-functional requirements, from interface design to the customer’s password policy. It’s the basis for all of the work that follows.
Functional requirements specify what the product must do. They are captured in a Functional and Design Specification (“FSDS”). The FSDS precisely describes all of the features and functions of a delivered product according to its required use cases.
This includes both system behaviors and workflows, as well as an app’s look and feel. Each feature and function is connected to its corresponding user requirement to make sure nothing is missed.
We address non-functional requirements, such as security and availability, in a Technology and Solution Architecture specification (or “TSA”). The TSA lists all of the technologies that will be used to develop, build and deploy a software product.
The TSA covers everything from the deployment model to the frameworks, coding languages, GUI components and software libraries used to make the product. It also describes the technical infrastructure, hosting (in the case of client-server applications), and any connected or third-party services.
Customer requirements generally determine our development approach. They tell us what technologies to use and how to deploy the product once it’s ready to be released. During development, we refine the product’s design according to the limits and possibilities of the selected tools.
Key factors that determine our development approach are the deployment model and connected services. System deployment and installation follow the technical hardware and software configuration requirements for the product.
Native apps for iOS, Android, Windows and web browsers, for example, each require different technologies and deployment methods. Client-server applications have more dependencies than standalone applications, as do apps connected to third-party services.
When a product is connected to external data services, we create additional specifications for data migration. The Data Migration Specification covers a wide range of requirements for the data migration process, including:
- Defining business rules for data validation, cleansing and transformation
- Defining business rules for data input
- Executing data cleansing according to business rules, supporting data input processes and resolving data quality queries
- Selecting a data modeling tool with reverse engineering capability (e.g. ERWin)
- Identifying all required data sources and the owner of each source
- Establishing data feeds
- Identifying (inactive and active) legacy systems and operational data stores
- Defining the data items required (customer definitions)
- Creating data models for the source data
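Business rules for validation and cleansing, the first items above, translate directly into code. The sketch below illustrates the shape of such rules; the field names and rules themselves are hypothetical examples, not a customer's actual rule set:

```python
def cleanse_customer(record: dict) -> dict:
    """Cleansing rules: trim whitespace and normalise casing
    before the record is validated."""
    return {
        "name": record.get("name", "").strip().title(),
        "email": record.get("email", "").strip().lower(),
    }

def validate_customer(record: dict) -> list:
    """Validation rules: return a list of violations.
    An empty list means the record is valid."""
    errors = []
    if not record["name"]:
        errors.append("name is required")
    if "@" not in record["email"]:
        errors.append("email must contain '@'")
    return errors
```

Keeping cleansing and validation as separate steps means a record is always validated in its normalised form, so data quality queries can be resolved consistently across sources.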
Data Migration is a complex process that involves a number of steps, including:
- Selecting a data migration tool
- Creating extract files for each data source
- Resolving any errors identified by users (administrators)
- Creating extracts and performing data clean up
- Completing the migration (extract, transform and load) for each data source
- Running acceptance tests on the integrated target database (in particular, referential integrity)
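The extract, transform, and load steps above, and the closing referential-integrity check, can be sketched end-to-end. This is a deliberately minimal illustration under hypothetical schemas, not a real migration tool:

```python
def extract(source_rows):
    """Extract: pull rows from a source, dropping empty records."""
    return [row for row in source_rows if row]

def transform(rows):
    """Transform: coerce types and normalise values per business rules."""
    return [{"id": int(r["id"]), "country": r["country"].upper()}
            for r in rows]

def load(rows, target):
    """Load: append the transformed rows into the target store."""
    target.extend(rows)
    return target

def check_referential_integrity(orders, customers):
    """Acceptance check: every order must reference an existing
    customer id. Returns the orphaned orders (empty list = pass)."""
    known = {c["id"] for c in customers}
    return [o for o in orders if o["customer_id"] not in known]
```

In an acceptance test, a non-empty result from the integrity check fails the migration for that data source, and the extract is corrected and re-run.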
To ensure the quality of our products, we establish acceptance criteria. The requirements for performance, safety, and testing determine what the acceptance criteria look like. We then test all of our acceptance criteria to make sure the products meet the quality requirements.
These tests, which cover everything from development to installation and performance, form the basis of our quality guarantee. They also establish the parameters for a product’s quality documentation.
Defining testing methodologies to be carried out in each stage of the development project (as defined in the test documentation for each development stage) is critical to the success of a project. Major areas of testing that we address in our test plans include:
- Development testing
- Installation testing
- Operational testing
- Performance testing
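One practical way to keep a test plan honest against these major areas is to express it as data, so coverage can be verified automatically. A minimal sketch (the plan entries are hypothetical examples):

```python
# The major testing areas every test plan must address.
REQUIRED_AREAS = {"development", "installation", "operational", "performance"}

# A hypothetical test plan: each area maps to its planned activities.
test_plan = {
    "development": ["unit tests", "code review checklist"],
    "installation": ["clean install on a reference environment"],
    "operational": ["backup and restore drill"],
    "performance": ["load test at twice the expected traffic"],
}

# An empty difference means every required area is covered by the plan.
missing_areas = REQUIRED_AREAS - set(test_plan)
```

A plan missing, say, performance testing would show up immediately in `missing_areas` rather than being discovered during validation.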
Functional Risk Assessment
We assess functional risk once we’ve finalized the functional specification. We take into account everything that can go wrong in a live scenario and prepare ways to mitigate those risks in advance. As a result, the functional specification might be updated to account for any new features that we need to implement to address potential risks.
However, the main output of the functional risk assessment is the product validation strategy. A key part of this is testing. The best way to mitigate risk is simply to test any bugs, errors or unwanted behaviors out of the app. To that end, we prepare a robust testing plan that covers all important areas of the system.
Testing is a key component of product validation. It involves many separate testing activities, beyond those conducted by developers. While unit testing and code reviews are important, these are more routine functions.
Product validation, on the other hand, focuses more on the quality documentation that needs to be produced. This documentation provides important evidence that the product works as expected, and that quality has been assured.
The validation policy provides the scope for all the tests we conduct as part of the validation process. It identifies the types of quality documents we need to produce, as well as their structure and contents.
In most cases, the validation policy defines the document structure and contents for Installation Qualification (IQ), Operation Qualification (OQ) and Performance Qualification (PQ). Each qualification contains its own validation protocols and test cases.
Validation protocols are essentially all of the test plans that we need to carry out in order to ensure product quality. We validate our software by following the test cases described in each plan. Each test case has its own steps, which allow any developer to replicate another developer’s test to confirm that everything is working well.
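A test case of this kind is just an explicit list of steps, each with an expected result, so that execution is replicable by anyone. The sketch below illustrates the structure; the case ID, steps, and expected results are hypothetical, not taken from a real Inception CRM protocol:

```python
# A hypothetical OQ test case: each step pairs an action with its
# expected, observable result.
test_case = {
    "id": "OQ-TC-007",
    "title": "User can reset a forgotten password",
    "steps": [
        ("Open the login screen",     "Login form is displayed"),
        ("Click 'Forgot password'",   "Reset form is displayed"),
        ("Submit a registered email", "Confirmation message shown"),
    ],
}

def run(case, execute_step):
    """Execute each step via `execute_step(action)` and compare the
    observed result to the expected one. Returns (passed, failed)."""
    passed = failed = 0
    for action, expected in case["steps"]:
        observed = execute_step(action)
        if observed == expected:
            passed += 1
        else:
            failed += 1
    return passed, failed
```

Because the expected result of every step is written down, the pass/fail record produced by `run` doubles as the quality documentation evidence for that protocol.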
We design test cases to confirm a number of things. One of them is product functionality (i.e., that we’ve delivered all required features and they are working properly). Another is product usability (i.e., how well the product performs in a live scenario). And a third is functional risk.
Validating a product against risks is important. It requires us to check whether any of the identified risks materialize, and if so, to see how effectively the mitigation strategy addresses them. For example, if a connected service fails, does the user have a way to easily reboot it?
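Such a risk-mitigation check can itself be automated: simulate the failure and assert that the mitigation recovers from it. The sketch below, under hypothetical assumptions (a flaky connected service and a simple retry policy as the mitigation), shows the pattern:

```python
def call_with_retry(service, attempts=3):
    """Hypothetical mitigation: retry a failing connected service
    up to `attempts` times before surfacing the error."""
    last_error = None
    for _ in range(attempts):
        try:
            return service()
        except ConnectionError as exc:
            last_error = exc
    raise last_error

class FlakyService:
    """Test double that fails a set number of times, then recovers,
    simulating a transient outage of a connected service."""
    def __init__(self, failures):
        self.failures = failures
    def __call__(self):
        if self.failures > 0:
            self.failures -= 1
            raise ConnectionError("service unavailable")
        return "ok"
```

A validation test then asserts that `call_with_retry(FlakyService(2))` succeeds, proving the mitigation masks a transient outage, while a longer outage still surfaces as an error for the user to act on.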
A product is released only once it has met all acceptance criteria and passed validation checks. However, it is normal for products to evolve after their initial release as new requirements are introduced.
Configuration and change management follow procedures similar to those used for initial development and release. This means that new requirements are defined and a new development plan is established.
After that, new acceptance criteria are defined and the product is tested until they’re met. The new version is released only after complete validation of the entire product.