What is the process for a new Twona Release?

Some of you may have been wondering what happens before a new release of Twona AMS takes place. 

When developing any new functionality for our Twona system, we follow a clear structure, which is consistent with SaaS product development. It consists of a well-defined series of steps that allows us to design, develop, deploy, and maintain the system in the best way possible.

Here is a bit of what happens:

Image source https://wallpapercave.com/bpm-wallpapers

Ideas and planning

Where does new functionality originate? That depends. Sometimes new functionality is linked to user feedback, which comes in through tickets or conversations with customer success managers; other times, we follow the market to see what functionality is out there that could be interesting for our users, and we evaluate it; a new feature can also be the result of an internal business objective that we aim to achieve. 

Whatever the source, we always assess the impact the functionality would have on features already in the system, how it would affect the user experience, whether it is something that all or most customers would benefit from and use, and how it would affect our existing infrastructure. 

During this phase, we also make sure that the user requirements (internal or external) are clearly defined, looking at all possible scenarios.

All this information is gathered in our Product Development board in the form of cards, which go through several process steps if they are validated by the product owners. If a card is validated, the next phase kicks in: design.

Design 

When we talk about design, we are referring to the visual interpretation of the application screens that will contain the new functionality, often referred to as wireframes or mockups. When we create these, we also make sure to cover the interactions and integration with existing features, and we involve the technical and customer teams to generate a result that is consistent with the rest of the application and seamless for the customer experience. 

Here we define many possible scenarios of how the functionality would be used, where it would be accessed from, and how it will appear to users with different access levels to the platform. 

Development 

Photo by Shahadat Rahman on Unsplash

Here is where our front-end and back-end developers get to work! They will be writing code (clean and modular ;)) to implement the wireframe design, integrate it with the existing software, and make sure that everything is compatible and creates no disruption to the user experience. 

Development takes place on servers separate from our production platform, to make sure that nothing is compromised. When the work is ready, it is moved to the testing environment. 

Testing

During the testing phase, several departments will be involved. 

Initially, the development team performs a test to confirm the functionality is working as expected, but more importantly, they perform some integration tests. These make sure that the new functionality is not going to break anything that was already in place. 
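To illustrate the idea, an integration test of this kind might look like the following sketch in Python. Everything here (the `ArtworkStore` class and its methods) is invented for illustration; Twona's actual test suite is not public:

```python
# Hypothetical sketch: an integration test checks that a new feature
# (version tagging) does not break an existing one (version listing).

class ArtworkStore:
    """Minimal stand-in for the service under test."""
    def __init__(self):
        self.versions = []

    def add_version(self, name):          # existing functionality
        self.versions.append({"name": name, "tags": []})

    def tag_version(self, name, tag):     # the "new" functionality
        for v in self.versions:
            if v["name"] == name:
                v["tags"].append(tag)

    def list_versions(self):              # existing functionality
        return [v["name"] for v in self.versions]


def test_tagging_does_not_break_listing():
    store = ArtworkStore()
    store.add_version("label_v1")
    store.add_version("label_v2")
    store.tag_version("label_v2", "approved")
    # The pre-existing listing behaviour must be unchanged.
    assert store.list_versions() == ["label_v1", "label_v2"]


test_tagging_does_not_break_listing()
```

The point of such a test is not to re-check the new feature in isolation, but to exercise it alongside the features that were already in place.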

When this is confirmed, the product owner and the customer success team get to test the functionality. Several eyes see more than two, so we always make sure that not only the originator of the request for the new functionality tests it, but at least one other person does. Often, testing is performed by at least three people, sometimes taking on different roles and user permissions during the test. 

If any issues are identified, it is back to the development team. A proper description of the issue is registered in our Product Board and the card for the functionality is sent back. The development team then works on the areas that are not behaving as expected, often has questions for the requesters, and gets back to coding. 

The full process above is repeated until the functionality passes all tests. 

When this happens, the functionality can be put into the production environment. 

Deployment

At Twona, we have a rule to not deploy (send to production) on a Friday afternoon. Although we very rarely experience any issues when doing a deployment, we want to make sure that if we do encounter one, it is sorted out quickly, without robbing anyone of their weekend. 

So, we normally plan deployments during the day. This is possible because there is little to no disruption to our clients' work when deployments take place; they are largely transparent to their operations. The reason is that we use a process in which the old functionality (the current version of our system) is not disconnected until the new one is in place, so users will not notice anything until the switch happens and the new release appears on their screens.
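This "keep the old version serving until the new one is ready" approach is commonly known as a blue-green deployment. A minimal sketch of the switching logic, in Python (the `Router` class and health check are illustrative assumptions, not Twona's actual tooling):

```python
# Illustrative blue-green switch: the "green" (new) version is brought up
# and health-checked while "blue" (current) keeps serving traffic; only
# then is the traffic pointer flipped, so users see no downtime.

class Router:
    def __init__(self, live_version):
        self.live = live_version          # version users currently reach

    def deploy(self, new_version, healthy):
        """Flip traffic to new_version only if its health check passes."""
        if not healthy(new_version):
            return self.live              # keep old version, no disruption
        previous, self.live = self.live, new_version
        return previous                   # kept around for quick rollback


router = Router("v1.4")
previous = router.deploy("v1.5", healthy=lambda v: True)
# router.live is now "v1.5"; "v1.4" stays available for rollback.
```

Because the old version is only retired after the new one is confirmed healthy, a failed deployment simply leaves traffic where it was.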

The work starts by preparing the deployment environment: servers, databases, and any infrastructure changes (if any, as we normally do these separately). Once the deployment to the production environment starts, it is constantly monitored for issues. When the release is completed, a notification is sent internally so that account managers can keep an eye on their clients for any ticket that may be related to the deployment. 

Documentation and Communication

This phase does not really start now, after the deployment, but from the moment development of the new functionality begins. 

We make sure that new functionality is added to the online user guides, and start the preparation of newsletters with information on the changes. Very often, we distribute this information before the deployment is done so that our users get to know what is coming and how it will look when they log in to the system.  

We also organize online webinars regularly to communicate and explain the new releases, especially when they cover several items, or when the functionality is very new or very different from what our users were used to. That way we can make sure they continue to make the most out of our tool.  

Customer feedback, support, and continuous improvement

As mentioned before, we closely monitor our users' experience after a release. This happens proactively, through interactions with clients via their success managers and during trainings/online sessions about the new functionality, and more reactively, by responding to tickets that customers may have raised. 

We receive a lot of positive feedback in response to new features, but of course there are also improvement points that customers may notice. We react immediately to any comment that indicates a bug or malfunction in the new functionality, or in any other area of the application as a result of the release; those are given priority and resolved with urgency. 

However, we do take all feedback very seriously, and even though not all requests for improvement or changes see the light of day immediately, they are always studied by the product team together with customer success to determine whether they are individual/customized requests or would benefit our wider customer base. 

Photo by Jon Tyson on Unsplash

As mentioned, many of our new features start off as a customer feedback request, which is why we appreciate any input we receive about the tool. We are always looking for opportunities to identify areas for further enhancement. 

Of course, we have our regular development and update cycles, but we aim to combine both our own views and expertise on the market with the needs of our customers to make the best artwork management system possible.

On-premises vs SaaS

Even though the corporate solutions landscape has rapidly evolved over the last decade, the decision between an on-premises software installation and a SaaS cloud solution is a common one that many organizations face. There are several key differences between the two that impact cost, functionality, and security.

  1. Cost: On-premises software requires a significant upfront investment in hardware, maintenance, and upgrades. It also requires in-house expertise in the form of developers, engineers, and infrastructure and security experts. In contrast, a SaaS solution is generally sold as a subscription service and eliminates the need for a large upfront investment. This means that the cost of a SaaS solution is more predictable and often more manageable.
  2. Functionality: On-premises software offers more customization options, but it also requires more expertise to set up and manage. Development and installation take a significant amount of time as the complexity of the required functions increases, often taking several months to years to set up a system. A SaaS solution, on the other hand, is managed by the vendor. It typically offers less customization but is easier to set up and use. If the SaaS solution offers a powerful API, customization can be extended further. This can lead to a more streamlined and efficient process with a significantly lower go-live time.
  3. Security: On-premises software is often perceived as more secure because the data is stored on the organization’s own servers. However, it also requires more resources and expertise to manage and protect. A SaaS solution is managed by the vendor and typically offers a higher level of security than an on-premises solution, especially when large-scale, well-known infrastructure providers such as Amazon are used. It also involves more trust in the vendor and their security practices, which is typically addressed with Information Security audits.
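On the functionality point: customization on top of a SaaS product typically happens through its API rather than through code changes to the product itself. A hedged sketch of what such an integration might look like in Python (the base URL, endpoint, and field names are invented for illustration; consult your vendor's actual API reference):

```python
import json

# Hypothetical example: preparing a call that pushes a custom metadata
# field to a SaaS API. All endpoint and field names are made up.

API_BASE = "https://api.example-saas.com/v1"   # placeholder URL

def build_update_request(project_id, custom_fields, token):
    """Return (url, headers, body) for a metadata update call."""
    url = f"{API_BASE}/projects/{project_id}/metadata"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"custom_fields": custom_fields})
    return url, headers, body


url, headers, body = build_update_request(
    "prj_42", {"regulatory_market": "EU"}, token="secret"
)
# The prepared request could then be sent with any HTTP client.
```

The appeal is that such customizations live in the customer's own scripts and survive vendor upgrades, instead of forking the product.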

In conclusion, when deciding between an on-premises software installation and a SaaS cloud solution, it’s important to consider the cost, functionality, and security implications of each option. While on-premises software offers more customization options, it also requires more resources and expertise to set up and manage. SaaS solutions are easier to use and offer more predictable costs, but they also involve more trust in the vendor and their security practices. Ultimately, the right solution will depend on the specific needs and resources of each organization, but let’s be honest: who in their right mind would decide in 2023 to purchase an on-premises solution when there are SaaS alternatives on the market?

Is Software Validation outdated?

Image generated with Midjourney

Software validation is the process of ensuring that software systems meet the requirements set forth by regulatory bodies, such as the FDA in the United States. This is particularly important in highly regulated industries, such as the pharmaceutical industry, where software systems are used to manage and analyze critical data that is used to support the development and manufacture of drugs.

The origin of software validation can be traced back to the early days of computer technology in the pharmaceutical industry. In the 1970s, the FDA began to recognize the importance of software validation as a means of ensuring the accuracy and reliability of data generated by computer systems. This led to the development of guidelines and regulations for software validation, specifically in the pharmaceutical industry, such as the FDA’s “General Principles of Software Validation” guidance in 2002.

One key document that is created during the software validation process is the Master Validation Plan (MVP). The MVP is a comprehensive document that outlines the overall strategy and approach for validating the software. It includes details such as the scope of the validation, the validation team, and the schedule for validation activities. It is the first and foremost piece of documentation that needs to be created.

Following the MVP, you need to build three key documents: the IQ, OQ, and PQ.

Installation Qualification (IQ) ensures that the software system is installed and configured properly, while Operational Qualification (OQ) verifies that it functions as intended in its intended environment.

Performance Qualification (PQ) tests the software system to verify that it performs as intended and meets the acceptance criteria defined in the Qualification Protocol (QP).
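Where teams automate parts of PQ, acceptance criteria from the Qualification Protocol can be expressed as executable checks rather than manual checklists. A simplified, hypothetical sketch in Python (the criteria names and thresholds are invented for illustration):

```python
# Hypothetical PQ-style check: each acceptance criterion from the
# Qualification Protocol becomes a recorded pass/fail result.

def run_pq(criteria, measurements):
    """Compare measured values against acceptance ranges.

    criteria: {name: (min_allowed, max_allowed)}
    measurements: {name: observed_value}
    Returns {name: True/False} so results can be filed as evidence.
    """
    results = {}
    for name, (low, high) in criteria.items():
        value = measurements[name]
        results[name] = low <= value <= high
    return results


criteria = {"upload_time_s": (0.0, 5.0), "pdf_render_ok_pct": (99.0, 100.0)}
measurements = {"upload_time_s": 2.3, "pdf_render_ok_pct": 99.8}
assert all(run_pq(criteria, measurements).values())   # all criteria met
```

The pass/fail output doubles as the evidence trail that regulated QA processes require.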

As technology and software development methodologies have evolved since the 1970s, the need to adapt the validation model to modern SaaS cloud-based solutions has become increasingly important. With the advent of cloud computing, software systems are no longer installed and run on a single machine; they are accessed through the internet from various devices and locations, typically as multi-tenant systems, a radically different paradigm from the early on-site, single-tenant installations. This has led to the development of new guidelines and regulations for software in this space, such as the FDA’s “Mobile Medical Applications” guidance in 2013, although one might argue that those models are still outdated given the speed at which technology and cloud services advance.

In conclusion, software validation is a critical process in ensuring the accuracy and reliability of data generated by computer systems in highly regulated environments. However, applying outdated validation methods will only lead to frustration and failure.

If you are about to embark on a validation process for a SaaS solution but your QA team only has experience with traditional on-site installations, do not rush. Take your time, read the available literature, get familiar with the tools and infrastructure used by your chosen vendor, and, if necessary, ask for additional budget to ensure the validation is not only successful but, more importantly, relevant.