What is the process for a new Twona Release?

Some of you may have been wondering what happens before a new release of Twona AMS takes place. 

When developing any new functionality for our Twona system, we follow a clear structure, which is consistent with SaaS product development. It consists of a well-defined series of steps that allows us to design, develop, deploy, and maintain the system in the best way possible.

Here is a bit of what happens:

Image source https://wallpapercave.com/bpm-wallpapers

Ideas and planning

Where does new functionality originate? That depends. Sometimes new functionality is linked to user feedback, which comes from tickets or from conversations with customer success managers; other times, we follow the market to see what functionality is out there that could be interesting for our users, and we evaluate it; a new feature could also be the result of an internal business objective that we aim to achieve.

Whatever the source of this, we always assess what the impact of the functionality would be on other features already in the system, how it would affect user experience, whether this is something that all or most customers would benefit from and use, and how it would affect our existing infrastructure. 

During this phase, we also make sure that the user requirements (internal or external) are clearly defined, looking at all possible scenarios.

All this information is gathered in our Product Development board in the form of cards, which go through several process steps. If a card is validated by the product owners, the next phase kicks in: the design phase.

Design 

When we talk about design, we are referring to the visual interpretation of the application screens that will contain the new functionality. These are often referred to as wireframes or mockups. When we create these, we make sure that we cover the interactions and integration with existing features, and we involve the technical and customer teams to produce a result that is consistent with the rest of the application and seamless for the customer experience.

Here we define many possible scenarios of how the functionality would be used, where it would be accessed from, and how it will appear to users with different access levels to the platform.

Development 

Photo by Shahadat Rahman on Unsplash

Here is where our front-end and back-end developers get to work! They will be writing code (clean and modular ;)) to implement the wireframe design, integrate it with the existing software, and make sure that everything is compatible and creates no disruption to the user experience.

The development takes place on servers separate from our own platform, to make sure that nothing is compromised. When it is ready, it is moved to the testing environment.

Testing

During the testing phase, several departments will be involved. 

Initially, the development team performs tests to confirm the functionality is working as expected but, more importantly, they also perform integration tests. These make sure that the new functionality is not going to break anything that was already in place.
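
To give an idea of what that means in practice, here is a small, self-contained sketch in Python. The ArtworkArchive class and the tests are invented for illustration and are not Twona code; they simply show the principle of checking both that the new behaviour works and that the behaviour that was already there is untouched.

```python
# Illustrative only: a toy "existing feature" plus a toy "new feature",
# with one test for the new behaviour and one regression test for the old.

class ArtworkArchive:
    """Tiny in-memory stand-in for an existing feature: storing artwork versions."""

    def __init__(self):
        self._versions = {}

    def add_version(self, name, content):
        self._versions.setdefault(name, []).append(content)

    def history(self, name):
        return list(self._versions.get(name, []))

    def latest(self, name):
        return self._versions[name][-1]


# "New functionality": detecting whether the two most recent versions differ.
def latest_versions_differ(archive, name):
    versions = archive.history(name)
    if len(versions) < 2:
        return None
    return versions[-2] != versions[-1]


def test_new_feature_works():
    archive = ArtworkArchive()
    archive.add_version("box", "design v1")
    archive.add_version("box", "design v2")
    assert latest_versions_differ(archive, "box") is True


def test_existing_behaviour_unchanged():
    # Regression check: the feature that was already in place still works
    # exactly as before, now that the new code is in the picture.
    archive = ArtworkArchive()
    archive.add_version("leaflet", "design v1")
    assert archive.latest("leaflet") == "design v1"


if __name__ == "__main__":
    test_new_feature_works()
    test_existing_behaviour_unchanged()
    print("all checks passed")
```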

When this is confirmed, the product owner and customer success team get to test the functionality. Several eyes see more than two, so we always make sure that not only the originator of the request gets to test it, but at least one other person does. Often, testing is performed by at least three people, sometimes taking on different roles and user permissions while performing the test.

If any issues are identified, it means we are back with the development team. A proper description of the issue is registered in our Product Board and the card about the functionality is sent back. The development team then works on the areas that are not behaving as expected, usually has some questions for the requestors, and gets back to coding.

The full process above is repeated until the functionality passes all tests. 

When this happens, the functionality can be put into the production environment.

Deployment

At Twona, we have a rule to not deploy (send to production) on a Friday afternoon. Although we very rarely experience any issues when doing a deployment, we want to make sure that if we do encounter one, it is sorted quickly, without robbing anyone of their weekend.

So we normally plan deployments during the day. This is possible because there is little to no disruption to our clients' work when deployments take place; they are largely transparent to their operations. The reason for this is that we use a process in which the current version of our system does not get disconnected until the new one is in place, so users will not notice anything until the switch happens and the new release appears on their screens.
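
As a minimal sketch of that idea, the Python snippet below shows a blue/green style switch: the newly deployed environment is health-checked first, and traffic is only pointed at it once it responds, so the current version keeps serving users until that moment. The environment URLs, health-check path, and router file are hypothetical and do not describe Twona's actual deployment tooling.

```python
# Hypothetical blue/green switch sketch; names and endpoints are illustrative only.
import json
import urllib.request

ENVIRONMENTS = {
    "blue": "https://blue.example.internal",    # currently live version
    "green": "https://green.example.internal",  # newly deployed release
}
ROUTER_CONFIG = "active_environment.json"       # file read by the proxy / load balancer


def is_healthy(base_url):
    """Return True if the environment answers its health-check endpoint."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False


def switch_traffic(target):
    """Point the router at the target environment; the old one stays up as a fallback."""
    with open(ROUTER_CONFIG, "w") as fh:
        json.dump({"active": target, "url": ENVIRONMENTS[target]}, fh)


if __name__ == "__main__":
    if is_healthy(ENVIRONMENTS["green"]):
        switch_traffic("green")  # users are only moved once the new release responds
    else:
        print("New release not healthy yet; traffic stays on the current version.")
```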

The work starts by preparing the deployment environment: servers, databases, and any infrastructure changes (if any; we normally do these separately). When we start the deployment to the production environment, it is constantly monitored for any issues. When the release is completed, a notification is sent internally so that account managers can keep an eye on their clients for any potential tickets that may be related to the deployment.

Documentation and Communication

This phase does not really start only now, after the deployment, but from the moment development starts on the new functionality.

We make sure that new functionality is added to the online user guides, and start the preparation of newsletters with information on the changes. Very often, we distribute this information before the deployment is done so that our users get to know what is coming and how it will look when they log in to the system.  

We also regularly organize online webinars to communicate and explain the new releases, especially when these cover several items, or when the functionality is very new or very different from what our users were experiencing before, so we can make sure that they continue to make the most out of our tool.

Customer feedback, support, and continuous improvement

As mentioned before, we closely monitor our user experience after a release. This happens proactively through interactions with clients via their success managers and during trainings/online sessions about the new functionality, and more reactively through tickets that customers may raise.

We do receive a lot of positive feedback in reaction to new features, but of course there are also improvement points that customers may notice. We react immediately to any comment that indicates a bug or malfunction of the functionality, or of any other area of the application as a result of the release; those are given priority and resolved with urgency.

However, we take all feedback very seriously, and even though not all requests for improvements or changes see the light of day immediately, they are always studied by the product team together with customer success to determine whether they are individual/customized requests or whether they would benefit our wider customer base.

Photo by Jon Tyson on Unsplash

As mentioned, many of our new features start off as a customer feedback request, which is why we appreciate any input we receive about the tool. We are always looking for opportunities to identify areas for further enhancement. 

Of course, we have our regular development and update cycles, but we aim to combine both our own views and expertise on the market with the needs of our customers to make the best artwork management system possible.

Artwork Management? On the Cloud, please

Photo by Massimo Botturi on Unsplash

If you follow the news, you will have heard of the cyber attacks that many pharmaceutical companies have suffered in recent years. The pandemic, and the sheer need to work from home and for employees to remotely access their valuable data, left companies that were not well prepared for this change in terms of security measures highly exposed to these schemes.

Besides the obvious consequences of data loss and potentially stolen intellectual property, there are other, larger consequences such as job losses and regulatory fines. A company's reputation and brand can also be affected. According to this Forbes Insight report, 46% of organizations had suffered damage to their reputation as a result of a data breach.

One of our customers was the target of such a cyber attack in 2022. A lot of their data, which was sitting on internal servers, was compromised, affecting several areas within the company. Employees could not access data for days, and information was lost. Luckily, the Artwork and Regulatory teams were managing their packaging designs with our cloud-based Artwork Management System, so they could continue working normally and none of their data was lost.

Even though the artwork management software market is expected to grow at a CAGR of 9.0% from 2022 to 2028, using cloud technologies is not always a given for Life Sciences companies. Many of them do not consider a SaaS solution when looking at a change of their artwork management system, and insist on having it installed on their own servers. The main reason for this is that they believe they will have better control over the application and that it will be more secure.

Wrong.

More secure?

Cloud service providers offer heavy physical security measures to protect their data centers: advanced security systems including biometric authentication, surveillance cameras, and 24/7 monitoring. They also have redundancy in place to ensure that data is not lost due to natural disasters or other events. Providers also hire security professionals solely dedicated to ensuring that their infrastructure and services are secure; and since they have access to the latest security technologies, adapting to new threats is very quick.

More control?

When it comes to “having more control” over the solution, the reality shows that organizations often rely on manual processes to detect any unusual activity, which can be time-consuming, prone to errors, and too late when a serious problem occurs. Cloud service providers, on the contrary, have continuous monitoring in place to detect any unauthorized access or unusual activity. They use advanced security technologies such as intrusion detection and prevention systems, firewalls, and access controls. And they have the time and resources to continuously run security audits and vulnerability assessments to identify and address any weaknesses in their infrastructure.

Furthermore, on-premise solutions may not be updated regularly, leaving them vulnerable to new threats. When updates do occur, systems often need to be shut down for a period of time, and users are not allowed to log in until the changes have been implemented and tested on the production environment. Cloud solutions are constantly being improved, and most changes do not require a full stop of system usage, as new releases can often be done within a few minutes and are transparent to the common user. As a consequence, updates to cloud solutions tend to happen on a more regular basis.

Another setback for on-premise solutions is that organizations may not have the resources available to scale up in case of increased usage or rapid growth, as this requires investing in additional hardware and infrastructure for which one may not have the money or the floor space. In a cloud environment, one can quickly add or remove storage capacity based on requirements without needing any extra space, and at a fraction of the cost of owning the infrastructure.

Considering all the above, we believe – and hope that we have also given you enough reasons to believe the same – that cloud solutions are a better, more flexible, and more economical option for companies looking to update their Artwork Management platforms.

Our solution, Twona, is a SaaS solution hosted in the cloud. We review the underlying infrastructure regularly to make sure it is upgraded to the latest standards at any given time, and we constantly work on ways to improve our solution's performance based on these reviews. And if you still need some more reassurance, I would like to add as a final note that at Twona we have a recovery protocol whereby, in a matter of hours, we are able to replicate our setup and get our clients functioning again.

5 key components of a VMP for SaaS

Image created with Midjourney

Artwork management in the pharmaceutical industry is a critical process that requires accuracy and precision to ensure compliance with regulatory requirements and to avoid packaging design errors that make it to the market and risk a product recall. The artwork management process is sometimes underestimated, as it pertains to a non-core activity for brands and manufacturers of medicines and medical devices. However, it can be a critical aspect when facing an audit, even more so when such an audit is triggered by a product recall.

Multi-tenant SaaS solutions are becoming more popular with pharmaceutical companies than traditional on-site installations, as they offer lower price points and less onboarding hassle (lower financial costs and faster implementation). One key aspect that is still not fully understood is that the validation of on-site, custom-made solutions differs significantly from the validation approach required for multi-tenant SaaS applications.

The first key document is the Validation Master Plan (VMP), which outlines the validation process. Here we will discuss five key components that must be included in a VMP document for the validation of a multi-tenant SaaS Artwork Management System.

  1. Scope and Objectives – This section needs to clearly define the scope and objectives of the validation process. The scope should outline the functionalities of the multi-tenant SaaS solution that will be validated, including any third-party integrations. It is important to define the scope with care and discuss which components need to be included. One aspect that is often overlooked when validating a SaaS application is that, in many cases, there will be external services (typically microservices) and infrastructure involved. These, as long as they only play a servicing role, can be left out of the scope since they are controlled by a third party. The objectives should detail the specific outcomes that the validation process aims to achieve, such as ensuring compliance with regulatory requirements or minimizing the risk of errors or data loss.
  2. Validation Strategy and Acceptance Criteria – The validation strategy should specify the validation approach, including the type of validation to be performed, such as installation qualification, operational qualification, and performance qualification. It is a good idea to include specifically which aspects of the solution will be relevant for the validation. The acceptance criteria should detail the testing methodology, including the type of testing to be performed, such as functional testing, user acceptance testing, and performance testing.
  3. Roles and Responsibilities – This section should clearly outline the roles of the validation team, project manager, system administrator, and any other key stakeholders involved in the validation process. It should also detail the responsibilities of each team member, including their participation in the validation activities and their expected deliverables.
  4. Testing Documentation – The documentation section should detail the testing documentation that will be used and delivered during the validation process. This section should include a list of all required testing documents, including test plans, test scripts, and test cases. It can also outline the testing schedule and the expected timeline for completing each testing activity.
  5. Change Control – The final component of a VMP document should detail the procedures for making changes to the multi-tenant SaaS solution after the validation process is complete. It should include a list of change control forms (or other methods for documenting the required changes), detailing the requirements for documenting changes, and the process for reviewing and approving changes.

The VMP document is an essential tool that will guide you through the validation process. There is, however, not a single way to create it. Depending on your scope and criteria, the contents of the document can change dramatically. One critical aspect is choosing carefully which components of the application you are going to validate. For applications that rely on external infrastructure or services, especially when managed by third parties, it might prove difficult to obtain all the components required to validate those services. Our advice is to focus on your application and ensure that all third-party infrastructure and services are only services and do not represent a core data processing unit of your set of features.

If you get the VMP right, the rest of the validation will be much more approachable than having to come back to the VMP to make changes. Spend your time wisely, get the VMP right and your validation will be a breeze.

Want to know more about the key differences between a traditional VMP and a VMP for a SaaS solution? Let us know!

On-premises vs SaaS

Even though the corporate solutions landscape has rapidly evolved over the last decade, the decision between an on-premises software installation and a SaaS cloud solution is a common one that many organizations face. There are several key differences between the two that impact cost, functionality, and security.

  1. Cost: On-premises software requires a significant upfront investment in hardware, maintenance, and upgrades. It also requires in-house expertise in the form of developers, engineers, and infrastructure and security experts. In contrast, a SaaS solution is generally sold as a subscription service and eliminates the need for a large upfront investment. This means that the cost of a SaaS solution is more predictable and often more manageable.
  2. Functionality: On-premises software offers more customization options, but it also requires more expertise to set up and manage. Development and installation take a significant amount of time as the complexity of the required functions increases, taking several months to years to set up a system. A SaaS solution, on the other hand, is managed by the vendor. It typically offers less customization but is easier to set up and use. If the SaaS solution offers a powerful API, customization can be extended further. This can lead to a more streamlined and efficient process with a significantly lower go-live time.
  3. Security: On-premises software is often perceived as more secure because the data is stored on the organization's own servers. However, it also requires more resources and expertise to manage and protect. A SaaS solution is managed by the vendor and typically offers a higher level of security than an on-premises solution, especially when large-scale, well-known infrastructure providers such as Amazon are used. It also involves more trust in the vendor and their security practices, which is typically addressed through Information Security audits.

In conclusion, when deciding between an on-premises software installation and a SaaS cloud solution, it's important to consider the cost, functionality, and security implications of each option. While on-premises software offers more customization options, it also requires more resources and expertise to set up and manage. SaaS solutions are easier to use and offer more predictable costs, but they also involve more trust in the vendor and their security practices. Ultimately, the right solution will depend on the specific needs and resources of each organization, but let's be honest: who in their right mind would, in 2023, decide to purchase an on-premises solution when there are SaaS alternatives on the market?

Is Software Validation outdated?

Image generated with Midjourney

Software validation is the process of ensuring that software systems meet the requirements set forth by regulatory bodies, such as the FDA in the United States. This is particularly important in highly regulated industries, such as the pharmaceutical industry, where software systems are used to manage and analyze critical data that is used to support the development and manufacture of drugs.

The origin of software validation can be traced back to the early days of computer technology in the pharmaceutical industry. In the 1970s, the FDA began to recognize the importance of software validation as a means of ensuring the accuracy and reliability of data generated by computer systems. This led to the development of guidelines and regulations for software validation, specifically in the pharmaceutical industry, such as the FDA’s “Guideline on General Principles of Software Validation” in 2002.

One key document that is created during the software validation process is the Master Validation Plan (MVP). The MVP is a comprehensive document that outlines the overall strategy and approach for validating the software. It includes details such as the scope of the validation, the validation team, and the schedule for validation activities. It is the first and foremost piece of documentation that needs to be created.

Following the MVP, you need to build three key documents: OQ, IQ and PQ.

Operational Qualification (OQ) and Installation Qualification (IQ) are used to ensure that the software system is installed and configured properly, and that it functions as intended in its intended environment.

Performance Qualification (PQ) is the process of testing a software system to verify that it performs as intended and meets the acceptance criteria defined in the Qualification Protocol (QP).

As technology and software development methodologies have evolved since the 70s, the need to adapt the validation model for modern SaaS cloud-based solutions has become increasingly important. With the advent of cloud computing, software systems are no longer installed and run on a single machine; rather, they are accessed through the internet from various devices and locations. This is the so-called “multi-tenant system”, which is a radically different paradigm from the early on-site installations. This has led to the development of new guidelines and regulations for validating cloud-based software systems, such as the FDA’s “Guidance for Industry: Cloud Computing and Mobile Medical Applications” in 2013, although one might argue that those models are still outdated given the speed of the advancement of technology and cloud services.

In conclusion, software validation is a critical process in ensuring the accuracy and reliability of data generated by computer systems in highly regulated environments. However, applying outdated validation methods will only lead to frustration and failure.

If you are about to embark on a validation process for a SaaS solution but your QA team only has experience with traditional on-site installations, do not rush. Take your time, read the available literature, get familiar with the tools and infrastructure used by your chosen vendor and, if necessary, ask for additional budget to ensure the validation is not only successful but, more importantly, relevant.

Listen to your users

Black and white image showing the text in capital letters: "WE HEAR YOU".
Photo by Jon Tyson on Unsplash

User feedback is crucial when designing and programming a SaaS solution for large, complex corporations to manage their packaging design process. Without user feedback, the solution may not meet the specific needs and demands of the users, leading to dissatisfaction and a lack of adoption.

By gathering feedback from users throughout the design and development process, the solution can be tailored to their specific needs. This ensures that the software is user-friendly and easy to navigate, making the packaging design process more efficient and streamlined. User feedback also helps identify any potential issues or bugs that may have been missed during testing.

Incorporating user feedback into the design and development process also demonstrates that the company values the input of its users and is committed to providing a high-quality solution. This can lead to increased customer satisfaction and loyalty.

Additionally, user feedback can provide valuable insights into the industry and market trends, allowing the company to stay competitive and continuously improve the solution.

There is of course a fine line between listening to your users and building a custom application for each of them.

How to prevent this from happening? These are some ideas to make it work for everyone:

  • Making sure that any feedback you receive is properly tracked and reported – this way you can link similar ideas/requests and map them to your own product development timelines
  • Translating the feedback into clear requirements (SOPs) – without clear requirements, nothing can be built. Users need to be very specific when talking about their needs so that nothing gets lost in translation.
  • It is imperative that the need applies to a majority of your user base – if what a customer wants is not what another one needs, there is little room for an implementation that will affect all your users. While some features may not be used/needed by all, building something that will only apply to a handful of users defeats the purpose of increasing the quality of your software and will deteriorate your client satisfaction.

When all these three conditions apply, customer feedback can become a great tool to make sure you are designing and programming a SaaS solution that is built to last and that users identify and are comfortable working with.

At Twona we often incorporate feedback from our users into upcoming releases. It may take some time for things to appear on your screen: we work with an agile methodology, but until proper requirements are drafted and it is confirmed that the solution will be beneficial to most users, we may not put it in the planning; or, after going through the three steps above, we may realize that the request meets only a single customer's needs and cannot move forward. In any case, we do take your input seriously and record each piece of feedback in our tracking system for discussion with the product owner and engineering teams.

So if you are one of our clients, do not hesitate to let your Success Manager know about your ideas!