Monday, December 21, 2015

Web Server and ASP.NET Application Life Cycle in Depth

Introduction

In this article, we will try to understand what happens when a user submits a request to an ASP.NET web app. There are lots of articles that explain this topic, but none that clearly shows what really happens in depth during the request. After reading this article, you will be able to understand:
• What is a Web Server
• HTTP - TCP/IP protocol
• IIS
• Web communication
• Application Manager
• Hosting environment
• Application Domain
• Application Pool
• How many app domains are created for a client request
• How many HttpApplications are created for a request and how you can affect this behaviour
• What the worker process is and how many of them run for a request
• What happens in depth between a request and a response
Start from scratch
All the articles I have read usually begin with "The user sends a request to IIS... bla bla bla". Everyone knows that IIS is a web server where we host our web applications (and much more), but what is a web server?
Let's start from the real beginning.
A Web Server (like Internet Information Server/Apache/etc.) is a piece of software that enables a website to be viewed using HTTP. We can abstract this concept by saying that a web server is a piece of software that allows resources (web pages, images, etc.) to be requested over the HTTP protocol. I am sure many of you have thought that a web server is just a special super computer, but it's just the software that runs on it that makes the difference between a normal computer and a web server.

As everyone knows, in a Web Communication, there are two main actors: the Client and the Server.
The client and the server, of course, need a connection to be able to communicate with each other, and a common set of rules to be able to understand each other. The rules they need to communicate are called protocols. Conceptually, when we speak to someone, we are using a protocol. The protocols in human communication are rules about appearance, speaking, listening, and understanding. These rules, also called protocols of conversation, represent different layers of communication. They work together to help people communicate successfully. The need for protocols also applies to computing systems. A communications protocol is a formal description of digital message formats and the rules for exchanging those messages in or between computing systems and in telecommunications.


HTTP knows all the "grammar", but it doesn't know anything about how to send a message or open a connection. That's why HTTP is built on top of TCP/IP. Below, you can see the conceptual model of the HTTP protocol sitting on top of TCP/IP:


TCP/IP is in charge of managing the connection and all the low level operations needed to deliver the message exchanged between the client and the server.
In this article, I won't explain how TCP/IP works, because I should write a whole article on it, but it's good to know it is the engine that allows the client and the server to have message exchanges.
HTTP is a connectionless protocol, but that doesn't mean the client and the server don't need to establish a connection before they start to communicate with each other. It means that the client and the server don't need any prearrangement before they start to communicate.
Connectionless means the client doesn't care if the server is ready to accept a request, and on the other hand, the server doesn't care if the client is ready to get the response, but a connection is still needed.
In connection-oriented communication, the communicating peers must first establish a logical or physical data channel or connection in a dialog preceding the exchange of user data.
Now, let's see what happens when a user puts an address into the browser address bar.
• The browser breaks the URL into three parts:
o The protocol ("HTTP")
o The server name (www.Pelusoft.co.uk)
o The file name (index.html)
• The browser communicates with a name server to translate the server name "www.Pelusoft.co.uk" into an IP address, which it uses to connect to the server machine.
• The browser then forms a connection to the server at that IP address on port 80. (We'll discuss ports later in this article.)
• Following the HTTP protocol, the browser sends a GET request to the server, asking for the file "http://www.pelusoft.co.uk/index.html". (Note that cookies may be sent from the browser to the server with the GET request -- see How Internet Cookies Work for details.) A minimal code sketch of this request/response exchange is shown just after this list.
• The server then sends the HTML text for the web page to the browser. (Cookies may also be sent from the server to the browser in the header for the page.)
• The browser reads the HTML tags and formats the page onto your screen.
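Just to make the idea of "text over TCP/IP" concrete, here is a minimal C# sketch that opens a raw TCP connection to port 80 and writes a GET request by hand, which is the kind of exchange the browser performs for you. The host and file names are just the examples used above, and error handling is omitted:

// Minimal sketch: an HTTP GET issued "by hand" over a raw TCP connection,
// to show that HTTP is just text exchanged on top of TCP/IP (port 80).
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

class RawHttpGet
{
    static void Main()
    {
        const string host = "www.pelusoft.co.uk";     // example host from the article
        using (var client = new TcpClient(host, 80))  // TCP connection on port 80
        using (NetworkStream stream = client.GetStream())
        {
            // The HTTP request is plain text terminated by an empty line.
            string request = "GET /index.html HTTP/1.1\r\n" +
                             "Host: " + host + "\r\n" +
                             "Connection: close\r\n\r\n";
            byte[] bytes = Encoding.ASCII.GetBytes(request);
            stream.Write(bytes, 0, bytes.Length);

            // The response (status line, headers, HTML body) comes back as text too.
            using (var reader = new StreamReader(stream))
            {
                Console.WriteLine(reader.ReadToEnd());
            }
        }
    }
}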


The current practice requires that the connection be established by the client prior to each request, and closed by the server after sending the response. Both clients and servers should be aware that either party may close the connection prematurely, due to user action, automated time-out, or program failure, and should handle such closing in a predictable fashion. In any case, the closing of the connection by either or both parties always terminates the current request, regardless of its status.
At this point, you should have an idea about how the HTTP and TCP/IP protocols work. Of course, there is a lot more to say, but the scope of this article is just a very high-level view of these protocols, to better understand all the steps that occur when the user starts to browse a web site.
Now it's time to go ahead, moving the focus to what happens when the web server receives the request, and how it gets the request in the first place.
As I showed earlier, a web server is a "normal computer" that is running special software that makes it a Web Server. Let's suppose that IIS runs on our web server. From a very high-level view, IIS is just a process which is listening on a particular port (usually 80). Listening means it is ready to accept connections from clients on port 80. A very important thing to remember is: IIS is not ASP.NET. This means that IIS doesn't know anything about ASP.NET; it can work by itself. We can have a web server that is hosting just HTML pages or images or any other kind of web resource. The web server, as I explained earlier, just has to return the resource the browser is asking for.
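Just to make the "listening process" idea concrete, here is a minimal, hypothetical C# sketch of a toy web server built on HttpListener. It is obviously not how IIS is implemented; it only shows that, conceptually, a web server is software waiting for HTTP requests on a port (port 8080 is used here to avoid clashing with IIS, and running it may require URL reservation permissions on Windows):

// A tiny "web server": a process listening on a port and answering HTTP requests.
using System;
using System.Net;
using System.Text;

class TinyWebServer
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/"); // the port this process listens on
        listener.Start();
        Console.WriteLine("Listening on http://localhost:8080/ ...");

        while (true)
        {
            // Blocks until a client opens a connection and sends an HTTP request.
            HttpListenerContext context = listener.GetContext();

            byte[] page = Encoding.UTF8.GetBytes(
                "<html><body>Hello from a tiny web server</body></html>");
            context.Response.ContentType = "text/html";
            context.Response.OutputStream.Write(page, 0, page.Length);
            context.Response.Close();
        }
    }
}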



ASP.NET and IIS
The web server can also support server-side scripting (such as ASP.NET). What I show in this paragraph is what happens on a server running ASP.NET, and how IIS can "talk" with the ASP.NET engine. When we install ASP.NET on a server, the installation updates the IIS script map so that ASP.NET file extensions are mapped to the corresponding ISAPI extension that will process the requests given to IIS. For example, the "aspx" extension will be mapped to aspnet_isapi.dll, and hence requests for an aspx page arriving at IIS will be handed to aspnet_isapi (the ASP.NET registration can also be done using Aspnet_regiis). The script map is shown below:


The ISAPI filter is a plug-in that can access the HTTP data stream before IIS gets to see it. Without it, IIS cannot redirect a request to the ASP.NET engine (in the case of an .aspx page). From a very high point of view, we can think of it as a router for IIS requests: every time a resource is requested whose file extension is present in the map table (the one shown earlier), it redirects the request to the right place. In the case of an .aspx page, it redirects the request to the .NET runtime that knows how to process the request. Now, let's see how it works.
When a request comes in:
• IIS starts the worker process (w3wp.exe) if it is not already running.
• The aspnet_isapi.dll is hosted in the w3wp.exe process. IIS checks the script map and routes the request to aspnet_isapi.dll.
• The request is passed to the .NET runtime, which is hosted in w3wp.exe as well.



Finally, the request gets into the runtime
This paragraph focuses on how the runtime handles the request and shows all the objects involved in the process.
First of all, let's have a look at what happens when the request gets to the runtime.
• When ASP.NET receives the first request for any resource in an application, a class named ApplicationManager creates an application domain. (Application domains provide isolation between applications for global variables, and allow each application to be unloaded separately.)
• Within an application domain, an instance of the class named HostingEnvironment is created, which provides access to information about the application, such as the name of the folder where the application is stored.
• After the application domain has been created and the HostingEnvironment object instantiated, ASP.NET creates and initializes core objects such as HttpContext, HttpRequest, and HttpResponse.
• After all core application objects have been initialized, the application is started by creating an instance of the HttpApplication class.
• If the application has a Global.asax file, ASP.NET instead creates an instance of the Global.asax class, which is derived from the HttpApplication class, and uses the derived class to represent the application (a minimal sketch of such a class is shown below).
These are the first steps that happen for a client request. Most articles don't say anything about these steps. In this article, we will analyze in depth what happens at each step.
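As a reference, a minimal Global.asax code-behind looks roughly like the following sketch: a class derived from HttpApplication whose Application_XXX methods are wired to the application events (the handler bodies here are just placeholders, and the class name Global is the conventional one generated by Visual Studio):

// Minimal sketch of a Global.asax code-behind. The class derives from HttpApplication,
// so ASP.NET uses it (rather than the plain HttpApplication) to represent the application.
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Runs once per application domain, when the first request starts the application.
    }

    protected void Application_End(object sender, EventArgs e)
    {
        // Runs when the application domain is shut down (for example, after a config change).
    }
}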
Below, you can see all the steps the request has to pass through before it is processed.


Application Manager
The first object we have to talk about is the Application Manager.
Application Manager is actually an object that sits on top of all running ASP.NET AppDomains, and can do things like shut them all down or check for idle status.
For example, when you change the configuration file of your web application, the Application Manager is in charge of restarting the AppDomain, so that all the running application instances (your web site instances) are created again and load the new configuration file you have changed.
Requests that are already in the pipeline processing will continue to run through the existing pipeline, while any new request coming in gets routed to the new AppDomain. To avoid the problem of "hung requests", ASP.NET forcefully shuts down the AppDomain after the request timeout period is up, even if requests are still pending.
Application Manager is the "manager", but the Hosting Environment contains all the "logic" to manage the application instances. It's like when you have a class that uses an interface: within the class methods, you just call the interface method. In this case, the methods are called within the Application Manager, but are executed in the Hosting Environment (let's suppose the Hosting Environment is the class that implements the interface).
At this point, you should have a question: how is it possible for the Application Manager to communicate with the Hosting Environment, since the latter lives inside an AppDomain? (We said the AppDomain creates a kind of boundary around the application to isolate the application itself.) In fact, the Hosting Environment has to inherit from the MarshalByRefObject class so that it can use Remoting to communicate with the Application Manager. The Application Manager creates a remote object (the Hosting Environment) and calls methods on it.
So we can say the Hosting Environment is the "remote interface" that is used by the Application Manager, but the code is "executed" within the Hosting Environment object.
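The same pattern can be sketched in a few lines of C#. The names below (MyHostingEnvironment, GetApplicationFolder, the "WebSite1" domain) are made up for illustration; the point is only that an object derived from MarshalByRefObject can be created in another AppDomain and called through a remoting proxy, which is essentially how the Application Manager talks to the Hosting Environment (this relies on the .NET Framework AppDomain/Remoting APIs):

using System;

// Hypothetical stand-in for the real HostingEnvironment, for illustration only.
public class MyHostingEnvironment : MarshalByRefObject
{
    public string GetApplicationFolder()
    {
        // This body executes inside the AppDomain in which the object was created.
        return AppDomain.CurrentDomain.BaseDirectory;
    }
}

class ApplicationManagerSketch
{
    static void Main()
    {
        // Create a second AppDomain, as the Application Manager does for each web application.
        AppDomain domain = AppDomain.CreateDomain("WebSite1");

        // CreateInstanceAndUnwrap returns a remoting proxy because the class derives
        // from MarshalByRefObject; method calls on it cross the AppDomain boundary.
        var env = (MyHostingEnvironment)domain.CreateInstanceAndUnwrap(
            typeof(MyHostingEnvironment).Assembly.FullName,
            typeof(MyHostingEnvironment).FullName);

        Console.WriteLine(env.GetApplicationFolder());

        AppDomain.Unload(domain);
    }
}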


HttpApplication
In the previous paragraph, I used the term "Application" a lot. HttpApplication is an instance of your web application. It's the object in charge of processing the request and returning the response that has to be sent back to the client. An HttpApplication can process only one request at a time. However, to maximize performance, HttpApplication instances might be reused for multiple requests, but each one always executes one request at a time.
This simplifies application event handling because you do not need to lock non-static members in the application class when you access them. This also allows you to store request-specific data in non-static members of the application class. For example, you can define a property in the Global.asax file and assign it a request-specific value.
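A hypothetical Global.asax illustrating the point: a non-static member holds request-specific data, and no locking is needed precisely because each HttpApplication instance processes one request at a time:

using System;
using System.Web;

public class Global : HttpApplication
{
    // Non-static: never shared between concurrent requests, so no synchronization is required.
    public string CurrentRequestId { get; private set; }

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Assign a value that belongs to the request currently being processed.
        CurrentRequestId = Guid.NewGuid().ToString();
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        // The value set at the beginning of this request is still the one visible here.
        Context.Response.AppendHeader("X-Request-Id", CurrentRequestId);
    }
}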
You can't manually create an instance of HttpApplication; the Application Manager is in charge of doing that. You can only configure the maximum number of HttpApplication instances you want the Application Manager to create. There are a bunch of attributes in machine.config that affect the Application Manager's behaviour:

<processModel enable="true|false"
              timeout="hrs:mins:secs|Infinite"
              idleTimeout="hrs:mins:secs|Infinite"
              shutdownTimeout="hrs:mins:secs|Infinite"
              requestLimit="num|Infinite"
              requestQueueLimit="num|Infinite"
              restartQueueLimit="num|Infinite"
              memoryLimit="percent"
              webGarden="true|false"
              cpuMask="num"
              userName=""
              password=""
              logLevel="All|None|Errors"
              clientConnectedCheck="hrs:mins:secs|Infinite"
              comAuthenticationLevel="Default|None|Connect|Call|Pkt|PktIntegrity|PktPrivacy"
              comImpersonationLevel="Default|Anonymous|Identify|Impersonate|Delegate"
              responseDeadlockInterval="hrs:mins:secs|Infinite"
              responseRestartDeadlockInterval="hrs:mins:secs|Infinite"
              autoConfig="true|false"
              maxWorkerThreads="num"
              maxIoThreads="num"
              minWorkerThreads="num"
              minIoThreads="num"
              serverErrorMessageFile=""
              pingFrequency="Infinite"
              pingTimeout="Infinite"
              maxAppDomains="2000" />


With minWorkerThreads and maxWorkerThreads, you set the minimum and maximum number of worker threads, which in turn limits how many HttpApplication instances can be processing requests at the same time.
For more information, have a look at: ProcessModel Element.
Just to clarify what we have said until now, for a request to a web application we have:
• A Worker Process w3wp.exe is started (if it is not running).
• An instance of ApplicationManager is created.
• An ApplicationPool is created.
• An instance of a Hosting Environment is created.
• A pool of HttpApplication instances is created (sized according to machine.config).
Until now, we talked about just one WebApplication, let's say WebSite1, under IIS. What happens if we create another application under IIS for WebSite2?
• We will have the same process explained above.
• WebSite2 will be executed within the existing Worker Process w3wp.exe (where WebSite1 is running).
• The same Application Manager instance will manage WebSite2 as well. There is always one instance per worker process w3wp.exe.
• WebSite2 will have its own AppDomain and Hosting Environment.


It's very important to note that each web application runs in a separate AppDomain, so that if one fails or does something wrong, it won't affect the other web apps, which can carry on with their work. At this point, we should have another question:
What would happen if a web application, let's say WebSite1, does something wrong that affects the Worker Process (even if that's quite difficult)?
What if I want to recycle the application domain?
To summarize what we have said, an AppPool consists of one or more worker processes. Each web application that you are running usually gets its own Application Domain. The issue is that when you assign multiple web applications to the same AppPool, even though they are separated by the Application Domain boundary, they are still in the same process (w3wp.exe). This can be less reliable/secure than using a separate AppPool for each web application. On the other hand, it can improve performance by reducing the overhead of multiple processes.
An Internet Information Services (IIS) application pool is a grouping of URLs that is routed to one or more worker processes. Because application pools define a set of web applications that share one or more worker processes, they provide a convenient way to administer a set of web sites and applications and their corresponding worker processes. Process boundaries separate each worker process; therefore, a web site or application in an application pool will not be affected by application problems in other application pools. Application pools significantly increase both the reliability and manageability of a web infrastructure.


Thursday, October 8, 2015

Change Request Management






Introduction

If project success means completing the project on time, within budget and with the originally agreed upon features and functionality, few software projects are rated successful.


Statement of Problem

In any enterprise software project, managing the changes in requirements is a very difficult task and it could become chaotic. If it is not properly managed, the consequences could be very costly to the project and it could ultimately result in the project’s failure.

Some Reasons for Failure

Poor requirements management: We forge ahead with development without user input and a clear understanding of the problems we attempt to solve.
Inadequate change management: Changes are inevitable; yet we rarely track them or understand their impact.
Poor resource allocation: Resource allocation is not re-negotiated consistently with the accepted Change Requests.

Changing Requirements

Software requirements are subject to continuous change, for both good and bad reasons. The real problem, however, is not that software requirements change during the life of a project, but that they usually change outside a framework of disciplined planning and control processes. If adequately managed, Change Requests (CRs) may represent precious opportunities to achieve better customer satisfaction and profitability. If not managed, CRs represent threats to the project's success.

Change Request Management (CRM)

CRM addresses the organizational infrastructure required to assess the cost and schedule impact of a requested change to the existing product. Change Request Management addresses the workings of a Change Review Team or Change Control Board.

Change Request

A Change Request (CR) is a formally submitted artifact that is used to track all stakeholder requests (including new features, enhancement requests, defects, changed requirements, etc.) along with related status information throughout the project lifecycle.

Change Tracking

Change Tracking describes what is done to components for what reason and at what time. It serves as history and rationale of changes. It is quite separate from assessing the impact of proposed changes as described under 'Change Request Management'.

Change or Configuration Control Board (CCB)

CCB is the board that oversees the change process consisting of representatives from all interested parties, including customers, developers, and users. In a small project, a single team member, such as the project manager or software architect, may play this role.

CCB Review Meeting

The function of this meeting is to review Submitted Change Requests. An initial review of the contents of the Change Request is done in the meeting to determine if it is a valid request. If so, then a determination is made if the change is in or out of scope for the current release(s), based on priority, schedule, resources, level-of-effort, risk, severity and any other relevant criteria as determined by the group.

Why control change across the life cycle?

“Uncontrollable change is a common source of project chaos, schedule slips and quality problems.”

Impact analysis

Impact analysis provides an accurate understanding of the implications of a proposed change, helping you make informed business decisions about which proposals to approve. The analysis examines the context of the proposed change to identify existing components that might have to be modified or discarded, identify new work products to be created, and estimate the effort associated with each task.

Traceability

Traceability provides a methodical and controlled process for managing the changes that inevitably occur during application development. Without tracing, every change would require reviewing documents on an ad-hoc basis to see if any other elements of the project need updating.

Establishing a Change Control Process

The following activities are required to establish CRM:

Establish the Change Request Process
Establish the Change Control Board
Define Change Review Notification Protocols

Wednesday, July 29, 2015

10 things to be a GooD TesteR






So, here you go. Please prepend the condition “you are good at testing when” to each point and read through:

1. You understand priorities:

A software tester unknowingly becomes a good time manager, as the first thing he needs to understand is priority. Most of the time, you are given a module/functionality to test and a timeline (which is always tight), and you need to deliver. These regular challenges make you understand how to prioritize things.
As a tester, you need to understand what should be tested and what should be given less priority, what should be automated and what should be tested manually, which task should be taken up first and what can be done at the last moment. Once you master defining priorities, software testing becomes really easy. But... but my friend, understanding priority only comes with experience, and so patience and an alert eye are the most helpful weapons.

2. You ask questions:

Asking questions is the most important part of software testing. If you fail at it, you are going to miss an important bunch of information.

Questions can be asked:
• To understand requirement
• To understand changes done
• To understand how requirement has been implemented
• To understand how the bug was fixed
• To understand bug fix effects
• To understand the product from other perspectives like development, business etc.
Questions can be beneficial for understanding the overall picture and for defining the coverage.

3. You can create lots of ideas:

When you can generate lots of ideas to test the product, you stand out from the crowd, as most of the time people feel self-satisfied after writing ordinary functional and performance test cases.
In my view, a real tester's job starts only after the ordinary test cases are written. The more you think about how the product can be used in different ways, the more ideas you will generate to test it, and ultimately you will gain confidence in the product, customer satisfaction and lifelong experience.
So, be an idea generator if you want to be good at testing.

4. You can analyze data:

Being a tester, you are not expected to do testing only. You need to understand the data collected from testing and analyze it to explain a particular behaviour of the application or product. Most of the time, we hear about non-reproducible bugs. There is no bug that is non-reproducible: if it occurred once, it is going to pop up a second time. But to get to the root cause, you need to analyze the test environment, the test data, the interruptions, etc.
Also, as we all know, when it comes to automation testing, most of the time it's about analyzing test results, because creating scripts and executing them numerous times is not a big task; analyzing the data generated after executing those scripts is the most important part.

5. You can report negative things in positive way:

A tester needs to learn tactics to deal with everyone around and needs to be good at communication. No one feels good when he/she is told that whatever they did was completely or partially wrong. But it makes a whole lot of difference when you suggest doing or rectifying something with better ideas and without an egoistic voice.
Also, details are important, so provide details about what you found wrong and how it can affect the product/application overall.
No one will refuse to rectify it.

6. You are good at reporting:

For the whole day you worked and worked, executed numerous test cases and marked them as pass/fail in the "test management tool". What should your status be at the end of the day? No one is interested in knowing how many test cases you executed. People want a short and sweet description of your whole day's work.
So from now on, write your "status report" as: what you did (at most 3 sentences), what you found (with bug numbers) and what you will do next.

7. You are flexible to support whenever it’s required:

The duty of a software tester does not end after reporting a bug. If the developer is not able to reproduce the bug, you are expected to help reproduce it, because only then will the developer be able to fix it.
Also, tight timelines for software testing make many testers careless about quality. The right approach should be proper planning and an extra effort to cover whatever is required.

8. You are able to correlate real-life scenarios with software testing:

When you are able to correlate testing with real life, it's easy. Habituate yourself to constantly think up test cases for how to test a pen, how to test a headphone, how to test a monument, and see how it helps in the near future. It will train your mind to constantly generate ideas and relate testing to practical things.

9. You are a constant learner:

Software testing is challenging because you need to learn new things constantly. It's not about gaining expertise in a specific scripting language; it's about keeping up with the latest technology, learning automation tools, learning to create ideas, learning from experience and, ultimately, constantly thriving.

10. You can wear end user’s shoes:

You are a good tester only when you can understand your customer. Customer is King and you need to understand his/her needs. If the product does not satisfy customer needs, no matter how useful it is, it is not going to work. And it is a tester’s responsibility to understand the customer.

Friday, June 26, 2015

21 CFR 11.10(k): Document Control







Organizations that use FDA regulated computer systems must have a document control system. This document control system must include provisions for document approval, revision, and storage. They must also have defined procedures to use and administer the computer system.

Text of 21 CFR 11.10(k)

Persons who use closed systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, when appropriate, the confidentiality of electronic records, and to ensure that the signer cannot readily repudiate the signed record as not genuine. Such procedures and controls shall include the following:

11.10(k) Use of appropriate controls over systems documentation including:

(1) Adequate controls over the distribution of, access to, and use of documentation for system operation and maintenance.

(2) Revision and change control procedures to maintain an audit trail that documents time-sequenced development and modification of systems documentation.

Interpretation

Document control is required for any documentation for this system, including SOPs and validation documents. This may be accomplished through a company’s existing document control procedures. There should be change control procedures that cover changes in system documentation. This may be covered by company document control procedures.

Implementation

An organization needs policies for creating compliant documentation and making changes to that documentation. All expired versions of SOPs or other compliant documentation should be retained for future regulatory review. All computer systems require a procedure describing the operation, maintenance, security, and administration for the system.

If you need more information or assistance with training on document control or assessing your document control system, please contact us to arrange consultation services.

Frequently Asked Questions

Q: Does every computer system require an operation and use procedure?
A: The use of every compliant computer system should be proceduralized. This can be documented in a system-specific SOP, or it can be documented with the associated procedure where the computer system is used.

21 CFR 11.10(j): Policies for Using Electronic Signatures


If an FDA regulated computer system uses electronic signatures, the organization must have procedures which define practices for using electronic signatures within the organization.

Text of 21 CFR 11.10(j)


Persons who use closed systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, when appropriate, the confidentiality of electronic records, and to ensure that the signer cannot readily repudiate the signed record as not genuine. Such procedures and controls shall include the following:


11.10(j) The establishment of, and adherence to, written policies that hold individuals accountable and responsible for actions initiated under their electronic signatures, in order to deter record and signature falsification.

Interpretation

There should be policies that clearly state that the electronic signing is the same as a person’s handwritten signature and that all responsibilities that apply to handwritten signatures also apply to electronic signatures.

Implementation

An organization requires a clear policy on the use of electronic signatures, including a statement signed by all employees who will use electronic signatures that they understand an electronic signature is legally equivalent to a hand-written signature.

If you need more information or assistance with training on policies for using electronic signatures or assessing policies for using electronic signatures, please contact us to arrange consultation services.

Compare this requirement with Annex 11 Section 14., Electronic Signatures.

Frequently Asked Questions

Q: Are all employees required to sign our policy on the use of electronic signatures?
A: Only employees who will use a computer system with electronic signatures are required to be trained on the use of electronic signatures.

21 CFR 11.10(i): Education, Training and Experience


Individuals who use FDA regulated computer systems should have the appropriate education, training, or experience to operate the system.

Text of 21 CFR 11.10(i)

Persons who use closed systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, when appropriate, the confidentiality of electronic records, and to ensure that the signer cannot readily repudiate the signed record as not genuine. Such procedures and controls shall include the following:


11.10(i) Determination that persons who develop, maintain, or use electronic record/electronic signature systems have the education, training, and experience to perform their assigned tasks.

Interpretation

All users (including system administrators) must be trained before they are assigned tasks in the system. All users should be appropriately trained on the process regulated by the computer system.

Implementation

Describe any training that a person should receive before they are allowed to use this system. Include relevant SOPs, STMs, etc.

If you need training on 21 CFR 11 or validation, would like assistance assessing your training systems to see if they are compliant, or need help proceduralizing your training program, please contact us to arrange consultation services.

Compare this requirement with Annex 11 Section 2., Personnel.

Frequently Asked Questions

Q: How do we document adequate training?
A: Before being granted system access, a user should be trained according to your company's procedures for training. This training should be documented and retained. In addition, an organization should retain a CV for all employees and contractors who perform GxP operations.

21 CFR 11.10(h): Input Checks


FDA regulated computer systems should have the appropriate controls in place to ensure that data inputs are valid. This verification is called an input check.

Text of 21 CFR 11.10(h)

Persons who use closed systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, when appropriate, the confidentiality of electronic records, and to ensure that the signer cannot readily repudiate the signed record as not genuine. Such procedures and controls shall include the following:
11.10(h) Use of device (e.g., terminal) checks to determine, as appropriate, the validity of the source of data input or operational instruction.
Interpretation

The system should be able to perform an input check to ensure the source of the data being input is valid. In some cases, this means a monitor should be available such that someone entering data can see what they entered. This can also mean that data is restricted to particular input devices or sources. Data should not be entered into a regulated computer system without the owner knowing the source of the data.

Implementation

Document how data is input into the system. If data is being collected from another external system, describe the connection to that source and how the system verifies the identity of the source data.

If you need more information or assistance with training on input checks or assessing your systems to see if they have adequate input checks, please contact us to arrange consultation services.

Compare this requirement with Annex 11 Section 6., Accuracy Checks.

Frequently Asked Questions

Q: Does every data field require verification before entry into the system?
A: In general, only critical fields require data verification. However, anything a program can do to restrict extraneous data entry (drop-down lists, restricted numeric ranges, date ranges, etc.) will generally improve the quality of the data. When users are allowed to enter any possible value, they will enter any and all possible unexpected values.

Q: Do I need to specifically validate that my system accepts data from a keyboard or mouse?
A: Generally speaking, we document that the use of the keyboard and mouse is tested implicitly throughout the validation and do not create a specific test case to verify input from these devices. If a system uses another data entry source, such as a bar code reader, we generally do include a test to verify that data is successfully entered into the system.

21 CFR 11.10(g): Authority Checks

FDA regulated computer systems should enforce user roles within a system. This process of verifying a user role within a system is called an authority check. For example, only a member of the QA group should be able to provide QA approval, and only a system administrator should be able to create a new user.

Text of 21 CFR 11.10(g)


Persons who use closed systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, when appropriate, the confidentiality of electronic records, and to ensure that the signer cannot readily repudiate the signed record as not genuine. Such procedures and controls shall include the following:
11.10(g) Use of authority checks to ensure that only authorized individuals can use the system, electronically sign a record, access the operation or computer system input or output device, alter a record, or perform the operation at hand.
Interpretation

The system should authorize users before allowing them to access or alter records. This may include different levels of security within the system. The number of security groups in a system will be dependent upon the complexity of the system and the amount of granularity that an organization requires for use of a computer system. For example, a laboratory instrument may have only a few user groups (Standard User, Tester, Administrator, etc.), while a large eDMS may have dozens of user groups.

Implementation

Document the levels of security within the system. Verify appropriate implementation of user-level security during the validation process.

If you need more information or assistance with training on authority checks or assessing your systems to see if they have adequate authority checks, please contact us to arrange consultation services.

Compare this requirement with Annex 11 Section 12, Security and 15., Batch Release.

Frequently Asked Questions

Q: At a minimum, how many security levels should our system have?
A: There should be a General level that allows use of the system (adding or editing records but no rights to delete records) and an Administrator level that can delete records or perform user administration tasks.

21 CFR 11.10(f): Operational System Checks

FDA regulated computer systems should have sufficient controls or operational system checks to ensure that users must follow required procedures. For example, if a computer system regulates the release of a manufactured product, the computer system should not authorize the release until the appropriate Quality approval has been provided.

Text of 21 CFR 11.10(f)

Persons who use closed systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, when appropriate, the confidentiality of electronic records, and to ensure that the signer cannot readily repudiate the signed record as not genuine. Such procedures and controls shall include the following:


11.10(f) Use of operational system checks to enforce permitted sequencing of steps and events, as appropriate.

Interpretation

The system should not allow steps to occur in the wrong order. For example, should it be necessary to create, delete, or modify records in a particular sequence, operational system checks would ensure that the proper sequence is followed. Another example would be system checks that prevent changes to a record after it has been reviewed and signed.

Implementation

Document how the computer system prevents steps from occurring in the wrong order. If it is necessary to create, delete, or modify records in a particular sequence, explain how operational system checks will ensure that the proper sequence of events is followed.

If you need more information or assistance with training on operational system checks or assessing your systems to see if they have adequate operational system checks, please contact us to arrange consultation services.

Frequently Asked Questions

Q: Can you provide some examples of operational system checks?
A: An operational system check is any system control that enforces a particular workflow. For example, when approving a batch release, a system might require an electronic signature from manufacturing and quality control before the batch status can be changed to released. Another system may have a requirement that once an electronic signature is attached to a record, the record can no longer be modified. In this case, applying the electronic signature would trigger a control locking the record from future edits until the electronic signature is removed.

21 CFR 11.10(e): Audit Trails





FDA regulated computer systems must maintain secure, computer-generated, time-stamped audit trails that record operator entries and actions that create, modify, or delete electronic records.

Text of 21 CFR 11.10(e)

Persons who use closed systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, when appropriate, the confidentiality of electronic records, and to ensure that the signer cannot readily repudiate the signed record as not genuine. Such procedures and controls shall include the following:
11.10(e) Use of secure, computer-generated, time-stamped audit trails to independently record the date and time of operator entries and actions that create, modify, or delete electronic records. Record changes shall not obscure previously recorded information. Such audit trail documentation shall be retained for a period at least as long as that required for the subject electronic records and shall be available for agency review and copying.
Interpretation

Audit trails are required for all systems that record GxP data. Audit trails should be generated independently of the operator and include the local date and time of the actions that alter the record. They cannot overwrite the old data, and they must be stored as long as the record itself is stored.

Implementation

Audit trails are required. They should be generated independently of the operator and include the local date and time of the actions that alter the record. In general, no system user, including system administrators, should have the ability to modify the audit trail. Audit trails cannot overwrite the older data, including other audit trail records, and they must be stored as long as the record itself is stored. In addition, certain functions, like applying or removing an electronic signature to a record should be tracked in the audit trail.

If you need more information or assistance with training on audit trails or assessing your systems, please contact us to arrange consultation services. We also offer software that provides MS Excel and MS Access with a 21 CFR 11 compliant audit trail.

Compare this requirement with Annex 11 Section 9., Audit Trails.

Frequently Asked Questions

Q: What is an audit trail required to track?
A: The text of the regulation states the audit trail must record “operator entries and actions that create, modify, or delete electronic records.” This means that all user data entry, edits, or deletions that modify program data should be tracked in the audit trail.

Q: How do you provide Excel Spreadsheets or Access databases with an audit trail?
A: Ofni Systems has created tools to make several common programs compliant with 21 CFR 11 and Annex 11. ExcelSafe provides Excel spreadsheets with a 21 CFR 11 compliant audit trail. Similarly, the Part 11 Toolkit provides Access databases with an audit trail.

Q: What is the distinction between an event log and an audit trail? Does an event log meet this requirement?
A: Event logs usually describe a table within a computer system designed to record important system functionality, such as when a user logs into a system or when a system error occurs. An audit trail records changes to system data. Unless the event log is recording operator-initiated changes to system data, an event log does not meet the requirements of 21 CFR 11.10(e).

21 CFR 11.10(d): Limited System Access






FDA regulated computer systems must have controls in place to ensure that only authorized users can operate the system; in practice, this means that FDA regulated computer systems are expected to require a password.

Text of 21 CFR 11.10(d)


Persons who use closed systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, when appropriate, the confidentiality of electronic records, and to ensure that the signer cannot readily repudiate the signed record as not genuine. Such procedures and controls shall include the following:
11.10(d) Limited system access to authorized individuals.
Interpretation

Security is required for electronic records and/or the systems that generate or access these records. The system and records generated by or contained within the system should only be available to authorized individuals.

Implementation

Security is usually a combination of physical controls, such as locks on doors which prevent unauthorized personnel from accessing restricted areas of the facility, and logical controls, such as program passwords which require that users log-in before accessing system functionality. Two primary tools for enforcing limited system access are user passwords to access a system and program time-outs to put the system into a locked state when the program is not used for an extended period of time. Document and test (this is usually done as part of system validation) who can access the system and the security that prevents others from gaining access to the system or records.

If you need more information or assistance with training on limited system access, assessing your systems, or writing SOPs on limited system access, please contact us.

Compare this requirement with Annex 11 Section 12., Security.

Frequently Asked Questions

Q: Do all validated computer systems require passwords?
A: User-specific passwords are usually considered superior to other controls, such as a general password to access a system, or procedural controls, when enforcing compliance. However, if technological solutions are not possible, procedural controls can be considered acceptable, provided that they provide a similar level of system control.

Q: Can you help me make my Access database or Excel spreadsheet compliant with this regulation?
A: The Part 11 Toolkit provides Access databases with all of the technological tools required for compliance with 21 CFR 11, including password protection. ExcelSafe provides similar technological tools to Excel spreadsheets.

21 CFR 11.10(c): Protection of Records






There must be procedures in place to ensure that data in FDA regulated computer systems is retained throughout the required lifetime of the data.

Text of 21 CFR 11.10(c)


Persons who use closed systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, when appropriate, the confidentiality of electronic records, and to ensure that the signer cannot readily repudiate the signed record as not genuine. Such procedures and controls shall include the following:
11.10(c) Protection of records to enable their accurate and ready retrieval throughout the records retention period.
Interpretation

Data stored within computer systems should be protected throughout the full record retention period. During this period, electronic records should be able to be accessed or retrieved within a reasonable period of time. Organizations need to plan for such common contingencies as hard drive or server failure.

Implementation

Create, implement, and follow procedures of Data Backup and Recovery, Data Archiving, and Disaster Recovery/Business Continuity.

If you need more information or assistance with training on protection of records, assessing your systems or writing SOPs on protection of records, please contact us to arrange consultation services.

Compare this requirement with Annex 11 Section 7., Data Storage, Section 16., Business Continuity, and Section 17., Archiving.

Frequently Asked Questions

Q: How long must data be protected?
A: It depends on the type of data. One of the central points of 21 CFR 11 is that electronic records must be treated identically to paper records; therefore, electronic records must be retained for the same length of time as paper. For example, most clinical data must be retained for at least two years beyond the final disposition of the research drug. Most manufacturing records must be retained for up to seven years beyond the expiration of the manufactured product. Organizations that use computer systems must be prepared to retain their electronic data for years into the future.

Q: What is the distinction between Data Backup, Data Recovery, Data Archiving, and Disaster Recovery?
A: Data backup is the process of ensuring that computer system data is routinely saved to a secondary location. Data recovery is the process of restoring a file from this backup location to general use. Data archiving is the process of removing older or less utilized data from a computer system in order to improve system performance. Disaster recovery is the process of recreating a computer system in the event of a serious system failure.

21 CFR 11.10(b): Accurate Generation of Records






Computer systems used in FDA regulated environments must be able to accurately reproduce all system data in electronic and human readable forms.

Text of 21 CFR 11.10(b)

Persons who use closed systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, when appropriate, the confidentiality of electronic records, and to ensure that the signer cannot readily repudiate the signed record as not genuine. Such procedures and controls shall include the following:
11.10(b) The ability to generate accurate and complete copies of records in both human readable and electronic form suitable for inspection, review, and copying by the agency. Persons should contact the agency if there are any questions regarding the ability of the agency to perform such review and copying of the electronic records.
Interpretation

All computer systems need the ability to generate or export accurate and complete copies of records stored within them. Computer systems must be able to provide both electronic copies (export to file capabilities) as well as paper copies or printouts. If a computer system is able to export data to a file, it is assumed that the file may be printed. Audit trail information and any associated electronic signature information must also be available.

Implementation

Verify that your computer system does not allow data to be modified without being tracked in the audit trail. Verify that data with regulatory impact can be retrieved from the computer, and that this data can be printed and exported to some electronic format. This is usually documented as part of the validation process.

Compare this requirement with Annex 11 Section 8., Printouts

Contact Ofni Systems if you need assistance with assessing your computer systems compliance status or implementing reporting functionality.

Frequently Asked Questions

Q: What file formats are acceptable for file outputs?
A: The FDA does not have a list of acceptable file formats, but files provided to an FDA regulator should be able to be read by that regulator. Ofni Systems recommends using common file formats, such as TXT, DOC, XLS, etc. wherever technologically possible. Converting files to a PDF format is a good technique to ensure that provided data is not accidentally altered.


21 CFR 11.10(a): Validation of Systems






Organizations who use computer systems in FDA regulated environments must document the operations the system performs, the system configuration required to operate correctly, and the testing that demonstrates that the system operates according to the defined specifications. This process is called validation.

Text of 21 CFR 11.10(a)

Persons who use closed systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, when appropriate, the confidentiality of electronic records, and to ensure that the signer cannot readily repudiate the signed record as not genuine. Such procedures and controls shall include the following:
11.10(a) Validation of systems to ensure accuracy, reliability, consistent intended performance, and the ability to discern invalid or altered records.
Interpretation

Validation is required for computer systems. This may include a Master or Project specific Validation Plan (VP), a Functional Requirements Specification (FRS), a System Design Specification (SDS), Test Protocols and Validation Summary Report (SR). Validation should include testing that demonstrates that the system can identify any changes made to a record.

Compare this requirement with Annex 11 Principle and Section 4., Validation

Implementation

Validate the computer system, following your organization's defined procedures for validation.

More information is available about computer system validation.

Frequently Asked Questions

Q: Which computer systems require validation?
A: If a computer system is used to provide information to a regulatory body (such as the FDA), or to meet requirements of the regulatory body, the system must be validated.

Q: I have a simple spreadsheet. Does it have to be validated?
A: 21 CFR 11 makes no distinction between types of computer systems; all computerized systems used to meet regulatory requirements should be validated.

Introduction to 21 CFR Part 11

Q: What are the requirements of 21 CFR 11?
A: 21 CFR 11 requires that closed computer systems must have a collection of technological and procedural controls to protect data within the system. Open computer systems must also include controls to ensure that all records are authentic, incorruptible, and (where applicable) confidential.

Q: What computer systems must be compliant with 21 CFR 11?
A: All computer systems which store data which is used to make Quality decisions or data which will be reported to the FDA must be compliant with 21 CFR 11. In laboratory situations, this includes any laboratory results used to determine quality, safety, strength, efficacy, or purity. In clinical environments, this includes all data to be reported as part of the clinical trial used to determine quality, safety, or efficacy. In manufacturing environments, this includes all decisions related to product release and product quality.

Q: What is computer system validation?
A: Validation is a systematic documentation of system requirements, combined with documented testing, demonstrating that the computer system meets the documented requirements. It is the first requirement identified in 21 CFR 11 for compliance. Validation requires that the System Owner maintain the collection of validation documents, including Requirement Specifications and Testing Protocols.
More information about requirements for computer system validation

Q: What is accurate record generation?
A: Accurate record generation means that records entered into the system must be completely retrievable without unexpected alteration or unrecorded changes. This is generally tested by verifying that records entered into the system must be accurately displayed and accurately exported from the system.
More information about requirements for accurate record generation

Q: How must records be protected?
A: Electronic records must not be corrupted and must be readily accessible throughout the record retention period. This is usually performed through a combination of technological and procedural controls.
More information about requirements for protection of records

Q: What is limited system access?
A: System owners must demonstrate that they know who is accessing and altering their system data. When controlled technologically, this is commonly demonstrated by requiring all users have unique user IDs along with passwords to enter the system.
More information about requirements for limited system access

Q: What is an audit trail?
A: An audit trail is an internal log in a program that records all changes to system data. This is tested by demonstrating that all changes made to data are recorded to the audit trail.
More information about audit trails

Q: What are operational system checks?
A: Operational system checks enforce sequencing of critical system functionality. This is demonstrated by showing that business-defined workflows must be followed. For example, data must be entered before it can be reviewed.
More information about operational system checks

Q: What are device checks?
A: Device checks are tests to ensure the validity of data inputs and operational instructions. Generally speaking, Ofni Systems does not suggest testing keyboards, mice, etc., because these input devices are implicitly tested throughout other testing. However, if particular input devices are used (optical scanners, laboratory equipment, etc.), these devices should be tested to ensure the accuracy of system inputs.
More information about input and device checks

Q: What training requirements are required for 21 CFR 11 compliant programs?
A: Users must be documented to have the education, training, and experience to use the computer system. Typically, training can be covered by your company's training procedures.
More information about education, training, and experience required for 21 CFR 11

Q: What is a policy of responsibility for using electronic signatures?
A: Users must state that they are aware that they are responsible for all data they enter or edit in a system. This can be accomplished technologically through accepting conditions upon signing into the system or procedurally by documenting this responsibility as part of training.
More information about policies for using electronic signatures

Q: What documentation requirements are required for 21 CFR 11 compliant programs?
A: Documentation must exist which defines system operations and maintenance. Typically these requirements are met by company document control procedures.
More information about document control systems

Q: What are the requirements for electronic signatures?
A: All electronic signatures must:
» Include the printed name of the signer, the date/time the signature was applied, and the meaning of the electronic signature.
» Be included in any human readable form of the record; electronic signatures must not be separable from their record.
» Be unique to a single user and not used by anyone else.
» Either use biometrics to uniquely identify the user or, if biometrics are not used, employ at least two distinct identifiers (for example, the user ID and a secret password).

Q: Does 21 CFR 11 have any requirements for passwords or identification codes?
A: Yes. Procedural controls should exist to ensure that:
» No two individuals have the same user ID and password.
» Passwords are periodically checked and expire.
» Loss management procedures exist to deauthorize lost, stolen, or missing passwords.


Glossary
» Closed Systems are computer systems where system access is controlled by people who are responsible for the content of electronic records in the system. Most applications are considered to be closed systems.
» Open Systems are computer systems where system access is not controlled by people responsible for the content of electronic records in the system. The internet or wikis are examples of open systems.
» Procedural Controls are documented SOPs which ensure that a system is only used in a particular manner.
» Technological Controls are program-enforced compliance rules, like requiring that a user have a password to log into a computer system. Technological controls are generally considered to be more secure than procedural controls.
» Biometrics are means of identifying a person based on physical characteristics or repeatable actions. Some examples of biometrics include identifying a user based on a physical signature, fingerprints, etc.

SQL Queries FAQ


Q. How do you select all records from a table?
A. SELECT * FROM table_name;

Q. What is a join?
A. A join is the process of retrieving related pieces of data from different sets (tables) and returning them to the user or program as one combined, "joined" collection of data.
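For example, a simple inner join (the Students and Departments tables and their columns are hypothetical):
SELECT s.FirstName, s.LastName, d.DeptName
FROM Students s
JOIN Departments d ON s.DeptID = d.DeptID;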

Q. How do you add a record to a table?
A. INSERT INTO table_name VALUES ('ALEX', 33, 'M');

Q. How do you add a column to a table?
A. ALTER TABLE Department ADD (AGE NUMBER);

Q. How do you change the value of a field?
A. UPDATE EMP_table SET number = 200 WHERE item_number = 'CD';
UPDATE name_table SET status = 'enable' WHERE phone = '4161112222';
UPDATE SERVICE_table SET REQUEST_DATE = TO_DATE('2006-03-04 09:29', 'yyyy-mm-dd hh24:mi') WHERE phone = '4161112222';

Q. What does COMMIT do?
A. COMMIT saves (makes permanent) all changes made by DML statements in the current transaction.

Q. What is a primary key?
A. The primary key is a column (or combination of columns) whose values are completely unique throughout the table, so that each row can be uniquely identified.

Q. What are foreign keys?
A. A foreign key is a field (or set of fields) in one table that links to another table's primary key (or another unique key).
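As an illustration, a hypothetical pair of tables in which Employees.DeptID is a foreign key referencing the Departments primary key:
CREATE TABLE Departments (
    DeptID   NUMBER PRIMARY KEY,
    DeptName VARCHAR2(50)
);
CREATE TABLE Employees (
    EmpID  NUMBER PRIMARY KEY,
    Name   VARCHAR2(50),
    DeptID NUMBER REFERENCES Departments(DeptID)
);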

Q. What is the main role of a primary key in a table?
A. The main role of a primary key in a data table is to maintain the internal integrity of a data table.

Q. Can a table have more than one foreign key defined?
A table can have any number of foreign keys defined. It can have only one primary key defined.

Q. List all the possible values that can be stored in a BOOLEAN data field.
A. A BOOLEAN data field can store only two values: true and false (some systems represent these as -1 or 1 for true and 0 for false); depending on the database, the field may also be allowed to hold NULL.

Q. What is the highest value that can be stored in a BYTE data field?
A. The highest value that can be stored in an unsigned BYTE field is 255 (a signed byte ranges from -128 to 127). A byte is a set of bits, usually 8, that represents a single character; an 8-bit value can range from 0 to 255 (00000000 to 11111111 in binary).

Q. Describe how NULLs work in SQL.
A. NULL is how SQL represents a missing or unknown value. Any arithmetic operation involving a NULL returns NULL, and NULLs are tested with IS NULL / IS NOT NULL rather than with '='.
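A quick illustration (the Employees table and its Bonus column are hypothetical):
SELECT Salary + Bonus FROM Employees;          -- returns NULL for rows where Bonus is NULL
SELECT * FROM Employees WHERE Bonus IS NULL;   -- NULLs are tested with IS NULL, not '='
SELECT Salary + NVL(Bonus, 0) FROM Employees;  -- NVL (or COALESCE) substitutes a default value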

Q. What is Normalization?
A. Normalization is the process of organizing table design so that data redundancy is minimized and update anomalies are avoided.

Q. What is a Trigger?
A. A trigger executes a block of procedural code against the database when a table event occurs. In other words, a trigger defines a set of actions that are performed automatically in response to an insert, update, or delete operation on a specified table; when such an SQL operation is executed, the trigger is said to have been activated (fired).
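A minimal sketch of an Oracle-style trigger (the Employees and Salary_Audit tables are hypothetical) that fires after every salary update:
CREATE OR REPLACE TRIGGER trg_salary_audit
AFTER UPDATE OF Salary ON Employees
FOR EACH ROW
BEGIN
    -- record the old and new values along with the change date
    INSERT INTO Salary_Audit (EmpID, OldSalary, NewSalary, ChangedOn)
    VALUES (:OLD.EmpID, :OLD.Salary, :NEW.Salary, SYSDATE);
END;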

Q. Can one select a random collection of rows from a table?
A. Yes, in Oracle, by using the SAMPLE clause. Example:
SELECT * FROM EMPLOYEES SAMPLE(10);
Approximately 10% of the rows, selected randomly, will be returned.

Q. You issue the following query:
SELECT FirstName FROM StaffList WHERE FirstName LIKE '_A%'
Which names would be returned by this query? Choose all that apply.
Allen
CLARK
JACKSON
David
A. JACKSON (and David, if the comparison is case-insensitive). The pattern '_A%' matches any value whose second character is 'A'; Allen and CLARK do not qualify.

Q. Write a SQL SELECT query that returns each city only once from the Students table. Do you need to order this list with an ORDER BY clause?
A. SELECT DISTINCT City FROM Students;
DISTINCT removes duplicates but does not guarantee any particular order, so add ORDER BY City if the list must be sorted.

Q. What are DML and DDL?
A. DML and DDL are subsets of SQL. DML stands for Data Manipulation Language and DDL for Data Definition Language; a short example of each follows the list below.
DML consists of INSERT, UPDATE and DELETE (SELECT is often grouped with DML as well).
DDL commands include:
CREATE TABLE, ALTER TABLE, DROP TABLE, RENAME TABLE, CREATE INDEX, ALTER INDEX, DROP INDEX,
CREATE/ALTER/DROP VIEW
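For illustration, a minimal sketch (the Department table and its columns are hypothetical):
-- DDL: defines the structure
CREATE TABLE Department (DeptID NUMBER, DeptName VARCHAR2(50));
-- DML: manipulates the data held in that structure
INSERT INTO Department VALUES (10, 'QA');
UPDATE Department SET DeptName = 'Quality Assurance' WHERE DeptID = 10;
DELETE FROM Department WHERE DeptID = 10;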

Q. Write SQL SELECT query that returns the first and last name of each instructor, the Salary, and gives each of them a number.
A. SELECT FirstName, LastName, Salary, ROWNUM FROM Instructors;

Q. Must the WHERE clause always appear before the GROUP BY clause in a SQL SELECT?
A. Yes. The proper order for SQL SELECT clauses is: SELECT, FROM, WHERE, GROUP BY, HAVING, ORDER BY. Only the SELECT and FROM clauses are mandatory.
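For illustration, a query that uses every clause in that order (the Employees table and its columns are hypothetical):
SELECT DeptID, AVG(Salary)
FROM Employees
WHERE Status = 'ACTIVE'
GROUP BY DeptID
HAVING AVG(Salary) > 50000
ORDER BY DeptID;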

Q. Which of the following statements are Data Manipulation Language commands?
A. INSERT
B. UPDATE
C. GRANT
D. TRUNCATE
E. CREATE
Ans. A and B. The INSERT and UPDATE statements are Data Manipulation Language (DML) commands. GRANT is a Data Control Language (DCL) command. TRUNCATE and CREATE are Data Definition Language (DDL) commands.

Q. Describe SQL comments.
A. SQL comments are introduced by two consecutive hyphens (--) and ended by the end of the line.
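For example (both comment styles shown; the Students table is hypothetical):
SELECT * FROM Students;  -- this comment runs to the end of the line
/* Most dialects also accept bracketed, multi-line comments like this one. */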

Q. What is the difference between the TRUNCATE, DELETE and DROP commands?
A. The DELETE command is used to remove some or all rows from a table; the operation can be rolled back.
TRUNCATE removes ALL rows from a table; the operation cannot be rolled back.
The DROP command removes the table from the database; all of the table's rows, indexes and privileges are removed as well.
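For illustration (the Department table is hypothetical):
DELETE FROM Department WHERE DeptID = 10;  -- removes matching rows; can be rolled back
TRUNCATE TABLE Department;                 -- removes all rows; cannot be rolled back
DROP TABLE Department;                     -- removes the table definition itself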

Test Case

What is a test case?

The IEEE definition of a test case is “Documentation specifying inputs, predicted results, and a set of execution conditions for a test item.” The aim is to divide the software's functionality into small units of function, each testable with a set of inputs and producing a measurable result.

So, basically, a test case is a feature/function description that should be executed with a range of inputs, given certain preconditions, with the outcome measured against the expected result.

By the way, there is a common misconception relating to test cases, test scripts, and test suites. Many people use the terms interchangeably, and that is a mistake. In short, a test script (or test suite) is a compilation of multiple test cases.

The test cases provide important information to the client regarding the quality of their product. The approach to test case writing should be such as to facilitate the collection of this information.
1. Which features have been tested/will be tested eventually?
2. How many user scenarios/use cases have been executed?
3. How many features are stable?
4. Which features need more work?
5. Are sufficient input combinations exercised?
6. Does the app give out correct error messages if the user does not use it the way it was intended to be used?
7. Does the app respond to the various browser-specific functions as it should?
8. Does the UI conform to the specifications?
9. Are the features traceable to the requirement spec? Have all of them been covered?
10. Are the user scenarios traceable to the use case document? Have all of them been covered?
11. Can these tests be used as an input to automation?
12. Are the tests good enough? Are they finding defects?
13. Is the software ready to ship? Is testing enough?
14. What is the quality of the application?

Approach to test case writing

The approach to organizing test cases will determine the extent to which they are effective in finding defects and providing the information required from them. Various approaches have been listed by Cem Kaner in his paper at http://www.kaner.com/pdfs/GoodTest.pdf
• Function: Test each function/feature in isolation
• Domain: Test by partitioning different sets of values
• Specification based: Test against published specifications
• Risk based: Imagine a way in which a program could fail and then design tests to check whether the program will actually fail.
• User: Tests done by users.
• Scenario/use case based: Based on actors/users and a set of actions they are likely to perform in real life.
• Exploratory: The tester actively controls the design of tests as those tests are performed and uses information gained while testing to design new and better tests.

Since the goal should be to maximize the extent to which the application is exercised, a combination of two or more of these works well. Exploratory testing in combination with any of these approaches will give the focus needed to find defects creatively.

Pure exploratory testing provides a rather creative option to traditional test case writing, but is a topic of separate discussion.

Test case writing procedure

◦ Description: Explain the function under test. Clearly state exactly what attribute is under test and under what condition.
◦ Prerequisites: Every test needs to follow a sequence of actions which lead to the function under test. It could be a certain page that a user needs to be on, certain data that should be in the system (like registration data in order to log in to the system), or a certain action. State this precondition clearly in the test case. This helps to define specific steps for manual testing, and more so for automated testing, where the system needs to be in a particular base state for the function to be tested.
◦ Steps: The sequence of steps to execute the specific function.
◦ Input: Specify the data used for a particular test or, if it is a lot of data, point to a file where this data is stored.
◦ Expected result: Clearly state the expected outcome in terms of the page/screen that should appear after the test, changes that should happen to other pages, and, if possible, changes that should happen to the database.

Thursday, June 25, 2015

Computer system Validation(CSV)



Computer system validation (CSV) is the process of establishing documented evidence that a computerized system will consistently perform as intended in its operational environment. CSV is of particular importance to industries requiring high-integrity systems that must remain compliant with current regulations (EU, FDA) under all circumstances.


CSV requires the adoption of rigid standards for verification and validation of deliverables throughout the software life cycle. It requires a rigorous test methodology with test specifications that are traceable to system requirements.

Software validation is not only employed on new development projects. Today, more and more systems are built using commercial off-the-shelf software products, software components from other vendors or earlier versions of the same system. The major objective of software validation is to ensure that the resulting system performs its functions correctly without unintended side effects, and that it meets its safety, security, reliability and auditing requirements.

One issue that often arises in planning a CSV effort is how to ensure the objectivity of the staff performing the validation tasks. Independent validation and verification (IV&V) addresses this issue. IV&V is the performance of computer system validation tasks by a team that is separate from the software development group.

The starting point of an effective CSV project is a risk assessment (RA). To manage the risks, assessments are executed at several points during the software life cycle.

21 CFR Part 11 FAQ


Can a vendor guarantee compliant software for Part 11?
It is not possible for any vendor to offer a turnkey 'Part 11 compliant system'. Any vendor who makes such a claim is incorrect. Part 11 requires both procedural controls (e.g. notification, training, SOPs, administration) and administrative controls to be put in place by the user in addition to the technical controls that the vendor can offer. At best, the vendor can offer an application containing the required technical elements of a compliant system.

Does Part 11 apply to electronic systems that can print records but do not have a durable storage media (i.e. flash memory or memory buffer, etc.)?
The issue is really not so much the storage media; it is whether the operator can manipulate the data before it is printed. The real problem is that most of this equipment does not have the functions required by Part 11.

What is the definition of hybrid system? Could you give an example of one?
A 'Hybrid System' is defined as an environment consisting of both electronic and paper-based records (frequently characterized by handwritten signatures executed on paper). A very common example of a hybrid system is one in which the system user generates an electronic record using a computer-based system (e-batch records, analytical instruments, etc.) and is then required to sign that record as per the Predicate Rules (GLP, GMP, GCP). However, the system does not have an electronic signature option, so the user has to print out the report and sign the paper copy. Now he has an electronic record and a paper/handwritten signature. The 'system' has an electronic and a paper component, hence the term, hybrid.

If using a 'hybrid system' approach to e-signatures, how do you link the handwritten signature to the e-record?
Since Part 11 does not require that electronic records be signed using electronic signatures, e-records may be signed with handwritten signatures that are applied to electronic records or handwritten signatures that are applied to a piece of paper. If the handwritten signature is applied to a piece of paper, it must link to the electronic record. The FDA will publish guidance on how to achieve this link in the future, but for now it is suggested that you include in the paper as much information as possible to accurately identify the unique electronic record (e.g., at least the file name, size in bytes, creation date, and a hash or checksum value). However, the master record is still the electronic record. Thus, signing a printout of an electronic record does not exempt the electronic record from Part 11 compliance.

What are some examples of audio data that may be captured in the Pharmaceutical Industry? Specific Examples?

Audio recordings of regulated patient information or experimental observations are infrequent, but sometimes acquired. Also, audio conferences discussing projects, reports, data are common in the pharma industry. If the data therein is required to be maintained by predicate rules, and the audio file is saved to durable media, Part 11 would apply.

I keep electronic records but have signatures on paper (hybrid systems). Is there a deadline for converting to electronic signatures?
No. There is no deadline for converting to electronic signatures. Having handwritten signatures on paper is acceptable if the signatures are linked to the electronic records so that signers cannot repudiate the records.

When does an audit trail begin?

Audit Trail initiation requirements differ for data vs. textual materials. For data: If you are generating, retaining, importing or exporting any electronic data, the Audit Trail begins from the instant the data hits the durable media. For textual documents: if the document is subject to approval and review, the Audit Trail begins upon approval and release of the document.

Should execution of a signature be audit trailed?
Yes, execution of a signature must be audit trailed.

Are e-mails controlled documents?
If the text in an email supports such activities as change control approvals or failure investigations, then the e-mails have to be managed in a compliant way.

Can a single restricted login suffice as an electronic signature?
No. The operator has to indicate intent when signing something, and he has to re-enter the user ID/password (showing awareness that he is executing a signature) and give the meaning for the e-sig. To support this, Part 11 §11.50 states that signed e-records shall contain information associated with the signing that indicates the printed name of the signer, the date/time, and the meaning, and that these items shall be included in any human readable form of the record.

When are e-signatures required?
The predicate rules mandate when a regulated document needs to be signed.

Should a company individually certify that every associate's electronic signature is legally binding?
No. The required one-time e-sig certification is for an organization as a whole. Its intent is to certify that a company recognizes that its e-signatures are equivalent to their hand-written signatures.

FDA has issued a new guideline on date and time stamps. Is it correct that the time stamp is not required to be local?
You are correct. The just-released draft Guidance Document on Time Stamps for E-Records and E-Sigs can be found on the FDA web site (www.fda.gov).

The Agency has reconsidered their position on local date and time stamp requirements. The draft guidance document reflects their current thinking, and supersedes the position in comment #101 of the Rule with respect to the time zone that should be recorded. The document states, "You should implement time stamps with a clear understanding of what time zone reference you use. Systems documentation should explain time zone references as well as zone acronyms or other naming conventions."

Does outsourcing of a computer make a system an open system? Additionally would the external access of an external vendor for maintenance work (e.g. using a modem) to a computer system make that an open system?
According to the Rule, the definition of closed system is "an environment in which system access is controlled by persons who are responsible for the content of electronic records that are on the system.'' The agency agrees that the most important factor in classifying a system as closed or open is whether the persons responsible for the content of the electronic records control access to the system containing those records. A system is closed if persons responsible for the content of the records control access. If those persons do not control such access, then the system is open because the records may be read, modified, or compromised by others to the possible detriment of the persons responsible for record content. Hence, those responsible for the records would need to take appropriate additional measures in an open system to protect those records from being read, modified, destroyed, or otherwise compromised by unauthorized and potentially unknown parties.

What do you mean by linking e-records to e-signatures?
Part 11 Sec. 11.70 states that electronic signatures and handwritten signatures executed to electronic records must be linked (i.e. verifiably bound) to their respective records to ensure that signatures could not be excised, copied, or otherwise transferred to falsify another electronic record. The agency does not, however, intend to mandate use of any particular 'linking' technology. FDA recognizes that, because it is relatively easy to copy an electronic signature to another electronic record and thus compromise or falsify that record, a technology-based link is necessary. The agency does not believe that procedural or administrative controls alone are sufficient to ensure that objective because such controls could be more easily circumvented than a straightforward technology based approach.

Can you share a sample FDA Warning Letter, or is that proprietary information?
The FDA Warning Letters can be found on the FDA web site at http://www.fda.gov/foi/warning.htm. The letters are considered public information.

What is 'grand fathering'?
"Grand fathering" simply means the possibility that the rule may not apply to any system in existence before the rule came into effect. Part 11 does not allow for grandfathering of legacy systems. Therefore, systems installed before August 20, 1997 must be made compliant or replaced.

What is GxP?
This refers to the "Good Practices" whose rulings are observed within the pharmaceutical industry. These are Good Laboratory Practice (GLP), Good Automated Manufacturing Practice (GAMP), Good Manufacturing Practice (GMP) and Good Clinical Practice (GCP). The 'x' is merely a placeholder.

What is a 'Predicate Rule'?
Any requirements set forth in the Act (Federal Food, Drug and Cosmetic Act), the PHS Act (Public Health Service Act), or any FDA regulation (GxP: GLP, GMP, GCP, etc.). The predicate rules mandate what records must be maintained; the content of records; whether signatures are required; how long records must be maintained, etc. If there is no FDA requirement that a particular record be created or retained, then 21 CFR Part 11 most likely does not apply to the record.

Are HIPAA regulations considered a predicate rule with regard to medical records maintained electronically?
Generally no. Predicate rules are requirements set forth in the Act, the PHS Act, or FDA regulations; HIPAA is not an FDA regulation, so it is not normally considered a Part 11 predicate rule, although its requirements apply independently.
How can you make sure that e-records are still readable throughout the retention period (with a focus on the formats)? Currently, mostly proprietary formats are in use (e.g. in the lab area), and it may be difficult to read these formats in a few years (especially if the vendor is changed). Printing or converting into PDF or a similar format is only a partial solution. What would/could be a long-term solution here?
There are several possible solutions being considered for long-term data re-processability. They include data migration, data emulation and system 'time capsules'. As of today, there are no set standards or widely accepted procedures to ensure long-term data viability.

What is 'metadata'?
Literally, it can be defined as 'data about data'. In practical terms, the types of metadata that can be associated with an electronic record may include: details of the record's creation, author, creation date, ownership, searchable keywords that can be used to classify the document, details of the type of data found in the document, and the relationships between different data components. Metadata must be stored as an integral part of the electronic document it describes.

If you use Electronic Signatures, do you have to comply with Electronic Record Requirements?
Use of Electronic Signatures implies that your system is an Electronic Record system and, therefore, must be in compliance with all provisions of 21 CFR Part 11.

Do you have a format or example for the certification for e-signatures that a company can send to the FDA?
For the exact wording for the e-sig certification, please consult the FDA website at www.fda.gov. One can also find wording for the certification in the preamble of the final Rule. The response to comment #120 is "…The final rule instructs persons to send certifications to FDA's Office of Regional Operations (HFC-100), 5600 Fishers Lane, Rockville, MD 20857. Persons outside the United States may send their certifications to the same office. The agency offers, as guidance, an example of an acceptable Sec. 11.100(c) certification: Pursuant to Section 11.100 of Title 21 of the Code of Federal Regulations, this is to certify that [name of organization] intends that all electronic signatures executed by our employees, agents, or representatives, located anywhere in the world, are the legally binding equivalent of traditional handwritten signatures."

Which kind of media (CD Roms, WORMs, etc.) can be considered "21CFRPart11 compliant" from point of view of good retention period?
In an effort to remain technologically neutral, the FDA does not specify the kind of media that one must use for archiving. There are studies currently underway from independent sources that are trying to test the 'lifetime' of such media as CD ROM, although there is no set standard lifetime for such media. Some companies are doing their own tests on media lifetime.

How do you recommend handling CROs and vendors on a timely basis?
The data that a CRO generates is ultimately the responsibility of the company that hires the CRO to do the research. That company must stay on top of the CRO, its record-keeping practices and its adherence to GxP. If a CRO is sending results back to the study sponsor, a compliant, secure, closed system is best to use. Just as with vendors, it is wise to audit the CROs and the vendors to make sure that they are up to date on their Part 11 and GxP compliance.

What must a vendor do to claim that their hardware and software are 'compliant' with 21 CFR Part 11?
No vendor can claim that his or her software products are certified Part 11 compliant. A vendor, instead, can say that he has all of the Technical Controls for 21 CFR Part 11 compliance built into his product. Remember, it is the responsibility of the user to implement the Procedural and Administrative Controls (correctly and consistently), along with using products with the correct Technical Controls, for overall Part 11 compliance.

Does Part 11 apply to instruments themselves that are not connected to computers but that have microprocessors within?
If such a system does not generate electronic records according to the definition of e-records in Part 11 (data starting its life written to durable media), and/or these e-records are not subject to the GxP regulations, then Part 11 does not apply.

Are electronic signatures always required on the creation of electronic records?
The 'Predicate Rules' (GxP) regulations determine what records must be signed, not Part 11. Not all e-records need to be signed. Check your predicate rules for what records must be signed, when and by whom.

Is a 'Gap Analysis' a necessary step to become 21 CFR Part11 compliant?
A Gap Analysis is not a specified requirement of Part 11; however, during the process of becoming Part 11 compliant, most firms undergo a Gap Analysis as part of their assessment/remediation phase.

If a GLP computer is in a lab with physical access control to the doors to the lab, but the application software on that lab computer has no logical access control, does this system comply with Part 11?
No. This is because there would be no way to control access to the system itself. There would be no record of who actually logged onto the system and when.

What are the expected means for reporting attempts at forging electronic signatures?
Although it is not specified in Part 11, most software programs that execute e-sigs and that have notification capabilities report attempts via an email notice to a database administrator.

What is an appropriate audit trail for an Excel spreadsheet? Some indicate you should track every single cell change, and others say it should be tracked the same way a document management system would do it (track final versions only; intermediate drafts don't count until all changes have been made and approved)?
The audit trail for Excel should capture changes to both the data and to formulas. Things like formatting changes (alignment/font) to cells do not have to be audit trailed.

Please further elaborate/define "Hashing"
Hashing can be used for accessing data or for data security. A hash is a number generated from a string of text. The hash is substantially smaller than the text itself, and is generated by a formula in such a way that it is unlikely that some other text will produce the same hash value. Hashes play a role in security systems where they're used to ensure that transmitted messages have not been tampered with. The sender generates a hash of the message, encrypts it, and sends it with the message itself. The recipient then decrypts both the message and the hash, produces another hash from the received message, and compares the two hashes. If they're the same, there is a very high probability that the message was transmitted intact.

In Part 11.300, controls for identification codes/passwords usage is listed under Subpart C -- Electronic Signatures. Are these requirements only applicable if your system is utilizing e-signatures? It seems that these should be applicable to any system with e-records.
The controls for password/user ID usage apply across the board for ERES systems. They apply to the proper management of electronic records in addition to executing compliant electronic signatures.

Given the fact that most of the systems needing to be compliant are usually found not to be compliant and are usually replaced, does it make sense to do a gap analysis or go directly to remediation?
Some feel that since most systems that have been assessed by gap analyses in the past have turned out to be non-compliant with Part 11, it would save time and money to not do a gap analysis. Like all compliance decisions that an organization must make, this is a personal one. The overall goal is to achieve compliance with Part 11 for applicable systems in order to provide reliability and trustworthiness for the ERES generated/managed by those systems.
How you get there is not regulated. Perhaps future FDA Part 11 guidance documents will comment on the 'no gap analysis' methodology.

Is an audit of a vendor enough to ensure that the technical controls (in their product) are all present and compliant?
In addition to a vendor audit, one must scrutinize the product itself and its implementation in your facility. Do not forget that validation of the applicable systems in your own environment is the user's responsibility (not to mention implementing the procedural and administrative controls for complete adherence to Part 11).

Could you define and provide examples of systems that are critical to "data integrity"?
For Part 11, data integrity is related to the trustworthiness of the electronic records generated/managed by critical systems. The FDA is most concerned about systems that are involved with drug distribution, drug approval, manufacturing and quality assurance because these systems pose the most risk in terms of product quality and/or public safety.

Technical solutions may take some time to implement; what is the FDA's position on timelines?
There is no fixed date for complete remediation. The Agency has often stated that it would exercise enforcement discretion if an organization takes the appropriate steps to put a plan in place that addresses which systems need to be compliant and what the firm will do to get the systems there. These plans must include all applicable systems, be detailed, have reasonable timelines and hold persons responsible for implementing them. Check out the FDA's "Enforcement Policy: Electronic Records; Electronic Signatures-Compliance Policy Guide; Guidance for FDA Personnel" from 1999 (www.fda.gov) if you want more information on enforcement.

What type of 'reporting' capability on audit trail data should be supported?
According to Part 11 §11.10 (e), audit trails must be secure, computer-generated and time-stamped to independently record the date and time of operator entries and actions that create, modify, or delete electronic records. Such audit trail documentation shall be retained for a period at least as long as that required for the subject electronic records and shall be available for agency review and copying. Audit trails should say 'who did what to your records and when (and why, for GLP)'. Part 11 does not specify the format for audit trails. This should be discussed in a forthcoming FDA guidance document for Part 11 audit trails.

For clinical data management systems, where does the audit trail begin: after first entry, or after the data has been verified and uploaded to the data management system?
The latter. Clinical research organizations are mandated to comply with 21 CFR Part 11, which requires tracking the activity and ownership of electronic clinical data in audit trails. If you are using Remote Data Entry (RDE) software for data entry, or especially a Web-based RDE, you need to exercise due diligence to protect your data from inadvertent or malicious changes.

How does the digital signature verify that the document hasn't been altered after signing?
A digital signature is computed using a set of rules and a mathematical algorithm such that the identity of the signatory and integrity of the data can be verified. Signature generation makes use of a private key to generate a digital signature. Signature verification makes use of a public key that corresponds to, but is not the same as, the private key. Each user possesses a private and public key pair. Public keys are obviously known to the public, while private keys are never shared. Anyone can verify the signature of a user by employing that user's public key. Only the possessor of the user's private key can perform signature generation. A hash function is used in the signature generation process to obtain a condensed version of data, called a message digest. The message digest is then incorporated into the mathematical algorithm to generate the digital signature. The digital signature is sent to the intended verifier along with the signed message. The verifier of the message and signature verifies the signature by using the sender's public key. The same hash function must also be used in the verification process. The hash function is specified in a separate standard.

For an HPLC system, are the parameters entered for a chromatographic run considered an electronic record?
For an analytical instrument, any information that is captured by a computerized workstation is considered either data or metadata. (Metadata is described as data-about-data. It's what puts the real data into logical context.) The second that any information hits the 'durable media' it then becomes an electronic record. Parameters that are typically captured by an HPLC system (i.e. flow rate, sample lot #, etc.) are considered metadata. This information should be saved and protected as part of the official electronic record.