Tuesday, October 18, 2016

Top 10 Proactive Controls 2016

10 Critical Security Areas That Web Developers Must Be Aware Of

Introduction

Insecure software is undermining our financial, healthcare, defense, energy, and other critical infrastructure worldwide. As our digital, global infrastructure gets increasingly complex and interconnected, the difficulty of achieving application security increases exponentially. We can
no longer afford to tolerate relatively simple security problems.

The goal of the Top 10 Proactive Controls project is to raise awareness about application security by describing the most important areas of concern that software developers must be aware of.

The Top Ten Proactive Controls 2016 is a list of security concepts that should be included in every software development project. They are ordered by importance, with control number 1 being the most important.

1. Verify for Security Early and Often
2. Parameterize Queries
3. Encode Data
4. Validate All Inputs
5. Implement Identity and Authentication Controls
6. Implement Appropriate Access Controls
7. Protect Data
8. Implement Logging and Intrusion Detection
9. Leverage Security Frameworks and Libraries
10. Error and Exception Handling


1 Verify for Security Early and Often

Control Description

In many organizations security testing is done outside of development testing loops, following a “scan-then-fix” approach: the security team runs a scanning tool or conducts a pen test, triages the results, and then presents the development team with a list of vulnerabilities to be fixed. This is often referred to as "the hamster wheel of pain". There is a better way.

Security testing needs to be an integral part of a developer’s software engineering practice. Just as you can’t “test quality in”, you can’t “test security in” by doing security testing at the end of a project. You need to verify security early and often, whether through manual testing or automated tests and scans. Include security while writing testing stories and tasks. Include the Proactive Controls in stubs and drivers. Security testing stories should be defined such that the lowest child story can be implemented and accepted in a single iteration; testing a Proactive Control must be lightweight.

Consider maintaining a sound story template, “As a <role>, I want <feature> so that <benefit>.” Consider data protections early. Include security up front when the definition of done is defined. Stretching fixes out over multiple sprints can be avoided if the security team makes the effort to convert scanning output into reusable Proactive Controls, avoiding entire classes of problems. Otherwise, approach the output of security scans as an epic, addressing the results over more than one sprint. Have spikes to do research and convert findings into defects, write the defects in Proactive Control terms, and have Q&A sessions with the security team to ensure that testing tasks actually verify the Proactive Control fixed the defect.

Take advantage of agile practices like Test Driven Development, Continuous Integration and “relentless testing”. These practices make developers responsible for testing their own work, through fast, automated feedback loops.

2 Parameterize Queries

Control Description

SQL Injection is one of the most dangerous web application risks. It is easy to exploit, with many open source automated attack tools available, and its impact can be devastating: through the simple insertion of malicious SQL code into your web application, the entire database could potentially be stolen, wiped, or modified. The web application can even be used to run dangerous commands against the operating system hosting your database.

The root cause of SQL injection is that the SQL query and its parameters are contained in one query string. To mitigate it, untrusted input should be prevented from being interpreted as part of a SQL command. The best way to do this is with the programming technique known as ‘Query Parameterization’, in which the SQL statements are sent to and parsed by the database server separately from any parameters.
Many development frameworks (Rails, Django, Node.js, etc.) employ an object-relational mapping (ORM) layer to abstract communication with a database. Many ORMs provide automatic query parameterization when using programmatic methods to retrieve and modify data, but developers should still be cautious when allowing user input into object queries (OQL/HQL) or other advanced queries supported by the framework. Proper defense in depth against SQL injection also includes technologies such as automated static analysis and proper database management system configuration. If possible, database engines should be configured to only support parameterized queries.
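
For illustration, here is a minimal sketch of query parameterization in PHP using PDO prepared statements ($pdo is an existing PDO connection; the table and column names are made up for the example):

<?php
// The SQL text contains only placeholders; user input is bound separately,
// so the database can never mistake the bound values for SQL.
$stmt = $pdo->prepare('SELECT * FROM accounts WHERE account_id = ? AND owner = ?');
$stmt->execute([$accountId, $ownerName]);
$rows = $stmt->fetchAll();
?>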


3 Encode Data

Control Description

Encoding is a powerful mechanism to help protect against many types of attacks, especially injection attacks. Essentially, encoding involves translating special characters into some equivalent form that is no longer dangerous in the target interpreter. Encoding is needed to stop various forms of injection, including command injection (Unix command encoding, Windows command encoding), LDAP injection (LDAP encoding) and XML injection (XML encoding). Another example is output encoding, which is necessary to prevent Cross-Site Scripting (HTML entity encoding, JavaScript hex encoding, etc.).

Web Development

Web developers often build web pages dynamically, consisting of a mix of static, developer-built HTML/JavaScript and data that was originally populated with user input or some other untrusted source. This input should be considered untrusted and dangerous, and it requires special handling when building a secure web application. Cross-Site Scripting (XSS) occurs when an attacker tricks your users into executing malicious script that was not originally built into your website. XSS attacks execute in the user's browser and can have a wide variety of effects.

Types of XSS
There are three main classes of XSS:
● Persistent
● Reflected
● DOM based


Persistent XSS (or Stored XSS) occurs when an XSS attack can be embedded in a website database or filesystem. This flavor of XSS is more dangerous because users will typically already be logged into the site when the attack is executed, and a single injection attack can affect many different users.

Reflected XSS occurs when the attacker places an XSS payload as part of a URL and tricks a victim into visiting that URL. When the victim visits the URL, the attack is launched. This type of XSS is less dangerous since it requires a degree of interaction between the attacker and the victim.

DOM based XSS is an XSS attack that occurs in the DOM, rather than in the HTML code. That is, the page itself does not change, but the client side code contained in the page executes differently due to malicious modifications of the DOM environment. It can only be observed at runtime or by investigating the DOM of the page.

For example, suppose the source code of the page http://www.example.com/test.html contains a script along these lines:

<script>
document.write("<b>Current URL</b> : " + document.baseURI);
</script>

A DOM Based XSS attack against this page can be accomplished by appending a script payload to the URL fragment, for example:

http://www.example.com/test.html#<script>alert(1)</script>

The page writes document.baseURI, fragment included, into the document, so the injected script executes in the victim's browser even though the server response never changed.

For server-rendered output, use a dedicated encoding library rather than ad-hoc string handling. A minimal PHP sketch using Zend\Escaper:

<?php
$escaper = new Zend\Escaper\Escaper('utf-8');
// somewhere in an HTML template
echo $escaper->escapeHtml($input);
?>

4 Validate All Inputs

Control Description

Any data which is directly entered by, or influenced by, users should be treated as untrusted. An application should check that this data is both syntactically and semantically valid (in that order) before using it in any way (including displaying it back to the user). Additionally, the most secure applications treat all variables as untrusted and provide security controls regardless of the source of that data.

Syntax validity means that the data is in the form that is expected. For example, an application may allow a user to select a four-digit “account ID” to perform some kind of operation. The application should assume the user is entering a SQL injection payload, and should check that the data entered by the user is exactly four digits in length and consists only of numbers (in addition to utilizing proper query parameterization). Semantic validity means that the data is meaningful: in the above example, the application should assume that the user is maliciously entering an account ID the user is not permitted to access, and should check that the user has permission to access said account ID.

Input validation must be wholly server-side: client-side controls may be used for convenience only. For example, JavaScript validation may alert the user that a particular field must consist of numbers, but the server must validate that the field actually does consist of numbers.

Background

A large majority of web application vulnerabilities arise from failing to correctly validate input, or not completely validating input. This “input” is not necessarily directly entered by users using a UI. In the context of web applications (and web services), this could include, but is not limited to:

● HTTP headers
● Cookies
● GET and POST parameters (including hidden fields)
● File uploads (including information such as the file name)

Similarly, in mobile applications, this can include:

● Interprocess communication (IPC, for example Android Intents)
● Data retrieved from backend web services
● Data retrieved from the device file system

Blacklisting vs Whitelisting

There are two general approaches to performing input syntax validation, commonly known as blacklisting and whitelisting:

● Blacklisting attempts to check that a given user input does not contain “known to be malicious” content. This is similar to how an antivirus program operates: as a first line of defense, it checks whether a file matches known malicious content, and if it does, it rejects the file. This tends to be the weaker security strategy.
● Whitelisting attempts to check that a given user input matches a set of “known good” inputs. For example, a web application may allow you to select one of three cities; the application will then check that one of these cities has been selected, and reject all other possible input. Character-based whitelisting is a form of whitelisting where an application checks that user input contains only “known good” characters, or matches a known format. For example, this may involve checking that a username contains only alphanumeric characters and exactly two numbers.

When building secure software, whitelisting is the generally preferred approach. Blacklisting is prone to error, can be bypassed with various evasion techniques, and needs to be updated with new “signatures” when new attacks are created.

Regular Expressions

Regular expressions offer a way to check whether data matches a specific pattern; this is a great way to implement whitelist validation.
When a user first registers for an account on a hypothetical web application, some of the first pieces of data required are a username, password and email address. If this input came from a malicious user, the input could contain attack strings. By validating the user input to ensure that
each piece of data contains only the valid set of characters and meets the expectations for data length, we can make attacking this web application more difficult.

Let’s start with the following regular expression for the username:

^[a-z0-9_]{3,16}$

This regular expression is a whitelist of good characters: it only allows lowercase letters, numbers and the underscore character. The size of the username is also limited to 3 to 16 characters in this example.
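
Applied in PHP, the check is a one-liner (a sketch; the $username variable is illustrative):

<?php
// preg_match() returns 1 only when the whole input matches the whitelist.
if (preg_match('/^[a-z0-9_]{3,16}$/', $username) !== 1) {
    exit('Invalid username');
}
?>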
Here is an example regular expression for the password field:

^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[@#$%]).{10,4000}$

This regular expression ensures that a password is 10 to 4000 characters in length and includes an uppercase letter, a lowercase letter, a number and a special character (one or more uses of @, #, $, or %).
Here is an example regular expression for an email address (per the HTML5 specification, http://www.w3.org/TR/html5/forms.html#valid-e-mail-address):

^[a-zA-Z0-9.!#$%&'*+/=?^_`{|}~-]+@[a-zA-Z0-9-]+(?:\.[a-zA-Z0-9-]+)*$
Care should be exercised when creating regular expressions. Poorly designed expressions may result in potential denial of service conditions (aka ReDoS). A good static analysis or regular expression testing tool can help product development teams proactively find such cases.

There are also special cases for validation where regular expressions are not enough. If your application handles markup (untrusted input that is supposed to contain HTML), it can be very difficult to validate. Encoding is also difficult, since it would break all the tags that are supposed to be in the input. Therefore, you need a library that can parse and clean HTML-formatted text. A regular expression is not the right tool to parse and sanitize untrusted HTML.

PHP Example

Available as standard since v5.2, the PHP filter extension contains a set of functions that can be used to validate user input and also to sanitize it by removing illegal characters. It also provides a standard strategy for filtering data.

Example of both validation and sanitization:
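
A minimal sketch using filter_var (the $email variable is illustrative):

<?php
// Sanitize first (strip characters that cannot appear in an e-mail address),
// then validate the sanitized result.
$sanitized_email = filter_var($email, FILTER_SANITIZE_EMAIL);
if (filter_var($sanitized_email, FILTER_VALIDATE_EMAIL)) {
    echo "This sanitized email address is considered valid.\n";
}
?>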

Caution: Regular Expressions

Please note, regular expressions are just one way to accomplish validation. Regular expressions can be difficult to maintain or understand for some developers. An alternative is to write validation methods which express the rules more clearly.

Caution: Validation for Security

Input validation does not necessarily make untrusted input “safe”, since it may be necessary to accept potentially dangerous characters as valid input. The security of the application should be enforced where that input is used. For example, if input is used to build an HTML response, then the appropriate HTML encoding should be performed to prevent Cross-Site Scripting attacks. Likewise, if input is used to build a SQL statement, Query Parameterization should be used. In both of these (and other) cases, input validation should NOT be relied on for security!

5 Implement Identity and Authentication Controls

Control Description

Authentication is the process of verifying that an individual or an entity is who it claims to be. Authentication is commonly performed by submitting a user name or ID and one or more items of private information that only a given user should know.

Session Management is a process by which a server maintains the state of an entity interacting with it. This is required for a server to remember how to react to subsequent requests throughout a transaction. Sessions are maintained on the server by a session identifier which can be passed back and forth between the client and server when transmitting and receiving requests. Sessions should be unique per user and computationally impossible to predict.

Identity Management is a broader topic that not only includes authentication and session management, but also covers advanced topics like identity federation, single sign-on, password management tools, delegation, identity repositories and more.

Below are some recommendations for secure implementation, with code examples for some of them.

Use Multi-Factor Authentication

Multi-factor authentication (MFA) ensures that users are who they claim to be by requiring them to identify themselves with a combination of:

● Something they know – password or PIN
● Something they own – token or phone
● Something they are – biometrics, such as a fingerprint


Please see Authentication Cheat Sheet for further details.

Mobile Application: Token-Based Authentication

When building mobile applications, it's recommended to avoid storing/persisting authentication credentials locally on the device. Instead, perform initial authentication using the username and password supplied by the user, and then generate a short-lived access token which can be
used to authenticate a client request without sending the user's credentials.

Implement Secure Password Storage

In order to provide strong authentication controls, an application must securely store user credentials. Furthermore, cryptographic controls should be in place such that if a credential (e.g. a password) is compromised, the attacker does not immediately have access to this information.

Please see the Password Storage Cheat Sheet for further details.

Implement Secure Password Recovery Mechanism

It is common for an application to have a mechanism for a user to gain access to their account in the event they forget their password. A good design workflow for a password recovery feature will use multi-factor authentication elements (for example, ask a security question: something they know; then send a generated token to a device: something they own).

Please see the Forgot Password Cheat Sheet and the Choosing and Using Security Questions Cheat Sheet for further details.

Session: Generation and Expiration

On any successful authentication and re-authentication the software should generate a new session and session ID. To minimize the time period in which an attacker can launch attacks over active sessions and hijack them, it is mandatory to set expiration timeouts for every session after a specified period of inactivity. The length of the timeout should be inversely proportional to the value of the data protected.

Please see the Session Management Cheat Sheet for further details.
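
In PHP, for instance, a fresh session identifier can be issued at login time with the built-in session_regenerate_id() function (a minimal sketch; $userId is illustrative):

<?php
session_start();

// ... after the user's credentials have been verified ...
session_regenerate_id(true); // new session ID; the old one is invalidated
$_SESSION['user_id'] = $userId;
?>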
Require Reauthentication for Sensitive Features

For sensitive transactions, like changing a password or changing the shipping address for a purchase, it is important to require the user to re-authenticate and, if feasible, to generate a new session ID upon successful authentication.

PHP Example for Password Hash

Below is an example of password hashing in PHP using the password_hash() function (available since 5.5.0), which defaults to using the bcrypt algorithm. The example uses a work factor of 15.
<?php
$cost = 15;
$hashedPassword = password_hash($password, PASSWORD_BCRYPT, ['cost' => $cost]);
?>
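
At login time, the stored hash is then checked with the companion password_verify() function, for example:

<?php
if (password_verify($password, $hashedPassword)) {
    // password matches the stored hash
}
?>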

6 Implement Access Controls

Control Description

Authorization (Access Control) is the process of deciding whether a request to access a particular feature or resource should be granted or denied. It should be noted that authorization is not equivalent to authentication (verifying identity); these terms and their definitions are frequently confused.

Access Control design may start simple, but can often grow into a rather complex and design-heavy security control. The following "positive" access control design requirements should be considered at the initial stages of application development. Once you have chosen a specific access control design pattern, it is often difficult and time consuming to re-engineer access control in your application with a new pattern. Access Control is one of the main areas of application security design that must be thought through up front, especially when addressing requirements like multi-tenancy and horizontal (data-specific) access control.

Force All Requests to go Through Access Control Checks

Most frameworks and languages only check a feature for access control if a programmer adds that check. The inverse is a more security-centric design, where all access is first verified. Consider using a filter or other automatic mechanism to ensure that all requests go through
some kind of access control check.
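
A minimal sketch of such a filter in PHP, written as a front controller that runs before any feature code (the accessController(), currentUser() and dispatch() helpers are illustrative):

<?php
// Every request enters here first, so no feature can skip the check.
if (!accessController()->isAllowed(currentUser(), $_SERVER['REQUEST_URI'])) {
    http_response_code(403);
    exit('Access denied');
}
dispatch($_SERVER['REQUEST_URI']); // hand off to the requested feature
?>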

Deny by Default

In line with automatic access control checking, consider denying access to any feature that has not been explicitly configured for access control. Normally the opposite is true: newly created features automatically grant users full access until a developer adds the check.

Principle of Least Privilege

When designing access controls, each user or system component should be allocated the minimum privilege required to perform an action for the minimum amount of time.

Avoid Hard-Coded Access Control Checks

Very often, access control policy is hardcoded deep in application code. This makes auditing or proving the security of that software very difficult and time consuming. Access control policy and application code, when possible, should be separated. Another way of saying this is that your enforcement layer (checks in code) and your access control decision making process (the access control "engine") should be separated when possible.

Code to the Activity

Most web frameworks use role based access control as the primary method for coding enforcement points. While it's acceptable to use roles in access control mechanisms, coding specifically to the role in application code is an anti-pattern. Consider checking in code whether the user has access to a feature, as opposed to checking what role the user has. Such a check should take into account the specific data/user relationship. For example, a user may be able to modify projects in general given their role, but access to a given project should also be checked if business/security rules dictate explicit permissions to do so.

So instead of hard-coding role checks all throughout your code base:

if (user.hasRole("ADMIN") || user.hasRole("MANAGER")) {
    deleteAccount();
}

Please consider the following instead:

if (user.hasAccess("DELETE_ACCOUNT")) {
    deleteAccount();
}

Server-Side Trusted Data Should Drive Access Control

The vast majority of the data you need to make an access control decision (who is the user and are they logged in, what entitlements does the user have, what is the access control policy, what feature and data is being requested, what time is it, what is the geolocation, etc.) should be retrieved server-side in a standard web or web service application. Policy data such as a user's role or an access control rule should never be part of the request. In a standard web application, the only client-side data that is needed for access control is the id or ids of the data being accessed. Almost all other data needed to make an access control decision should be retrieved server-side.

7 Protect Data

Control Description

Encrypting data in Transit

When transmitting sensitive data, at any tier of your application or network architecture, encryption-in-transit of some kind should be considered. TLS is by far the most common and widely supported model used by web applications for encryption in transit. Despite published weaknesses in specific implementations (e.g. Heartbleed), it is still the de facto and recommended method for implementing transport layer encryption.

Encrypting data at Rest

Cryptographic storage is difficult to build securely. It's critical to classify the data in your system and determine which data needs to be encrypted, for example the need to encrypt credit cards per the PCI DSS compliance standard. Also, any time you start building your own low-level cryptographic functions, ensure you are, or have the assistance of, a deep applied-cryptography expert. Instead of building cryptographic functions from scratch, it is strongly recommended that peer reviewed and open libraries be used, such as Google's Keyczar project, Bouncy Castle and the functions included in SDKs. Also, be prepared to handle the more difficult aspects of applied crypto, such as key management, overall cryptographic architecture design, and tiering and trust issues in complex software.

A common weakness in encrypting data at rest is using an inadequate key, or storing the key along with the encrypted data (the cryptographic equivalent of leaving a key under the doormat). Keys should be treated as secrets and should only exist on the device in a transient state, e.g. entered by the user so that the data can be decrypted, and then erased from memory. Other alternatives include the use of specialized crypto hardware such as a Hardware Security Module (HSM) for key management and cryptographic process isolation.

Implement Protection in Transit

Make sure that confidential or sensitive data is not exposed by accident during processing. It may be more accessible in memory; or it could be written to temporary storage locations or log files, where it could be read by an attacker.

Mobile Application: Secure Local Storage

In the context of mobile devices, which are regularly lost or stolen, secure local data storage requires proper techniques. When an application does not properly implement its storage mechanisms, serious information leakage may result (for example: authentication credentials, access tokens, etc.). When managing critically sensitive data, the best path is to never save that data on a mobile device, even using known mechanisms such as the iOS keychain.

8 Implement Logging and Intrusion Detection

Control Description

Application logging should not be an afterthought or limited to debugging and troubleshooting.

Logging is also used in other important activities:

● Application monitoring
● Business analytics and insight
● Activity auditing and compliance monitoring
● System intrusion detection
● Forensics


Logging and tracking security events and metrics helps to enable "attack-driven defense": making sure that your security testing and controls are aligned with real-world attacks against your system.

To make correlation and analysis easier, follow a common logging approach within the system and across systems where possible, using an extensible logging framework like SLF4J with Logback or Apache Log4j2, to ensure that all log entries are consistent.

Process monitoring, audit and transaction logs/trails etc. are usually collected for different purposes than security event logging, and this often means they should be kept separate. The types of events and details collected will tend to be different. For example, a PCI DSS audit log will contain a chronological record of activities to provide an independently verifiable trail that permits reconstruction, review and examination to determine the original sequence of attributable transactions.

It is important not to log too much, or too little. Make sure to always log the timestamp and identifying information like the source IP and user ID, but be careful not to log private or confidential data, opted-out data or secrets. Use knowledge of the intended purposes to guide what, when and how much to log. To protect against Log Injection (aka log forging), make sure to perform encoding on untrusted data before logging it.
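
As a minimal PHP sketch of that kind of encoding (the destination logger is illustrative):

<?php
// Neutralize CR/LF so attacker-controlled input cannot forge extra log lines.
$safeInput = str_replace(["\r", "\n"], ['\r', '\n'], $userInput);
error_log('login failed for user: ' . $safeInput);
?>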

The OWASP AppSensor Project explains how to implement intrusion detection and automated response in an existing web application: where to add sensors or detection points, and what response actions to take when a security exception is encountered in your application. For example, if a server-side edit catches bad data that should already have been edited at the client, or catches a change to a non-editable field, then you either have some kind of coding bug or (more likely) somebody has bypassed client-side validation and is attacking your app. Don’t just log this case and return an error: throw an alert, or take some other action to protect your system, such as disconnecting the session or even locking the account in question.

In mobile applications, developers use logging functionality for debugging purposes, which may lead to sensitive information leakage. These console logs are accessible not only through the Xcode IDE (on iOS) or Logcat (on Android) but also by any third party application installed on the same device. For this reason, best practice is to disable logging functionality in production releases.

Disable logging in release Android application

The simplest way to avoid compiling the Log class into a production release is to use the Android ProGuard tool to remove logging calls by adding the following option in the proguard-project.txt configuration file:

-assumenosideeffects class android.util.Log
{
public static boolean isLoggable(java.lang.String, int);
public static int v(...);
public static int i(...);
public static int w(...);
public static int d(...);
public static int e(...);
}
Disable logging in release iOS application

This technique can also be applied to iOS applications by using the preprocessor to remove any logging statements:
#ifndef DEBUG
#define NSLog(...)
#endif

9 Leverage Security Frameworks and Libraries

Control Description

Starting from scratch when developing security controls for every web application, web service or mobile application leads to wasted time and massive security holes. Secure coding libraries and software frameworks with embedded security help software developers guard against security-related design and implementation flaws. A developer writing an application from scratch might not have sufficient time and budget to implement security features, and different industries have different standards and levels of security compliance. When possible, the emphasis should be on using the existing secure features of frameworks rather than importing third party libraries. It is preferable to have developers take advantage of what they're already using instead of forcing yet another library on them.

Web application security frameworks to consider include:

● Spring Security
● Apache Shiro
● Django Security
● Flask security

One must also consider that not all frameworks are immune from security flaws, and some have a large attack surface due to the many features and third-party plugins available. A good example is WordPress (a very popular framework for getting a simple website off the ground quickly), which pushes security updates but cannot guarantee the security of third-party plugins or applications. Therefore it is important to build in additional security where possible, update frequently, and verify frameworks for security early and often like any other software you depend upon.

Vulnerabilities Prevented

● Secure frameworks and libraries will typically prevent common web application vulnerabilities such as those listed in the OWASP Top Ten, particularly those based on syntactically incorrect input (e.g. supplying a JavaScript payload instead of a username).
● It is critical to keep these frameworks and libraries up to date, as described in the Top Ten 2013 risk “Using Components with Known Vulnerabilities”.


10 Error and Exception Handling

Control Description

Implementing correct error and exception handling isn't exciting, but like input data validation, it is an important part of defensive coding, critical to making a system reliable as well as secure.

Mistakes in error handling can lead to different kinds of security vulnerabilities:

1. Leaking information to attackers, helping them to understand more about your platform and design (CWE-209). For example, returning a stack trace or other internal error details can tell an attacker too much about your environment. Returning different types of errors in different situations (for example, "invalid user" vs "invalid password" on authentication errors) can also help attackers find their way in.

2. Not checking errors, leading to errors going undetected, or to unpredictable results such as CWE-391 (Unchecked Error Condition). Researchers at the University of Toronto have found that missing error handling, or small mistakes in error handling, are major contributors to catastrophic failures in distributed systems (https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-yuan.pdf).

Error and exception handling extends to critical business logic as well as security features and framework code. Careful code reviews, negative testing (including exploratory testing and pen testing), fuzzing (https://www.owasp.org/index.php/Fuzzing) and fault injection can all help in finding problems in error handling. One of the most famous automated tools for this is Netflix's Chaos Monkey.

Positive Advice

1. It’s recommended to manage exceptions in a centralized manner to avoid duplicated try/catch blocks in the code, and to ensure that all unexpected behaviors are correctly handled inside the application.
2. Ensure that error messages displayed to users do not leak critical data, but are still verbose enough to explain the issue to the user.
3. Ensure that exceptions are logged in a way that gives enough information for QA, forensics or incident response teams to understand the problem.

Vulnerabilities Prevented

● All OWASP Top Ten

Top 10 Mapping

Each of the above controls helps prevent one or more of the OWASP Top Ten risks. Below is a summary of the mapping between each OWASP Top 10 Proactive Control and the OWASP Top 10 risks it helps to mitigate.


OWASP Top 10 Proactive Controls, and the OWASP Top 10 risks prevented:

C1: Verify for Security Early and Often
 ➢ All Top 10
C2: Parameterize Queries
 ➢ A1 Injection
C3: Encode Data
 ➢ A1 Injection
 ➢ A3 Cross Site Scripting (XSS) (in part)
C4: Validate All Inputs
 ➢ A1 Injection (in part)
 ➢ A3 Cross Site Scripting (XSS) (in part)
 ➢ A10 Unvalidated Redirects and Forwards
C5: Implement Identity and Authentication Controls
 ➢ A2 Broken Authentication and Session Management
C6: Implement Access Controls
 ➢ A4 Insecure Direct Object References
 ➢ A7 Missing Function Level Access Control
C7: Protect Data
 ➢ A6 Sensitive Data Exposure
C8: Implement Logging and Intrusion Detection
 ➢ All Top 10
C9: Leverage Security Frameworks and Libraries
 ➢ All Top 10
C10: Error and Exception Handling
 ➢ All Top 10



Thursday, October 13, 2016

Introduction to HP ALM (Quality Center)

Quality Center was initially a test management tool developed by Mercury Interactive.

It is now developed by HP as an Application Lifecycle Management (ALM) tool that supports various phases of the software development life cycle.

ALM is a web based tool that helps organizations manage the application lifecycle, right from project planning and requirements gathering through testing and deployment, which is otherwise a time-consuming task.

ALM also provides integration with other HP products such as UFT and LoadRunner.


Why use HP ALM?

The various stakeholders involved in a typical project are –
• Developer
• Tester
• Business Analysts
• Project Managers
• Product Owners

These stakeholders perform a diverse set of activities that need to be communicated to all concerned team members.

If we do not maintain a centralized repository to record, maintain and track all the artifacts related to the product, the project will unquestionably FAIL.

We also need a mechanism to document and collaborate on all testing and development activities.



Enter HP ALM!
• It enables all the stakeholders to interact and coordinate, to achieve the project goals.
• It provides robust tracking & reporting and seamless integration of various project related tasks.
• It enables detailed project analysis and effective management.
• ALM can connect to our email systems and send emails about any changes (like requirement changes, defect raising, etc.) to all desired team members.

Evolution of ALM

It is important to understand the history of ALM.



• Quality Center was earlier known as TestDirector, which was developed by Mercury Interactive.
• In 2008, Version 8 was released, and the product was renamed Quality Center.
• Later, HP acquired Mercury Interactive and rebranded all Mercury products as HP.
• So Mercury Quality Center became HP Quality Center.

• In 2011, Version 11 was released, and Quality Center was rechristened HP ALM.


Architecture of QC

Now let us understand the technology part of HP-ALM. ALM is an enterprise application developed using Java 2 Enterprise Edition (J2EE) that can have MS SQL Server or Oracle as its back end. ALM has 3 components – Client, Application Server and Database Server.
1. HP ALM client: When an end user/tester accesses the URL of ALM, the client components are downloaded onto the client's system. The ALM client components help the user interact with the server using .NET and COM technologies over a secured connection (HTTPS).
2. ALM server/Application server: The application server usually runs on a Windows or Linux platform and caters to the client requests. It makes use of the Java Database Connectivity (JDBC) driver to communicate with the database servers.

3. Database servers: The Database layer stores three schemas.

• Site Administration schema: It stores information related to the domains, users, and site parameters.

• Lab Project schema: This schema stores lab information related to functional and performance testing on remote hosts, as well as Performance Center server data.

• Project schema: Stores project information, such as work item/data created by the user under the project area. Each project has its own schema and they are created on the same database server as the Site Administration schema.


HP ALM Editions:

HP ALM is a commercially licensed tool and HP distributes ALM in 4 different flavors.

ALM Edition Feature Comparison

Each license allows users to access certain ALM functionalities. The following table lists the features that a particular license gives you:


Let's study why you would purchase a particular version and whom it is suited for:
• HP ALM Essentials – This is for corporates that need just the basic features for supporting their entire software life cycle. It has access to requirements management, test management and defect management.
• HP QC Enterprise Edition – This license holds good for corporates who would like to use ALM exclusively for testing purposes. It also provides integration with Unified Functional Tester (UFT).
• HP ALM Performance Center Edition – This license best suits organizations that would like to use HP ALM to drive HP LoadRunner scripts. It helps users maintain, manage, schedule, execute and monitor performance tests.

ALM Workflow

To learn the ALM workflow, Let's first study a typical test process-


• We begin with planning and drafting release details: determine the number of cycles in each release and the scope of each release.
• For a given release and cycle, we draft the requirements specifications.
• Based on the requirements, test plans and test cases are created.
• The next stage is executing the created test plans.
• The next stage in this test process is tracking and fixing the defects detected in the execution stage.
• During all stages, analysis is done, and reports and graphs are generated for test metrics.

HP ALM provides a module catering to each stage of the Testing Process.

Sunday, September 25, 2016

Open source DevOps Tools




DevOps Tools

1. Nagios (& Icinga)

Infrastructure monitoring is a field that has so many solutions… from Zabbix to Nagios to dozens of other open-source tools. Despite the fact that there are now much newer kids on the block, Nagios is a veteran monitoring solution that is highly effective because of the large community of contributors who create plugins for the tool. Nagios does not include all the abilities that we had wanted around the automatic discovery of new instances and services, so we had to work around these issues with the community’s plugins. Fortunately, it wasn’t too hard, and Nagios works great.

We also looked into Icinga, which was originally created as a fork of Nagios. Its creators aim to take Nagios to the next level with new features and a modern user experience. There is a debate within the open source community about the merits of Nagios and its stepchild, but for now we are continuing to use Nagios and are satisfied with its scale and performance. The switch to newer technology, such as Icinga, may be appropriate in the future as we progress.


2. Monit

Sometimes the simplest tools are the most useful, as proven by the simple watchdog Monit. Its role is to ensure that any given process on a machine is up and running appropriately. For example, if a failure occurs in Apache, Monit will restart the Apache process. It is very easy to set up and configure, and it is especially useful for a multi-service architecture with hundreds of micro-services. If you are using Monit, make sure to monitor the restarts that it executes in order to surface problems and implement solutions (rather than just restarting and ignoring the failure). You can do this by monitoring Monit’s log files and ensuring that you are alerted to every restart.
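
A minimal sketch of what a Monit check for Apache can look like (paths and service names vary by distribution):

check process apache with pidfile /var/run/apache2.pid
    start program = "/etc/init.d/apache2 start"
    stop program = "/etc/init.d/apache2 stop"
    if failed port 80 protocol http then restart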


3. ELK – Elasticsearch, Logstash, Kibana – via Logz.io

The ELK Stack is the most common log analytics solution in the modern IT world. It collects logs from all services, applications, networks, tools, servers, and more in an environment into a single, centralized location for processing and analysis. We use it for analytical purposes (e.g., to troubleshoot problems, monitor services, and reduce the time it takes to solve operational issues). Another use for this tool is for security and auditing (e.g., to monitor changes in security groups and changes in permissions). After receiving alerts on these issues, it is easy to act on unauthorized users and activities. We also use ELK for business intelligence, such as monitoring our users and their behavior. You can set up your own ELK or buy it as-a-service. We’ve written a guide for the community on using ELK to monitor your application performance.



4. Consul.io


Consul is a great fit for service discovery and configuration in modern, elastic applications that are built from microservices. The open-source tool makes use of the latest technology in providing internal DNS names for services. It acts as a kind of broker to help you sign and register names, enabling you to access service names instead of specific machines. If, for example, you have a cluster of multiple machines, you can simply register them as a single entity under Consul and access the cluster easily. We praise this tool for its efficiency, although we still feel there is more that can be done with it. If you also use it, it would be great to hear about your own use case.


5. Jenkins

Everyone knows Jenkins, right? It’s not the fastest or the fanciest, but it’s really easy to start to use and it has a great ecosystem of plugins and add-ons. It is also optimized for easy customization. We have configured Jenkins to build code, create Docker containers (see the next item), run tons of tests, and push to staging/production. It’s a great tool, but there are some issues regarding scaling and performance (which isn’t so unusual). We’ve explored other cool solutions such as Travis and CircleCI, which are both hosted solutions that don’t require any maintenance on our side. For now, however, since we’ve already invested in Jenkins, we’ll continue with it.


6. Docker

Everything that can be said about how Docker is transforming IT environments has already been said. It’s great... life changing, even (although we’re still experiencing some challenges with it). We use Docker in production for most services. It eases configuration management, control issues, and scaling by allowing containers to be moved from one place to another.

We have developed our SaaS solution with a twelve-layer pipeline of data processing. Together with Jenkins and Docker, we have been able to run a full pipeline across all layers on a single Mac. It would be wrong to say that there aren’t any complications with Docker, as even small containers can take a significant amount of time to build. However, we want to ensure that our developers are as satisfied as possible and enable them to work rapidly. With all of the management involved in storage, security, networking — and everything surrounding containers — this can be a challenge.

We see Docker progressing and look forward to welcoming the company’s new management and orchestration solutions. For those who might be having issues with Docker, we’ve also compiled a list of challenges and solutions when migrating to Docker.


7. Ansible

Again, simplicity is key. Ansible is a configuration management tool that is similar to Puppet and Chef. Personally, we found those two to add more overhead and complexity for our use case, so we decided to go with Ansible instead. We know that Puppet and Chef probably have a richer feature set, but simplicity was our desired KPI here. We see some tradeoffs between configuration management using Ansible and the option to simply kill and spin up new application instances using a Docker container. With Docker, we almost never upgrade machines but opt to spin up new machines instead, which reduces the need to upgrade our EC2 cloud instances. Ansible is used mostly for deployment configuration. We use it to push changes and re-configure newly-deployed machines. In addition, its ecosystem is great, with an easy option to write custom modules.
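
For a flavor of that simplicity, here is a minimal playbook sketch (the host group and package are illustrative):

---
- hosts: webservers
  become: yes
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started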



8. Collectd/Collectl

Collectd/l are nifty little tools that gather and store statistics about the system on which they run and are much more flexible than other tools. They allow users to measure the values of multiple system metrics and unlike other log collection tools that are designed to measure specific system parameters, Collectd/l can monitor different parameters in parallel. We use these two tools to measure customer performance parameters and ship them to our ELK-as-a-Service platform. We’ve specifically wrapped a Collectl agent in a Docker container and push it with Ansible to all of our servers. It collects information every couple of seconds and then ships it to ELK to allow us to run reports and send alerts. If you’d like to see a specific example of how we do this process in our environment and how others can do the same, we’ve created a guide for everyone.



9. Git (GitHub)




Git was created 10 years ago following the Linux community’s need for SCM (Source Control Management) software that could support distributed systems. Git is probably the most common source management tool available today. After running Git internally for a short period of time, we realized that we were better suited with GitHub. In addition to its great forking and pull request features, GitHub also has plugins that can connect with Jenkins to facilitate integration and deployment. I assume that mentioning Git to modern IT teams is not breaking news, but I decided to add it to the list due to its great value to us.

DevOps






DevOps, a clipped compound of "development" and "operations", is a culture, movement or practice that emphasizes the collaboration and communication of both software developers and other information-technology (IT) professionals while automating the process of software delivery and infrastructure changes.



Monday, September 19, 2016

Correlation in JMeter





Correlation in JMeter

What is correlation and why is it required?

Correlation is the most important aspect of scripting. It generally includes fetching dynamic data from preceding requests/calls and posting it to the subsequent requests.

Let's take an example to find out why exactly we need correlation-
Suppose we have recorded a scenario in which -
> User enters login details and clicks the OK button
> The home page opens and the user takes further actions

Now, if we just playback this script, the test will fail even for a single user. This is because of the authentication mechanism used. When we login to a website, session variables are dynamically created. These session variables are passed to the subsequent requests and help validation and authentication of the actions performed. So, one cannot just record and playback the requests having these variables. Here, we need to correlate the web requests with the dynamic variables. And for correlation, we need to use the "Regular Expression Extractor" which makes use of regular expressions.

A brief insight into regular expressions-
Regular expressions are used to fetch data from a string based on a search pattern. Basically, what we do is- in order to extract any value (generally a dynamically created value) from a string (text response), we define a left bound of the variable then some wildcard characters and then right bound- (Left Bound)(Wildcard Characters)(Right Bound)

E.g. if we have a text response like-
.......__EVENTVALIDATION"value="weriudflsdfspdfusdjfsisdpfjpsdfohsdihffgdfgpdfjsdjfpj" />...
and we need to extract the value of the Event validation variable using regular expressions, the regular expression for the same will be-
__EVENTVALIDATION" value="(.+?)" />
where, Left Bound = __EVENTVALIDATION" value="
Wildcard characters = (.+?)
Right Bound = " />

If you do not want to get deeper into regular expressions, then the wildcard characters (.+?) would suffice in most of the cases. For more information on regular expressions and meaning of each wild card character visit http://www.regular-expressions.info/tutorialcnt.html.

Regular Expression Extractor-


Coming back to JMeter, consider an example where we have two operations-
1. User launch website
2. User fill details and click on OK button
Now, the call "User launch website" creates a dynamic event validation variable, which we can check in the Response Data tab of the "View Result Tree" listener for the call. The value of this variable is then passed to the subsequent call related to "User fill details and click on OK button" as an HTTP post parameter.

Steps for correlating the Event validation values-

1. Run the script containing the both the above stated operations
2. Go to Response tab (Text mode) in "View Result Tree" listener of "User launch website" operation. BTW, we see the second operation "User fill details and click on OK button" in red because it is not yet correlated.



3. Create a regular expression for extracting the value of the Event validation variable. As stated above, the regular expression for this will be- __EVENTVALIDATION" value="(.+?)" />

4. Go to http request under "User Launch Website" transaction controller-> Add -> Post Processor -> Regular Expression Extractor.


Adding "Regular Expression Extractor" control


Regular Expression Extractor Parameters Filled

5. The reference name inserted is the name of the variable created that will capture the Event validation value generated by the http request under "User launch website" operation.
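
For reference, the extractor fields can be filled in along these lines (the reference name is our own choice; any name would work):

Reference Name: EventValidation
Regular Expression: __EVENTVALIDATION" value="(.+?)" />
Template: $1$
Match No.: 1
Default Value: NOT_FOUND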

6. Now pass this variable to the subsequent http request under "User fill details and click on OK button" as a post parameter, overriding the already present hard-coded value of the Event validation variable. The variable is referenced with ${...} syntax, e.g. ${EventValidation}.


Request without correlation (Hard-coded values)



Request with correlation (Dynamic values)

7. Run the Test plan again. All green? That's it.

Customers report strange hissing sound when the iPhone 7 is under stress







THOSE lucky enough to get their hands on a new iPhone 7 have been reporting a strange defect with the device.


When the handset is pushed to its processing limits it has a hissy fit, according to users. Basically, if the phone is working overtime by, for example, running lots of applications at once, it begins to make a faint hissing sound.

The noise was first pointed out by Stephen Hackett at 512 Pixels and was quickly followed by other users reporting the same thing.

In terms of malfunctions it absolutely pales in comparison to the exploding battery issue in Samsung’s Note 7 that prompted a global recall recently. But nevertheless, Apple customers and technologists have been debating the exact cause of the curious hissing sound.

“Some suspect coil whine or similar electromagnetic effects, but there’s no guarantee that this is the case,” claimed Jon Fingas from Engadget.

Not all new phones are making the unsettling noise though. A number of customers reported putting their iPhone 7 under enormous stress and heard no hissing sound at all, leading some to speculate the problem could be a manufacturing issue rather than an inherent design quirk.

Samsung begins to replace Galaxy Note 7





Samsung Electronics has begun delivering the new Galaxy Note 7 to users after a worldwide recall following reports of several handsets exploding.

"The exchange program began today (Monday) and is being carried out without problems," a Samsung spokesperson told EFE.

Note 7 users who had returned their old devices will receive a new smartphone of the same model and color unless they opted for a refund.

In South Korea, the exchange is taking place 17 days after the company announced the recall and will take place in other parts of the world in the coming days, except Canada and Singapore, where the company began handing out the new devices last week.

The new Galaxy Note 7 has a green battery indicator that separates it from the earlier version.

Of the 2.5 million Note 7 sold worldwide since its launch on August 19, around 400,000 were sold in South Korea and one million in the US.

Samsung recalled Note 7 on September 2 after admitting that in 35 cases the devices had caught fire while they were being charged owing to faulty batteries.

Wednesday, March 23, 2016

Assertions in JMeter





Assertions in JMeter

Assertions help verify that your server under test returns the expected results.
Following are some commonly used Assertions in JMeter:


• Response Assertion
• Duration Assertion
• Size Assertion
• XML Assertion
• HTML Assertion



Response Assertion


The response assertion lets you add pattern strings to be compared against various fields of the server response.
For example, you send a user request to the website http://www.google.com and get the server response. You can use a Response Assertion to verify that the server response contains an expected pattern string (e.g. "OK").

Duration Assertion

The Duration Assertion tests that each server response was received within a given amount of time. Any response that takes longer than the given number of milliseconds (specified by the user) is marked as a failed response.
For example, if a user request sent to www.google.com by JMeter gets a response within the expected time of 5 ms, the test case passes; otherwise, the test case fails.


Size Assertion

The Size Assertion tests that each server response contains the expected number of bytes. You can specify that the size be equal to, greater than, less than, or not equal to a given number of bytes.
For example, if JMeter sends a user request to www.google.com and gets a response packet smaller than the expected 5000 bytes, the test case passes; otherwise, the test case fails.

XML Assertion
The XML Assertion tests that the response data consists of a formally correct XML document.


HTML Assertion
The HTML Assertion allows the user to check the HTML syntax of the response data. This means the response data must meet HTML syntax rules.



Hands-on - Assertion

We will continue with the script we developed earlier.
In this test, we use a Response Assertion to check that the response packet from www.google.com matches an expected string.

Roadmap of test:


The response assertion control panel lets you add pattern strings to be compared against various fields of the response.
Step 1) Add Response Assertion

Right-Click Thread Group -> Add -> Assertions -> Response Assertion


Response Assertion Pane displays as below figure:




Step 2) Add Pattern to test

When you send a request to the Google server, it may return one of these response codes:
404: Not Found
200: Server OK
302: Web server redirect to another page. This usually happens when you access google.com from outside the USA: Google redirects to a country-specific website. As shown below, google.com redirects to google.co.in for users in India.


Assume that you want to verify that the google.com web server's response code contains the pattern 302.
In Response Field To Test, choose Response Code.
On the Response Assertion Panel, click Add -> a new blank entry displays -> enter 302 in Pattern to Test.


Step 3) Add Assertion Results
Right click Thread Group, Add -> Listener -> Assertion Results



Step 4) Run your test
Click on Thread Group -> Assertion Result
When you are ready to run the test, click the Run button on the menu bar, or use the shortcut Ctrl+R.
The test result will display in the Assertion Results pane. If the Google server response code contains the pattern 302, the test case passes. You will see the message displayed as follows:


Now go back to the Response Assertion Panel and change the Pattern to Test from 302 to 500.


Because the Google server response code doesn't contain this pattern, you will see the test case fail as follows:


Troubleshooting:


Any issue while running the above scenarios ... do the following:

1. Check whether you are connecting to internet via a proxy. If yes, remove the proxy.
2. Open a new instance of JMeter
3. Open the AssertionTestPlan.jmx in JMeter
4. Click on Thread Group -> Assertion Result
5. Run the Test



For More about JMeter follow on http://testingattheedge.blogspot.in

Thursday, March 17, 2016

Parameterization in JMeter





Parameterization in JMeter:

We parameterize the input to run the test with a different set of data for each user. We will provide the data in a file and supply it as input for a field.

For this we will create a .csv file in Excel and save it. In this example, we will parameterize four fields: fromPort, toPort, passFirst0 and passLast0.

The details to corresponding fields are as follows:
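
For illustration, the .csv contents might look like the following (the name values here are made up; the port values match the two records verified at the end of this post):

Frankfurt,London,John,Smith
London,New York,Jane,Doe

Each row supplies one user's values for fromPort, toPort, passFirst0 and passLast0, via the variables mapped in the CSV Data Set Config.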





We have our requests as shown below:


To add the data to the test we need to add the config element as below:
Right Click on Thread Group -> Add -> Config Element -> CSV Data Set Config



The CSV Data Set details were entered as follows:



Parameters for the CSV Data Set config is as below:


Now once the CSV Data Set is configured, we need to set the variables at appropriate position as shown:



The value is set as ${Variablename} for the required fields. For example here passFirst0 is set as ${FirstName} and passLast0 is set as ${LastName}.

Let us now verify that the values are passed correctly to the required parameters from the csv file. To do this we will add a Listener -> View Results Tree as below:


In the Thread Group, set the number of threads (users) to 2 and run. We see that there are two sample results, as we ran for two users. Checking the first thread, we see that fromPort is Frankfurt and toPort is London, which is the same as the first record of our .csv file.


Now, let us verify the second thread's results: we see that fromPort is London and toPort is New York. So for two different customers, two different inputs are provided.


In the same way we can check the customer's first name and last name as well.