Performance Testing in the Cloud

SaaS (Software as a Service) is the key concept behind the cloud: a vendor has every resource you want available over the cloud, and you simply pay for what you need and avail the services.

Definition from Wikipedia:

“In Computer science, cloud computing describes a type of outsourcing of computer services, similar to the way in which the supply of electricity is outsourced. Users can simply use it. They do not need to worry where the electricity is from, how it is made, or transported. Every month, they pay for what they consumed.”

  • YOU ASK your vendor for the required resources (system, CPU, RAM, memory, bandwidth, geographical location of servers) and YOU GET them.

  • YOU ASK your vendor for your service to be billed hourly, weekly, monthly or for any specific duration and YOU GET it.

  • YOU ASK your vendor to charge your service on a per-script-execution basis and YOU GET it.

Cloud computing has created a trend of outsourcing computing, storage and networking in order to create a more dynamic and efficient infrastructure. It has almost all the characteristics needed to solve the challenges faced in performance testing: the more realistic the script and its execution, the more accurate the results will be. So the cloud has become an integral part of performance testing.

Some of the Cloud Providers for Performance Testing:

http://www.soasta.com/

https://www.pronq.com/software/stormrunner-load

http://blazemeter.com/

http://www.neotys.com/product/neotys-cloud-platform.html

New Features in JMeter 2.11

Apache JMeter has had six successful releases within a gap of only two years. With every release we see a lot of new features, enhancements, non-functional changes and bug fixes. This shows the community's continuous involvement in enhancing the product to its best.

Let's pick some of the good features added to JMeter 2.11:

1) Summariser in non-GUI mode:

In previous versions of JMeter you needed to add a listener named Generate Summary Results to the .jmx file so that you could view the summary in non-GUI mode, by default every 3 minutes (180 sec).

With 2.11 you can view the summary in non-GUI mode by default. You can also tweak the summariser output by editing the following properties in the jmeter.properties file.

jmeter.properties:

#---------------------------------------------------------------------------
# Summariser - Generate Summary Results - configuration (mainly applies to non-GUI mode)
#---------------------------------------------------------------------------
# Define the following property to automatically start a summariser with that name
# (applies to non-GUI mode only)
summariser.name=summary
# interval between summaries (in seconds) default 30 seconds
summariser.interval=30
# Write messages to log file
summariser.log=true
# Write messages to System.out
summariser.out=true

summariser.name=summary

By default the summariser name is “summary”. You can edit it and give it any name of your own convenience. If no name is defined, the summariser is disabled.

summariser.interval=30

30 seconds is the default interval, and it can be edited.

summariser.log=true

If it is true then the summariser information is appended to the jmeter.log file. Changing it to false stops the summariser from adding anything to the log.

summariser.out=true

Decides whether to show the summariser information to standard output.

You can view the summariser output in the command prompt as shown below.

[Screenshot: summariser output in the command prompt]
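
For reference, a typical non-GUI run can be started from the command prompt as below. This is only a minimal sketch: test.jmx and result.jtl are placeholder file names, and the -J flag simply overrides a jmeter.properties value for that run.

jmeter -n -t test.jmx -l result.jtl -Jsummariser.interval=10

Here -n selects non-GUI mode, -t points to the test plan and -l names the results file; the summariser output then appears in the console at the configured interval.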

2) Introduction of “Save as Test Fragment”

[Screenshot: the “Save as Test Fragment” option]

What is a Test Fragment?
A complex test plan is very tough to maintain, debug and execute, and code re-usability with a complex script is almost 0%. So it is always a best practice to split your test plan by functional components. For a banking application, say, you can create a .jmx file for:
1. Login and logout
2. Account summary & mini statement
3. Transaction
4. Add beneficiary
5. Registration, etc.

So the main idea of a Test Fragment is to split a complex test script into functional components for better maintenance and code re-usability.

So with JMeter 2.11 you can select a group of elements and save them as a Test Fragment. Later, just grab them and merge them into whichever script you want using the Include Controller or Module Controller, as sketched below. I found it a great time saver, since the script/code can be reused efficiently.
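
A rough sketch of how a saved Test Fragment might be reused (all names here are illustrative, not from the original post): a fragment holding the login/logout samplers can be referenced from any plan through a Module Controller (for a fragment inside the same plan) or an Include Controller (for an external .jmx file).

  • Test Plan
    #  Thread Group
    ** Module Controller  -> targets the "Login Fragment" below
    ** Include Controller -> includes AccountSummary.jmx
    #  Test Fragment (Login Fragment)
            – Login.aspx
            – Logout.aspx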

3) Instant XPath Tester:

[Screenshot: the XPath Tester view in View Results Tree]

I personally feel the RegExp Tester is a great help for testing a regular expression before applying it to the script. JMeter 2.11 adds one more such helper, the XPath Tester: you can take any XPath expression, try it in the View Results Tree and check whether it is appropriate.
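
As a quick illustration (the expression and element names are only examples, not from the original post): run a sampler once, open its response in View Results Tree, switch the response view to XPath Tester and try an expression such as

//div[@id='accountSummary']//td[1]/text()

If the expression is right, the matched values are listed immediately; if not, you can refine it before wiring it into an XPath Extractor.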

Checkpoints for JAnalyser 2.0

  • Change in the JMeter properties file:

          You need to change some of the properties in the JMeter properties file (apache-jmeter-2.9\bin\jmeter.properties):


# Results file configuration
#------------------------------------------------------
jmeter.save.saveservice.output_format=csv
jmeter.save.saveservice.assertion_results_failure_message=true
jmeter.save.saveservice.default_delimiter=,
#------------------------------------------------------
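
With those properties in place, a results file in the expected CSV format can be produced from a non-GUI run, for example (the file names are just placeholders):

jmeter -n -t bankingTest.jmx -l result.csv

The result.csv file, zipped as described below, is what gets uploaded to JAnalyser.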

  • Supported files [select the correct format from the dropdown and upload]:

    • Jmeter Result in CSV Format

    • Jmeter Result in XML Format

    • Jmeter Log File

    • Customized Jmeter Results

  • Mandatory Files:

    • Jmeter CSV file

    • Jmeter XML file

    • Jmeter Log File[OPTIONAL]

  • Upload the file in ZIP format; unzipped files will be shown as “unsupported file”. [Do NOT zip the folder; zip only the file and upload it.]

  • Select the Correct TimeZone

  • ZIP file should not be Greater than 1 MB.

  • Checkpoints for the JMeter CSV file (a sample header row is shown after this list):

    • The CSV file must have a header row.

    • The delimiter for the CSV file must be a COMMA (,).

    • The timestamp in the first column must be a Unix timestamp.

  • Supported Browsers:

    • Mozilla Firefox

    • Google Chrome

    • Safari

    • Internet Explorer[Limited Access]

  • Click on the Refresh icon if you do not get the test result and it is still showing Pending.

  • By default all files are stored on a secured Amazon S3 storage server.
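
For reference, a JMeter CSV result that satisfies the checkpoints above starts with a header row like the following. This is purely illustrative; the exact columns depend on your jmeter.save.saveservice settings.

timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success,bytes,Latency
1417430400000,532,Login.aspx,200,OK,Thread Group 1-1,text,true,6186,241

The first column is the Unix timestamp (in milliseconds, as JMeter writes it) and the delimiter is a comma, matching the checkpoints above.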

 

Features of JAnalyser 2.0

A Solution for JMeter Result Analysis

Analysis of performance metrics plays a key role in performance testing, and for better analysis we need a better report. JAnalyser accomplishes this task very efficiently. It is equipped with rich analysis features and is powerful enough to provide on-demand services for corporate users.

JAnalyser makes JMeter complete by bridging the gap between JMeter results and management reports.

Features of JAnalyser:

  • Creates a detailed analysis of JMeter results and external files such as PerfMon output.
  • You can analyse the performance results in both CSV and XML format.
  • You can create your own CSV file and upload it to view the result. This is a new feature added in JAnalyser 2.0.
  • You can upload a log file from your system, such as the performance log of your load generator machine, and analyse the result.
  • Merging of JMeter results: I personally found this feature very useful. You can overlay two graphs and show them to your client, an easier way to compare runs and prepare an analysis report.
  • Filter results: you can filter results by thread group and time duration.
  • You can generate the analysis report in PDF and HTML format.
  • You can share the test result.

Finding Bottlenecks using Summary Report

[Screenshot: JMeter Summary Report]

Let's find out how to calculate the throughput:

Add a Summary Report listener to the thread group/request you are sending; there you can see the report shown above.
Throughput = 1 / [Total Time]

where Total Time = [Avg. Bytes (in KB)] * [1/(KB/sec)].
Here [1/(KB/sec)] gives the time consumed, in seconds, to transfer 1 KB of data.
So for one sample, the time to transfer its average size is [Avg. Bytes (in KB)] * [1/(KB/sec)],
and the throughput for one sample = 1 / ([Avg. Bytes (in KB)] * [1/(KB/sec)]).

E.g. with Avg. Bytes = 6.041 KB and KB/sec = 90:
Total Time = 6.041 * (1/90) = 0.0671 sec
Throughput = 1 / 0.0671 ≈ 14.9/sec

Now, to decide what is at fault:

Response time is measured from just before the request is sent until the last byte of the response is received.

Here, throughput is the measure of server performance.

Case 1: the response time for the request is high and the throughput is much lower. This signifies that the server is not capable enough to sustain/execute the requests, which calls for tuning on the server side.

Case 2: the response time is high but the throughput, in comparison to the response time, is much higher. This implies that the request is taking more time because of a fault in the application; we should not blame the server processing time. Now it's time to consider other factors and tune them to make the application perform better.
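
As a rough illustration (the numbers below are made up, not from any real test):

Case 1: Avg. response time 5 sec, throughput 2/sec  -> the server cannot keep up; tune the server side.
Case 2: Avg. response time 5 sec, throughput 40/sec -> the server is keeping up; look at the application and other factors.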

Why Do Performance Testing?


In this competitive world everyone wants great user/client satisfaction and excellent feedback from users. Apart from design, functionality and security, the responsiveness of the application plays a tremendous role in user/client satisfaction.

Nobody likes to suffer a performance issue (response time, stability, consistency) on a webpage; they would rather navigate to a competitor's site. The consequences are as below:

  • No doubt, a delay in response time causes:
                  loss in revenue
                  damage to brand image
  • There is a significant correlation between page-load time and the likelihood of a user converting, so slow pages mean a loss in user conversions.
  • Higher response times negatively impact user satisfaction.
  • Added to that, the site also loses Google ranking and moves down in the Google search results. 😦

A few survey reports and case studies:


According to the Aberdeen Group, a 1-second delay in page-load time equals

  • 11% fewer page views
  • 16% decrease in customer satisfaction
  • 7% loss in conversions.

Source: Aberdeen Group – The Performance of Web Applications: Customers are Won or Lost in One Second (2008)

 The Akamai study, published in September 2009, interviewed 1,048 online shoppers and found that:

  • 47% of people expect a web page to load in two seconds or less.
  • 40% will abandon a web page if it takes more than three seconds to load.
  • 52% of online shoppers claim that quick page loads are important for their loyalty to a site.
  • 14% will start shopping at a different site if page loads are slow, 23% will stop shopping or even walk away from their computer.
  • 64% of shoppers who are dissatisfied with their site visit will go somewhere else to shop next time.

Source: http://www.akamai.com/html/about/press/releases/2009/press_091409.html

We can see the consequences of having performance issues in every aspect. This is just the outside view of what happens when we do not do performance tuning of our application.

Now we will drill down to the basic reasons we do performance testing…

Determine the Readiness of the application for release

  • You have a reflection (might not be 100%) of your production environment as your test environment. It enables you to execute the test with different scenarios and load levels and predict/estimate the performance characteristics.
  • Stakeholders can decide upon the following points:

                       Readiness of the application for release
                       Degree of end-user satisfaction
                       Degree of stability and scalability of the application when it undergoes a massive increase in user base or volume of data

  • Performance testing gives you a platform to decide the level of performance tuning required for the application before it goes for release.
  • In the future there is a chance of an increase in user base or volume of data, which might create scalability and stability issues, leading to loss of revenue and damaged brand credibility due to user dissatisfaction. Performance testing helps in predicting the cost involved in design/infrastructure rebuilds, performance tuning, etc.

Determine the Readiness of Infrastructure 

  • Evaluate the current infrastructure and predict its capability.
  • Determine the degree of scalability it can handle and the cost associated with that.
  • There may be different system configurations that can accomplish the same task, so a performance evaluation of each system provides a comparison from which to choose the best.

Monitor the application performance

  • Creating a baseline for the application's performance under different scenarios.
  • Monitoring the deviation of the performance characteristics from the baseline after adding new functionality.
  • Plotting comparative data between the application's current and desired performance characteristics.

Improve efficiency of performance tuning

  • Analyse the performance of the application in different scenarios (load, stress, spike, endurance testing, etc.).
  • Provide metrics on the speed, throughput, scalability, resource utilisation and stability of the product or application before the release, and identify the bottlenecks.
  • Provide an opportunity to decide on fixing the bottlenecks.

Courtesy: http://msdn.microsoft.com/en-us/library/bb924375.aspx

Controllers in JMeter

ONCE ONLY CONTROLLER

The child requests below this controller will be executed only once. In the next iteration JMeter will skip the requests under the ONCE ONLY CONTROLLER.

  • Test Plan
    #  Thread Group
    ** Once Only Controller
            – Login.aspx
    ** Interleave Controller
            – SearchStudent.aspx
            – RegisterNewStudent.aspx
            – AssignMark.aspx
    # View Result Tree

Here Login.aspx will be executed only once; in the next iteration Login.aspx will be skipped.

INTERLEAVE CONTROLLER

It is one of the most important logic controllers in JMeter. While doing performance testing we do not know how a user will navigate through the web application; different users will navigate according to their own requirements.
Example:
A user will log in. Then they can search for a student in the application, or they may go for the registration of a new student.
But a user cannot do both simultaneously, hence we put an INTERLEAVE CONTROLLER to move through every possible navigation.

Example:

  • Test Plan
    #  Thread Group
    ** Once Only Controller
             – Login.aspx
    **Interleave Controller
            – SearchStudent.aspx
            – RegisterNewStudent.aspx
            – AssignMark.aspx

Then the execution will be like (one interleave child per thread group iteration):

  1. Login.aspx
  2. SearchStudent.aspx
  3. RegisterNewStudent.aspx
  4. AssignMark.aspx
  5. SearchStudent.aspx
  6. RegisterNewStudent.aspx

Random Controller 

It does the same as the interleave controller but in a random fashion.

  • Test Plan
    #  Thread Group
    ** Once Only Controller
            – Login.aspx
    ** Random Controller
            – SearchStudent.aspx
            – RegisterNewStudent.aspx
            – OpenExistingStudent.aspx
            – AssignMark.aspx
    ** View Result Tree

Then the execution will be like (one randomly chosen child per pass; the order will differ on every run):

  1. Login.aspx
    -------------------------
  2. OpenExistingStudent.aspx
    -------------------------
  3. SearchStudent.aspx
    -------------------------
  4. AssignMark.aspx
    -------------------------
  5. RegisterNewStudent.aspx

Random Order Controller

It picks up all the requests under it (child requests) and executes them in a random order.

  • Test Plan
    #  Thread Group
    ** Once Only Controller
            – Login.aspx
    ** Random Order Controller
           – SearchStudent.aspx
           – RegisterNewStudent.aspx
           – OpenExistingStudent.aspx
           – AssignMark.aspx
    ** View Result Tree

Then the execution will be like (all children in a single pass, in a random order):

  1. Login.aspx
    -------------------------
  2. OpenExistingStudent.aspx
  3. SearchStudent.aspx
  4. RegisterNewStudent.aspx
  5. AssignMark.aspx

Random Order Controller Vs Random Controller

The Random Controller picks only one request from its children and then switches to the next node, whereas the Random Order Controller sends all of its child requests in one go, but in a random order.

LOOP CONTROLLER

If we put an HTTP request under the Loop Controller, that specific sample will be sent to the server in a loop for the configured number of times. We can do an ENDURANCE TEST using this logic controller, as sketched below.
TIP: The difference between the loop count in the Thread Group and in the Loop Controller is that the former applies to the entire thread group, while the latter is specific to the HTTP requests that are its children.
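
A minimal sketch of an endurance-style plan using a Loop Controller (the request names and loop count are only illustrative):

  • Test Plan
    #  Thread Group (Loop Count: 1)
    ** Once Only Controller
            – Login.aspx
    ** Loop Controller (Loop Count: 500)
            – AccountSummary.aspx
    #  View Result Tree

Here AccountSummary.aspx is sent 500 times per thread, while Login.aspx and the thread group itself run only once.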

Transaction Controller

The Transaction Controller measures the overall time taken to perform its nested elements (all the processing within the controller's scope) and is used to combine a group of samples into one combined result. Suppose you have several actions and you are more concerned about how long they all take together than about the result of each one. You would set it up like this:

  • Test Plan
    #  Thread Group
    ** Once Only Controller
            – Login.aspx
    ** Transaction Controller
           – SearchStudent.aspx
           – RegisterNewStudent.aspx
           – OpenExistingStudent.aspx
           – AssignMark.aspx
    ** View Result Tree

Then JMeter will bind the child requests under the Transaction Controller and show the overall time taken by them.

Simple Controller

This controller only helps in grouping the child samplers together.

Recording Controller

The Recording Controller is used to record the individual requests sent through the proxy server.

* Add the proxy server (HTTP Proxy Server).
* Add a Recording Controller and set it as the proxy server's target controller.

If we change the proxy setting in the browser to localhost (with the proxy server's port), then all the requests we send through that proxy will be recorded under the Recording Controller, as sketched below.
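
A minimal sketch of the recording setup (the port and element placement are typical defaults, shown only for illustration):

  • WorkBench
    #  HTTP Proxy Server (Port: 8080, Target Controller: Recording Controller)
  • Test Plan
    #  Thread Group
    ** Recording Controller

Point the browser's proxy to localhost:8080, start the proxy server, and every request made in the browser is captured under the Recording Controller.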