Basic Overview of How Git Works

Here is a basic overview of how Git works:

  1. Create a “repository” (project) with a Git hosting tool (like Bitbucket)
  2. Copy (or clone) the repository to your local machine
  3. Add a file to your local repo and “commit” (save) the changes
  4. “Push” your changes to your master branch
  5. Make a change to your file with a Git hosting tool and commit
  6. “Pull” the changes to your local machine
  7. Create a “branch” (version), make a change, commit the change
  8. Open a “pull request” (propose changes to the master branch)
  9. “Merge” your branch to the master branch
git init

The word init means initialize. The command sets up all the tools Git needs to begin tracking changes made to the project.
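A minimal sketch (the directory name my-project is just an illustration):

$ mkdir my-project
$ cd my-project
$ git init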

git clone creates a local copy of a project that already exists remotely. The clone includes all the project’s files, history, and branches.
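For example, with a placeholder repository URL:

$ git clone https://bitbucket.org/your-team/my-project.git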

git branch shows the branches being worked on locally.
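A quick sketch (the branch name new-feature is hypothetical):

$ git branch                # lists local branches; the current one is marked with *
$ git branch new-feature    # creates a new branch named new-feature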

We have a Git project. A Git project can be thought of as having three parts:

  1. Working Directory: where you’ll be doing all the work: creating, editing, deleting and organizing files
  2. Staging Area: where you’ll list changes you make to the working directory
  3. Repository: where Git permanently stores those changes as different versions of the project

git status

You can check the status of modified files in the working directory with “git status”.

For Git to start tracking a file, it needs to be added to the staging area. We do that with:

git add filename
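For instance, with a hypothetical new file named notes.txt:

$ git status                # shows notes.txt as untracked
$ git add notes.txt         # adds notes.txt to the staging area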

We can check the differences between the working directory and the staging area with:

git diff filename
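Continuing with the hypothetical notes.txt:

$ echo "one more line" >> notes.txt
$ git diff notes.txt        # shows edits not yet staged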

A commit is the final step in our Git workflow. It permanently stores changes from the staging area inside the repository.

git commit -m "Added file line"

Often with Git, you’ll need to refer back to an earlier version of a project. Commits are stored chronologically in the repository and can be viewed with:

git log
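Two common variations, in case the full log is too verbose:

$ git log --oneline         # one commit per line
$ git log -n 5              # only the five most recent commits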

 

How to Make Changes to Your Last Commit:

One of the common undos takes place when you commit too early and possibly forget to add some files, or you mess up your commit message. If you want to try that commit again, you can run commit with the --amend option:

$ git commit --amend

This command takes your staging area and uses it for the commit. If you’ve made no changes since your last commit (for instance, you run this command immediately after your previous commit), then your snapshot will look exactly the same and all you’ll change is your commit message.

The same commit-message editor fires up, but it already contains the message of your previous commit. You can edit the message the same as always, but it overwrites your previous commit.

As an example, if you commit and then realize you forgot to stage the changes in a file you wanted to add to this commit, you can do something like this:

$ git commit -m 'initial commit'
$ git add forgotten_file
$ git commit --amend

After these three commands, you end up with a single commit — the second commit replaces the results of the first.

How to Unstage a Staged File:

git reset HEAD <file>
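For example, with the hypothetical notes.txt (note that Git 2.23+ also offers git restore --staged as an equivalent):

$ git reset HEAD notes.txt            # unstages notes.txt
$ git restore --staged notes.txt      # equivalent command on Git 2.23 or later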

How to Unmodify a Modified File:

How can you easily unmodify it — revert it back to what it looked like when you last committed (or initially cloned, or however you got it into your working directory)?

git checkout -- <file>

It discards the changes to that file in the working directory, undoing everything done since the last commit/clone.

You should also realize that this is a dangerous command: any changes you made to that file are gone — you just copied another file over it. Don’t ever use this command unless you absolutely know that you don’t want the file.
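If you are sure, a sketch with the hypothetical notes.txt (Git 2.23+ also provides git restore for this):

$ git checkout -- notes.txt           # discards unstaged edits to notes.txt
$ git restore notes.txt               # equivalent command on Git 2.23 or later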


Performance Testing in the Cloud

SaaS (Software as a Service) is the key concept behind the cloud: a vendor has every resource you want available over the cloud, and you simply pay for what you need and avail yourself of the services.

Definition from Wikipedia:

“In computer science, cloud computing describes a type of outsourcing of computer services, similar to the way in which the supply of electricity is outsourced. Users can simply use it. They do not need to worry where the electricity is from, how it is made, or transported. Every month, they pay for what they consumed.”

  • YOU ASK your vendor for the resources you need (system, CPU, RAM, memory, bandwidth, geographical location of servers); YOU GET them.

  • YOU ASK your vendor for the service to run hourly, weekly, monthly or for any specific duration; YOU GET it.

  • YOU ASK your vendor to charge on a per-script-execution basis; YOU GET it.

Cloud computing has created a trend of outsourcing computing, storage and networking in order to create a more dynamic and efficient infrastructure. It has almost all the characteristics needed to solve the challenges performance testing faces: the more realistic the script execution is, the more accurate the results will be. So the cloud has become an integral part of performance testing.

Some of the Cloud Providers for Performance Testing:

http://www.soasta.com/

https://www.pronq.com/software/stormrunner-load

http://blazemeter.com/

http://www.neotys.com/product/neotys-cloud-platform.html

New Features in JMeter 2.11

Apache JMeter has had six successful releases in a span of only two years. Every release brings a host of new features, enhancements, non-functional changes and bug fixes, which shows the community's continuous effort to make the product the best it can be.

Let's pick out some of the good features added in JMeter 2.11:

1) Summariser in Non-GUI Mode:

In previous versions of JMeter you needed to add a listener named Generate Summary Results to the JMX file in order to view a summary in non-GUI mode, reported every 3 minutes (180 sec) by default.

With 2.11 you can view the summary in non-GUI mode by default. You can also tweak the summary output by editing the following properties in the jmeter.properties file.

Jmeter Properties:

#---------------------------------------------------------------------------
# Summariser - Generate Summary Results - configuration (mainly applies to non-GUI mode)
#---------------------------------------------------------------------------
# Define the following property to automatically start a summariser with that name
# (applies to non-GUI mode only)
summariser.name=summary
# interval between summaries (in seconds) default 30 seconds
summariser.interval=30
# Write messages to log file
summariser.log=true
# Write messages to System.out
summariser.out=true

summariser.name=summary

By default the summariser name is “summary”. You can change it to whatever name is convenient. An empty name disables the summariser.

summariser.interval=30

30 seconds is the default interval between summaries; it can be edited.

summariser.log=true

If true, all summariser information is appended to the jmeter.log file; setting it to false keeps the summariser information out of the log.

summariser.out=true

Decides whether to print the summariser information to standard output.
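These properties can also be overridden for a single run with JMeter's -J command-line flag, without editing jmeter.properties (test.jmx and results.jtl are placeholder file names):

$ jmeter -n -t test.jmx -l results.jtl -Jsummariser.interval=10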

You can see the summariser output at the command prompt, as below.

[Screenshot: summariser output in non-GUI mode]

2) Introduction of “Save as Test Fragment”:

[Screenshot: the Save as Test Fragment option]

What is a Test Fragment?
A complex test plan is very tough to maintain, debug and execute as a single script, and code re-usability with a complex script is almost zero. So it is always best practice to split your test plan by functional components. For a banking application, say, you could create a separate JMX file for:
1. Login and logout
2. Account summary & mini statement
3. Transaction
4. Add beneficiary
5. Registration, etc.

So the main idea of a Test Fragment is to split a complex test script into functional components for better maintenance and code re-usability.

With JMeter 2.11 you can select a group of elements and save them as a Test Fragment. Later you can simply grab the fragment and merge it into whichever script you want using an Include or Module Controller. I found it a real time-saver that lets you reuse script/code efficiently.

3) Instant XPath Tester:

[Screenshot: the XPath Tester in the View Results Tree]

I have personally found the RegExp Tester a great help for testing a regular expression before applying it to a script. JMeter 2.11 adds a companion feature: the XPath Tester. You can take any XPath expression, put it in the View Results Tree and check whether it matches what you expect.
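For instance, assuming the sampled response is an HTML/XHTML page, you could paste an expression like this into the XPath Tester to pull out the text of every link:

//a[@href]/text()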

Checkpoints for JAnalyser 2.0

  • Change in Jmeter Properties File:

You need to change some of the properties in the JMeter properties file (apache-jmeter-2.9\bin\jmeter.properties):


# Results file configuration
#---------------------------------------------------
jmeter.save.saveservice.output_format=csv
jmeter.save.saveservice.assertion_results_failure_message=true
jmeter.save.saveservice.default_delimiter=,
#---------------------------------------------------

  • Supported Files: [select the correct format from the dropdown and upload]

    • JMeter result in CSV format

    • JMeter result in XML format

    • JMeter log file

    • Customized JMeter results

  • Mandatory Files:

    • JMeter CSV file

    • JMeter XML file

    • JMeter log file [OPTIONAL]

  • Upload every file in ZIP format; unzipped files are reported as unsupported. [Do NOT zip the folder; zip only the file and upload it.]

  • Select the Correct TimeZone

  • ZIP file should not be Greater than 1 MB.

  • Checkpoints for the JMeter CSV file (see the sample header after this list):

    • The CSV file must have a header row.

    • The delimiter for the CSV file must be a COMMA (,).

    • The first column must contain a Unix timestamp.

  • Supported Browsers:

    • Mozilla Firefox

    • Google Chrome

    • Safari

    • Internet Explorer [Limited Access]

  • Click the Refresh icon if you do not see the test result and the status still shows Pending.

  • By default, all files are stored on Amazon S3 secure storage servers.
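For reference, a typical header row produced by JMeter's CSV save service looks like the line below; the exact columns depend on your jmeter.save.saveservice settings:

timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success,bytes,grpThreads,allThreads,Latency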

 

Features of JAnalyser 2.0

A Solution for JMeter Result Analysis

Analysis of performance metrics plays a key role in performance testing, and better analysis needs a better report. JAnalyser accomplishes this task very efficiently: it is equipped with rich analysis features and is powerful enough to provide on-demand services for corporate users.

JAnalyser completes JMeter by closing the gap between JMeter results and management reports.

Features of JAnalyser:

  • Creates a detailed analysis of JMeter results and of external files such as PerfMon data files.
  • You can analyse performance results in both CSV and XML format.
  • You can create your own CSV file and upload it to view the result. This is a new feature added in JAnalyser 2.0.
  • You can upload a log file from your system, such as the performance log of your load-generator machine, and analyse the results.
  • Merging of JMeter results: I personally found this feature very useful. You can combine two graphs and show them to your client, an easy way to present a comparison of graphs and prepare an analysis report.
  • Filter results: you can filter results by Thread Group and by time duration.
  • You can generate the analysis report in PDF and HTML format.
  • You can share the test result.

Finding Bottlenecks Using the Summary Report

[Screenshot: a JMeter Summary Report]

Let's see how to calculate the throughput:

Add a Summary Report to the Thread Group/request you are sending; it gives you a report like the one above. Then:

Throughput = 1 / [Total Time]

where Total Time = [Avg. Bytes, in KB] * [1 / (KB/sec)].

The factor 1/(KB/sec) is the time, in seconds, consumed to transfer 1 KB of data. Multiplying it by the average size of one sample gives the transfer time for that sample, so the throughput for one sample is 1 / ([Avg. Bytes] * [1/(KB/sec)]).

E.g.:
Throughput = 1 / [Total Time]
Total Time = 6.041 KB * (1/90) = 0.0671 sec
Throughput = 1 / 0.0671 ≈ 14.9/sec
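The same arithmetic as a minimal shell sketch (values taken from the example above):

$ awk 'BEGIN { kb = 6.041; rate = 90;
               total = kb / rate;
               printf "Total time: %.4f sec, Throughput: %.2f/sec\n", total, 1/total }'
Total time: 0.0671 sec, Throughput: 14.90/sec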

Now, to decide which side is at fault:

Response time is measured from just before the request is sent until the last byte of the response is received.

Throughput, here, is the measure of server performance.

Case 1: The response time for a request is high, but the throughput is much lower. This signifies that the server is not capable enough to sustain/execute the requests, which calls for tuning on the server side.

Case 2: The response time is high, but the throughput relative to the response time is much higher. This implies that the request is taking more time because of a fault in the application; we should not blame server processing time for it. It is time to consider other factors and tune them to make the application perform better.

Why Do Performance Testing?


In this competitive world, everyone wants great user/client satisfaction and excellent feedback from users. Apart from design, functionality and security, the responsiveness of the application plays a tremendous role in user/client satisfaction.

Nobody likes to suffer performance issues (response time, stability, consistency) on a webpage; they would rather navigate to a competitor's site. The consequences are as below:

  • No doubt, delays in response time cause loss of revenue and damage to the brand image.
  • There is a significant correlation between page-load time and the likelihood of a user converting, so slow pages mean lost conversions.
  • Slow response times negatively impact user satisfaction.
  • On top of that, the site loses its Google ranking and moves down in the Google search results. 😦

A few survey reports and case studies:


According to the Aberdeen Group, a 1-second delay in page-load time equals

  • 11% fewer page views
  • 16% decrease in customer satisfaction
  • 7% loss in conversions.

Source: Aberdeen Group – The Performance of Web Applications: Customers are Won or Lost in One Second (2008)

 The Akamai study, published in September 2009, interviewed 1,048 online shoppers and found that:

  • 47% of people expect a web page to load in two seconds or less.
  • 40% will abandon a web page if it takes more than three seconds to load.
  • 52% of online shoppers claim that quick page loads are important for their loyalty to a site.
  • 14% will start shopping at a different site if page loads are slow; 23% will stop shopping or even walk away from their computer.
  • 64% of shoppers who are dissatisfied with their site visit will go somewhere else to shop next time.

Source: http://www.akamai.com/html/about/press/releases/2009/press_091409.html

We can see the consequences of performance issues in every aspect above. And this is just the outside view of what happens when we do not performance-tune our application.

Now we will drill down to the basic reasons we do performance testing…

Determine the Readiness of the application for release

  • You have a reflection (though it might not be 100% faithful) of your production environment in your test environment. It enables you to execute tests with different scenarios and load levels and to predict/estimate the performance characteristics.
  • Stakeholders can decide upon the following:

                       readiness of the application for release,
                       the degree of end-user satisfaction, and
                       the degree of stability and scalability of the application when it undergoes a massive increase in user base or volume of data.

  • Performance testing gives you a platform to decide the level of performance tuning required for the application before it goes out in a release.
  • In the future there is a chance of an increase in user base or volume of data, which might create scalability and stability issues, leading to lost revenue and a damaged brand image due to user dissatisfaction. Performance testing helps in predicting the costs involved in design/infrastructure rebuilds, performance tuning, and so on.

Determine the Readiness of Infrastructure 

  • Evaluate the current infrastructure and predict its capacity.
  • Determine the degree of scalability it can handle and the cost associated with that.
  • There may be several different system configurations that can accomplish the same task, so a performance evaluation of each configuration provides a comparison and lets you choose the best.

Monitor the application performance

  • Creating a baseline for the application's performance under different scenarios.
  • Monitoring the deviation of the performance characteristics from that baseline after adding new functionality.
  • Plotting comparative data between the application's current and desired performance characteristics.

Improve efficiency of performance tuning

  • Analyse the performance of the application in different scenarios (load, stress, spike, endurance testing, etc.).
  • Provide metrics on the speed, throughput, scalability, resource utilization and stability of the product or application before release, and identify the bottlenecks.
  • Provide an opportunity to decide on fixing the bottlenecks.

Courtesy: http://msdn.microsoft.com/en-us/library/bb924375.aspx