Wednesday, December 31, 2008
"Even if a snake is not venomous, it should pretend to be venomous."
"The biggest guru-mantra is: Never share your secrets with anybody! It will destroy you."
"There is some self-interest behind every friendship. There is no friendship without self-interests. This is a bitter truth."
"Before you start some work, always ask yourself three questions: Why am I doing it? What might the results be? Will I be successful? Only when you think deeply and find satisfactory answers to these questions should you go ahead."
"As soon as the fear approaches near, attack and destroy it."
"The world's biggest power is the youth and beauty of a woman."
"Once you start working on something, don't be afraid of failure and don't abandon it. People who work sincerely are the happiest."
"The fragrance of flowers spreads only in the direction of the wind. But the goodness of a person spreads in all directions."
"Whores don't live in the company of poor men, citizens never support a weak company and birds don't build nests on a tree that doesn't bear fruit."
"God is not present in idols. Your feelings are your god. The soul is your temple."
"A man is great by deeds, not by birth."
"Never make friends with people who are above or below you in status. Such friendships will never give you any happiness."
"Treat your kid like a darling for the first five years. For the next five years, scold them. By the time they turn sixteen, treat them like a friend. Your grown up children are your best friends."
"Books are as useful to a stupid person as a mirror is useful to a blind person."
"Education is the best friend. An educated person is respected everywhere. Education beats beauty and youth."
Friday, December 12, 2008
If I could offer you only one tip for the future, sunscreen would be it. The long term benefits of sunscreen have been proved by scientists whereas the rest of my advice has no basis more reliable than my own meandering experience…I will dispense this advice now.
Enjoy the power and beauty of your youth; oh, never mind; you will not understand the power and beauty of your youth until they have faded. But trust me, in 20 years you'll look back at photos of yourself and recall in a way you can't grasp now how much possibility lay before you and how fabulous you really looked… You're not as fat as you imagine.
Don't worry about the future; or worry, but know that worrying is as effective as trying to solve an algebra equation by chewing bubblegum. The real troubles in your life are apt to be things that never crossed your worried mind; the kind that blindside you at 4pm on some idle Tuesday.
Do one thing every day that scares you.
Don't be reckless with other people's hearts, don't put up with people who are reckless with yours.
Don't waste your time on jealousy; sometimes you're ahead, sometimes you're behind…the race is long, and in the end, it's only with yourself.
Remember the compliments you receive, forget the insults; if you succeed in doing this, tell me how.
Keep your old love letters, throw away your old bank statements.
Don't feel guilty if you don't know what you want to do with your life… the most interesting people I know didn't know at 22 what they wanted to do with their lives; some of the most interesting 40-year-olds I know still don't.
Get plenty of calcium.
Be kind to your knees, you'll miss them when they're gone.
Maybe you'll marry, maybe you won't, maybe you'll have children, maybe you won't, maybe you'll divorce at 40, maybe you'll dance the funky chicken on your 75th wedding anniversary… whatever you do, don't congratulate yourself too much or berate yourself either – your choices are half chance; so are everybody else's. Enjoy your body, use it every way you can… don't be afraid of it, or of what other people think of it; it's the greatest instrument you'll ever own.
Dance…even if you have nowhere to do it but in your own living room.
Read the directions, even if you don't follow them.
Do NOT read beauty magazines, they will only make you feel ugly.
Get to know your parents, you never know when they'll be gone for good.
Be nice to your siblings; they are the best link to your past and the people most likely to stick with you in the future.
Understand that friends come and go, but for the precious few you should hold on. Work hard to bridge the gaps in geography and lifestyle, because the older you get, the more you need the people you knew when you were young.
Live in New York City once, but leave before it makes you hard; live in Northern California once, but leave before it makes you soft.
Accept certain inalienable truths: prices will rise, politicians will philander, you too will get old; and when you do, you'll fantasize that when you were young prices were reasonable, politicians were noble and children respected their elders.
Respect your elders.
Don't expect anyone else to support you. Maybe you have a trust fund, maybe you have a wealthy spouse; but you never know when either one might run out.
Don't mess too much with your hair, or by the time you're 40, it will look 85.
Be careful whose advice you buy, but, be patient with those who supply it. Advice is a form of nostalgia, dispensing it is a way of fishing the past from the disposal, wiping it off, painting over the ugly parts and recycling it for more than it's worth.
But trust me on the sunscreen...
Tuesday, December 9, 2008
In their book The Secret: What Great Leaders Know—And Do, authors Ken Blanchard and Mark Miller, vice president of training and development for Chick-fil-A, use the acronym SERVE to help readers remember these simple principles for success.
S stands for See the Future.
E stands for Engage and Develop People.
R stands for Reinvent Continuously.
V stands for Value Results and Relationships.
E stands for Embody the Values.
How you handle a situation and what you learn through the process is what determines whether it is good or bad. Remember:
“Tough times don’t last, but Tough people do”
S stands for See the Future. This has to do with the important visionary role that leaders play in an organization. A compelling vision allows people to be proactive and move toward what they want rather than reactively moving away from what they don’t want. A vision builds trust, collaboration, interdependence, motivation, and mutual responsibility for success. Vision helps people make smart choices, because their decisions are being made with the end result in mind.
Consider these questions as you think about Seeing the Future in your organization:
- Where do you want your team to be in five years?
- How many members of your team could tell you what the team is trying to achieve?
E stands for Engage and Develop People. As a leader, once the vision and direction are set, you have to focus on engaging and developing your people so that they can live according to the vision.
People need to be trained in self leadership. While many organizations teach managers how to delegate, there is less emphasis on developing individuals to pick up the ball and run with it. Organizations on the leading edge have learned that developing self leaders is a powerful way to positively impact the bottom line.
For example, one of our clients, Bandag Manufacturing, experienced the value of self leadership after a major equipment breakdown at its California plant. Rather than laying off the affected workforce, the company opted to train them in self leadership. When the plant’s ramp-up time was compared to the company’s other eight plants that had experienced similar breakdowns in the past, the California plant reached pre-breakdown production levels faster than any other. The manufacturer studied other measures, too, and concluded that the plant’s successful rebound was primarily due to the proactive behavior of the workers, who were fully engaged and armed with the skills of self leadership.
Consider these questions as you think about Engaging and Developing People:
- To what extent have you successfully engaged each member of your team?
- How are you encouraging the development of your people?
R stands for Reinvent Continuously. Great leaders are always seeking answers to questions like these:
- How can we do the work better?
- How can we do it with fewer errors?
- How can we do it faster?
- How can we do it for less?
- What systems or processes can we change to enhance performance?
One of the biggest challenges leaders face when they look to re-invent processes to better serve the customer is inertia. Many people assume that an organizational structure is permanent. In many cases, the organizational structure no longer serves the business—the people are simply serving the structure.
It’s good to have a plan; it’s good to have your structure in place. But always be watchful and determine whether it’s serving you, your customers, and your people well. If it’s not, change it.
V stands for Value Results and Relationships. Great leaders—those who lead at a higher level—value both results and relationships. Both are critical for long-term survival. Not either/or, but both/and. For too long, many leaders have felt that they needed to choose. The way to maximize your results as a leader is to have high expectations for both results and relationships. If leaders can take care of their customers and create a motivating environment for their people, profits and financial strength are the applause they get for a job well done. Success is both results and relationships.
Consider these questions as you think about Valuing Results and Relationships:
- How much emphasis do you place on getting results?
- How many of your people would say that you have made a significant investment in their lives?
- What are the ways in which you have expressed appreciation for work well done in the last thirty days?
E stands for Embody the Values. All genuine leadership is built on trust. Embody the Values is all about walking your talk.
Many organizations—including The Ken Blanchard Companies—were negatively impacted by the events of September 11, 2001. In Blanchard’s case, the company lost $1.5 million that month. To have any chance of ending the fiscal year in the black, the company would have to cut about $350,000 a month in expenses.
The leadership team had some tough decisions to make. One of the leaders suggested that the staffing level be cut by at least 10 percent to stem the losses and help get the company back in the black—a typical response in most companies.
As they do before making any major decision, members of the leadership team checked the decision to cut staff against the rank-ordered organizational values of ethical behavior, relationships, success, and learning. Was the decision to let people go at such a difficult time ethical? To many, the answer was no. There was a general feeling that the staff had made the company what it was; putting people out on the street at a time like this was not the right thing to do. Did the decision honor the high value that the organization placed on relationships? No, it did not. But what could be done? The company could not go on bleeding money and be successful.
The leadership team decided to draw on the knowledge and talents of the entire staff. At an all-company meeting, the books were opened to show everyone how much the company was losing, and from where. This open-book policy unleashed a torrent of ideas and commitment. Small task forces were organized to look for ways to increase revenues and cut costs. This participation resulted in departments throughout the company finding all kinds of ways to minimize spending and maximize income.
Things were tough for a while, but over the next two years, the finances gradually turned around—as they will this time also. By 2004, the company produced the highest sales in its history.
The Importance of Good Leadership
Continually doing a good job in each of these areas is a significant task, yet it’s worth it. We believe that servant leadership has never been more applicable to the world of leadership than it is today. Not only are people looking for deeper purpose and meaning as they meet the challenges of today’s changing world, they are also looking for principles that actually work. Servant leadership works. Servant leadership is about getting people to a higher level by leading people at a higher level.
Thursday, November 6, 2008
Wednesday, November 5, 2008
Thursday, October 16, 2008
While the crisis may seem to be wreaking havoc on IT companies in the short term, it offers many opportunities in the long term. Here are seven ways the US credit crunch will help desi IT companies.
It's perhaps the best time for Indian IT companies to go for acquisitions. The falling valuation of IT companies is an opportunity for Indian companies to broaden their portfolios. Experts say that US-based IT companies are increasingly looking at cost-cutting to sustain themselves. Valuations have already dipped by nearly 30 per cent, which has prompted many Indian companies to hunt for acquisitions there.
The recent counter bid by HCL Tech for UK-based Axon, whose acquisition was announced by Infosys last month, highlights the urgency among Indian outsourcers to expand their markets, grab bigger-spending clients and beat the US slowdown. Similarly, according to the latest deal tracker by Grant Thornton, the number of outbound deals was higher than the number of inbound deals. Till August -- just before the financial market meltdown -- more than 10 per cent of the deals in the M&A space were in the IT sector, as against 3.91 per cent last year.
The acquisitions will primarily help Indian IT companies expand their markets. They will help companies acquire skill sets, contracts and relationships with higher billing rates and more complex value-added services.
What can be a better time to reduce dependence on the US market? All big Indian IT companies get more than 50 per cent of their revenues from the US markets. The financial turmoil in the US is making Indian IT companies look beyond their key market and explore new territories.
Indian IT firms such as Infosys and rival Tata Consultancy Services are rapidly expanding in Europe, Asia, the Middle-East and Latin America to cut their dependence on the United States. In fact, the last two quarters of the leading IT companies have also shown a jump in revenues from other geographies.
The No. 1 software exporter, TCS, is making large investments in Latin America and the Asia-Pacific region, including India. Infosys Technologies too plans to cut its dependence on the US to about 40 per cent from the present 60 per cent.
Traditionally, Indian IT biggies earn more than 50 per cent of their revenues from BFSI (banking, financial services and insurance) segment. With the global financial crisis, these companies are now looking at other verticals and expanding their portfolio to drive profits.
The recent battle over Axon between Infosys and HCL also shows the Indian companies' zeal to target European markets increasingly.
Analysts expect that over the next couple of months, Indian IT firms may not sign any big contracts in the BFSI sector, but instead they may bag opportunities in areas like retailing, transport, healthcare and manufacturing.
As the industry diversifies, it is expected to see growth coming from under-penetrated areas and verticals. Traction in the manufacturing, life sciences and retail verticals helped TCS drive growth in Q1. The company's CEO also said that new verticals like retail, manufacturing and life sciences are growing. Satyam generated some $440 million -- 21 per cent of its revenue -- from Europe last year.
Many analysts also believe that the global financial turmoil may in a way boost India's outsourcing industry as the focus shifts towards cost-cutting, making companies shift work to cheaper locations.
According to them, existing contracts could continue and outsourcing could be stepped up but there could also be massive restructuring of offshore deals.
The software industry body Nasscom too believes that the financial crisis will lead to more business coming to India. For example, the tremors of job cuts as a result of HP-EDS integration are also likely to be felt the most in high-cost locations. The US is reported to suffer at least 50 per cent of the total 24,600 job cuts announced.
Recently, Gartner too has said that consolidation among large financial services players -- such as Bank of America's acquisition of Merrill Lynch and Lloyds TSB's takeover of HBOS -- will provide huge integration opportunities for Indian IT software companies.
The domestic IT services market has, in fact, been growing at a faster pace than the total IT industry growth. However, the market is largely ruled by global tech players, with IBM leading the show.
So, the US slowdown may finally make Indian IT companies look in their own backyard and realise the latent market potential.
The overall domestic market, comprising hardware, software and services (IT-BPO), grew 42 per cent in FY2007 and is forecast to reach $23.2 billion in the current fiscal, according to Nasscom. Of this, IT services such as application development and consulting alone account for $7.9 billion, up from $5.5 billion last year.
Interestingly, of the $50 billion IT services revenue of companies operating from India, about $10 billion already comes from the domestic market. In effect, India might be a small market in the global IT services context, but it's the third largest revenue generator for IT companies after the US (60 per cent of the total) and the UK (18 per cent).
As the heat of global slowdown spreads, it's time for IT players to modify their recruitment strategies, keeping them in tune with the changing market conditions and demands.
Recently, Microsoft Corp had said that it was reviewing its hiring plans in light of the tough economic conditions, but denied reports that it had instituted a company-wide hiring freeze.
Also, many IT companies are hiring more trained manpower rather than freshers. Recently, TCS, which recruits about 18,000 employees every year, decided to make significant cuts in its recruitment to tide over the crisis. The company now plans to hire more experienced candidates rather than go in for fresh recruits.
Wipro too has recast its hiring plans. The company has introduced stringent measures for taking in any fresh recruits. It has even set up a Talent Quality Group within its Talent Acquisition division to ensure quality hiring.
At the same time, Wipro has also started campus hiring in the US and UK, if certain media reports are to be believed.
The adage of learning from past mistakes fits here. The crisis at two iconic institutions in the US has underscored the urgent need for greater financial accountability.
So it is time for business intelligence vendors to step in to offer advice on financial regulation. This means more work for tech vendors specialising in the domain. These tech vendors therefore could experience an increase in demand for data management software from financial institutions to monitor their practices.
In fact, a few analysts believe the stringent regulations that will come into the financial sector after the crisis will create massive opportunities for companies specialising in business intelligence.
Monday, October 13, 2008
Our life is like a song -
Sad and Happy, fast and slow
In our life time we meet them
In one go and never get to know
What is right and what is wrong
It is like a rainbow -
There are times when the colors are bright
And we stand bold and other we have to bow, why we are not to know
It is like a ship -
which in all condition has to row
No matter whether fast or slow, we are just to go-go-go
Where? Nobody has and nobody will know.
Wednesday, October 1, 2008
There are many different ways to go about performance testing enterprise applications, some of them more difficult than others. The type of performance testing you will do depends on what type of results you want to achieve. For example, for repeatability, benchmark testing is the best methodology. However, to test the upper limits of the system from the perspective of concurrent user load, capacity planning tests should be used. This article discusses the differences and examines various ways to go about setting up and running these performance tests.
Performance testing a J2EE application can be a daunting and seemingly confusing task if you don't approach it with the proper plan in place. As with any software development process, you must gather requirements, understand the business needs, and lay out a formal schedule well in advance of the actual testing. The requirements for the performance testing should be driven by the needs of the business and should be explained with a set of use cases. These can be based on historical data (say, what the load pattern was on the server for a week) or on approximations based on anticipated usage. Once you have an understanding of what you need to test, you need to look at how you want to test your application.
Early on in the development cycle, benchmark tests should be used to determine if any performance regressions are in the application. Benchmark tests are great for gathering repeatable results in a relatively short period of time. The best way to benchmark is to change one and only one parameter between tests. For example, if you want to see if increasing the JVM memory has any impact on the performance of your application, increment the JVM memory in stages (for example, going from 1024 MB to 1224 MB, then to 1524 MB, and finally to 2024 MB) and stop at each stage to gather the results and environment data, record this information, and then move on to the next test. This way you'll have a clear trail to follow when you are analyzing the results of the tests. In the next section, I discuss what a benchmark test looks like and the best parameters for running these tests.
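The one-parameter-at-a-time discipline described above can be sketched as a small driver that steps the heap size between runs. This is only an illustrative sketch: `benchmark.jar` is a hypothetical placeholder for your own load-test launcher, not a real artifact.

```java
// Sketch of stepping a single parameter (JVM heap size) between benchmark
// runs. Only the heap changes between stages; everything else stays fixed.
// benchmark.jar is a hypothetical placeholder for a real test launcher.
public class HeapSweep {
    static String command(int heapMb) {
        return "java -Xmx" + heapMb + "m -jar benchmark.jar";
    }

    public static void main(String[] args) {
        int[] stagesMb = {1024, 1224, 1524, 2024}; // the stages from the text
        for (int heapMb : stagesMb) {
            // At each stage: run the benchmark, record the results and the
            // environment data, then move on to the next stage.
            System.out.println(command(heapMb));
        }
    }
}
```

Because only one variable changes per stage, any difference in the recorded results can be attributed to the heap size alone.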
Later on in the development cycle, after the bugs have been worked out of the application and it has reached a stable point, you can run more complex types of tests to determine how the system will perform under different load patterns. These types of tests are called capacity planning, soak tests, and peak-rest tests, and are designed to test "real-world"-type scenarios by testing the reliability, robustness, and scalability of the application. The descriptions I use below should be taken in the abstract sense because every application's usage pattern will be different. For example, capacity-planning tests are generally used with slow ramp-ups (defined below), but if your application sees quick bursts of traffic during a period of the day, then certainly modify your test to reflect this. Keep in mind, though, that as you change variables in the test (such as the period of ramp-up that I talk about here or the "think-time" of the users) the outcome of the test will vary. It is always a good idea to run a series of baseline tests first to establish a known, controlled environment to compare your changes with later.
The key to benchmark testing is to have consistently reproducible results. Results that are reproducible allow you to do two things: reduce the number of times you have to rerun those tests; and gain confidence in the product you are testing and the numbers you produce. The performance-testing tool you use can have a great impact on your test results. Assuming two of the metrics you are benchmarking are the response time of the server and the throughput of the server, these are affected by how much load is put onto the server. The amount of load that is put onto the server can come from two different areas: the number of connections (or virtual users) that are hitting the server simultaneously; and the amount of think-time each virtual user has between requests to the server. Obviously, the more users hitting the server, the more load will be generated. Also, the shorter the think-time between requests from each user, the greater the load will be on the server. Combine those two attributes in various ways to come up with different levels of server load. Keep in mind that as you put more load on the server, the throughput will climb, to a point.
Figure 1. The throughput of the system in pages per second as load increases over time
Note that the throughput increases at a constant rate and then at some point levels off.
At some point, the execute queue starts growing because all the threads on the server will be in use. The incoming requests, instead of being processed immediately, will be put into a queue and processed when threads become available.
Figure 2. The execute queue length of the system as load increases over time
Note that the queue length is zero for a period of time, but then starts to grow at a constant rate. This is because there is a steady increase in load on the system, and although initially the system had enough free threads to cope with the additional load, eventually it became overwhelmed and had to start queuing them up.
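The queue behavior described above can be illustrated with a toy discrete-time simulation (this is a sketch with made-up numbers, not a model of any real server's scheduler): while the thread pool can drain more requests per tick than arrive, the queue stays at zero; once arrivals exceed capacity, it grows without bound.

```java
// Toy discrete-time sketch of an execute queue under steadily growing load.
// Each tick, (growth * t) new requests arrive and the thread pool can drain
// at most `threads` requests. All numbers are illustrative.
public class ExecuteQueueSim {
    static int[] queueLengths(int ticks, int threads, int growth) {
        int[] q = new int[ticks];
        int queued = 0;
        for (int t = 0; t < ticks; t++) {
            queued += growth * t;                   // load increases over time
            queued = Math.max(0, queued - threads); // free threads absorb load
            q[t] = queued;
        }
        return q;
    }

    public static void main(String[] args) {
        int[] q = queueLengths(10, 5, 1);
        for (int t = 0; t < q.length; t++) {
            System.out.println("tick " + t + ": queue length = " + q[t]);
        }
    }
}
```

With 5 threads and arrivals growing by one request per tick, the queue stays at zero through tick 5 and then grows every tick, mirroring the shape of the graph described above.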
When the system reaches the point of saturation, the throughput of the server plateaus, and you have reached the maximum for the system given those conditions. However, as server load continues to grow, the response time of the system also grows even as the throughput plateaus.
Figure 3. The response times of two transactions on the system as load increases over time
Note that at the same time as the execute queue (above) starts to grow, the response time also starts to grow at an increased rate. This is because the requests cannot be served immediately.
To have truly reproducible results, the system should be put under a high load with no variability. To accomplish this, the virtual users hitting the server should have 0 seconds of think-time between requests. This way the server is immediately put under load and will start building an execute queue. If the number of requests (and virtual users) is kept consistent, the results of the benchmarking should be highly accurate and very reproducible.

One question you should raise is, "How do you measure the results?" An average should be taken of the response time and throughput for a given test. The only way to get these numbers accurately, though, is to load all the users at once and then run them for a predetermined amount of time. This is called a "flat" run.
Figure 4. This is what a flat run looks like. All the users are loaded simultaneously.
Figure 5. This is what a ramp-up run looks like. The users are added a few at a time.
The users in a ramp-up run are staggered (adding a few new users every x seconds). The ramp-up run does not allow for accurate and reproducible averages because the load on the system is constantly changing as the users are being added a few at a time. Therefore, the flat run is ideal for getting benchmark numbers.
This is not to discount the value in running ramp-up-style tests. In fact, ramp-up tests are valuable for finding the ballpark in which you think you later want to run flat runs. The beauty of a ramp-up test is that you can see how the measurements change as the load on the system changes. Then you can pick the range you later want to run with flat tests.
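The difference between the two profiles comes down to when each virtual user starts. A minimal sketch (the figures of 10 users every 30 seconds are hypothetical, chosen only for illustration):

```java
// Sketch of the two load profiles discussed above: a flat run starts every
// virtual user at t = 0, while a ramp-up adds `usersPerStep` new users
// every `stepSeconds`. The step values used in main() are hypothetical.
public class LoadProfile {
    // Start time (in seconds) of user i in a ramp-up run.
    static int rampUpStartTime(int userIndex, int usersPerStep, int stepSeconds) {
        return (userIndex / usersPerStep) * stepSeconds;
    }

    // Start time of user i in a flat run: everyone starts immediately.
    static int flatStartTime(int userIndex) {
        return 0;
    }

    public static void main(String[] args) {
        // Hypothetical ramp-up: add 10 users every 30 seconds.
        System.out.println("User 0 starts at t=" + rampUpStartTime(0, 10, 30));
        System.out.println("User 25 starts at t=" + rampUpStartTime(25, 10, 30));
        System.out.println("User 99 starts at t=" + rampUpStartTime(99, 10, 30));
    }
}
```

Because load in the ramp-up profile keeps changing as users are added, averages taken across the whole run mix measurements from different load levels; in the flat profile every measurement is taken at the same load.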
The problem with flat runs is that the system will experience "wave" effects.
Figure 6. The throughput of the system in pages per second as measured during a flat run
Note the appearance of waves over time. The throughput is not smooth but rather resembles a wave pattern.
Figure 7. The CPU utilization of the system over time as measured during a flat run
Note the appearance of waves over a period of time. The CPU utilization is not smooth but rather has very sharp peaks that resemble the throughput graph's waves.
Additionally, the execute queue experiences this unstable load, and therefore you see the queue growing and shrinking as the load on the system increases and decreases over time.
Figure 8. The execute queue of the system over time as measured during a flat run
Note the appearance of waves over time. The execute queue exactly mimics the CPU utilization graph above.
Figure 9. The transaction response time of the system over time as measured during a flat run
Note the appearance of waves over time. The transaction response time lines up with the above graphs, but the effect is diminished over time.
This occurs when all the users are doing approximately the same thing at the same time during the test. This will produce very unreliable and inaccurate results, so something must be done to counteract this. There are two ways to gain accurate measurements from these types of results. If the test is allowed to run for a very long duration (sometimes several hours, depending on how long one user iteration takes) eventually a natural sort of randomness will set in and the throughput of the server will "flatten out." Alternatively, measurements can be taken only between two of the breaks in the waves. The drawback of this method is that the duration you are capturing data from is going to be short.
For capacity-planning-type tests, your goal is to show how far a given application can scale under a specific set of circumstances. Reproducibility is not as important here as in benchmark testing because there will often be a randomness factor in the testing. This is introduced to try to simulate a more customer-like or real-world application with a real user load. Often the specific goal is to find out how many concurrent users the system can support below a certain server response time. As an example, the question you may ask is, "How many servers do I need to support 8,000 concurrent users with a response time of 5 seconds or less?" To answer this question, you'll need more information about the system.
To attempt to determine the capacity of the system, several factors must be taken into consideration. Often the total number of users on the system is thrown around (in the hundreds of thousands), but in reality, this number doesn't mean a whole lot. What you really need to know is how many of those users will be hitting the server concurrently. The next thing you need to know is what the think-time or time between requests for each user will be. This is critical because the lower the think-time, the fewer concurrent users the system will be able to support. For example, a system that has users with a 1-second think-time will probably be able to support only a few hundred concurrently. However, a system with a think-time of 30 seconds will be able to support tens of thousands (given that the hardware and application are the same). In the real world, it is often difficult to determine exactly what the think-time of the users is. It is also important to note that in the real world users won't be clicking at exactly that interval every time they send a request.
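The relationship between think-time and supportable concurrency sketched above follows from Little's Law: concurrent users ≈ throughput × (response time + think-time). The numbers below are hypothetical, chosen only to show how sharply think-time changes the answer for the same server capacity:

```java
// Little's Law applied to capacity sizing: N = X * (R + Z), where
// N = concurrent users, X = throughput, R = response time, Z = think-time.
// All figures below are hypothetical examples, not measurements.
public class ConcurrencyEstimate {
    static double concurrentUsers(double throughputPerSec,
                                  double responseTimeSec,
                                  double thinkTimeSec) {
        return throughputPerSec * (responseTimeSec + thinkTimeSec);
    }

    public static void main(String[] args) {
        // Same hypothetical server: 300 requests/sec, 0.5 s response time.
        double shortThink = concurrentUsers(300, 0.5, 1.0);  // 1 s think-time
        double longThink  = concurrentUsers(300, 0.5, 30.0); // 30 s think-time
        System.out.println("1 s think-time supports ~" + shortThink + " users");
        System.out.println("30 s think-time supports ~" + longThink + " users");
    }
}
```

With these illustrative numbers, a 1-second think-time supports only a few hundred concurrent users, while a 30-second think-time supports several thousand on identical hardware, which is why pinning down think-time matters so much.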
This is where randomization comes into play. If you know your average user has a think-time of 5 seconds give or take 20 percent, then when you design your load test, ensure that there is 5 seconds +/- 20 percent between every click. Additionally, the notion of "pacing" can be used to introduce more randomness into your load scenario. It works like this: After a virtual user has completed one full set of requests, that user pauses for either a set period of time or a small, randomized period of time (say, 2 seconds +/- 25 percent), and then continues on with the next full set of requests. Combining these two methods of randomization into the test run should provide more of a real-world-like scenario.
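The randomized think-time and pacing described above can be sketched as one virtual user's loop. This is a minimal sketch, not any particular load tool's API; the request itself is a stub where a real script would issue an HTTP call, and the iteration length of five clicks is an assumption.

```java
import java.util.Random;

// Sketch of one virtual user's loop with randomized think-time and pacing:
// 5 s +/- 20% between clicks, and a 2 s +/- 25% pause between iterations.
// sendRequest() is a stub; a real script would issue an HTTP request there.
public class VirtualUser {
    private final Random rng = new Random();

    // Returns a randomized delay: base +/- the given variance fraction.
    long randomizedDelayMs(long baseMs, double variance) {
        double factor = 1.0 + (rng.nextDouble() * 2 - 1) * variance; // [1-v, 1+v)
        return Math.round(baseMs * factor);
    }

    void runIteration() throws InterruptedException {
        for (int click = 0; click < 5; click++) {    // hypothetical 5-click script
            sendRequest();                            // stubbed request
            Thread.sleep(randomizedDelayMs(5000, 0.20)); // think-time: 5 s +/- 20%
        }
        Thread.sleep(randomizedDelayMs(2000, 0.25));      // pacing: 2 s +/- 25%
    }

    void sendRequest() { /* placeholder for a real HTTP request */ }

    public static void main(String[] args) {
        VirtualUser u = new VirtualUser();
        System.out.println("sample think-time: "
                + u.randomizedDelayMs(5000, 0.20) + " ms");
    }
}
```

Spreading every user's clicks across a randomized window like this breaks up the lockstep behavior that produces the "wave" effects discussed earlier.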
Now comes the part where you actually run your capacity planning test. The next question is, "How do I load the users to simulate the load?" The best way to do this is to try to emulate how users hit the server during peak hours. Does that user load happen gradually over a period of time? If so, a ramp-up-style load should be used, where x number of users are added every y seconds. Or, do all the users hit the system in a very short period of time all at once? If that is the case, a flat run should be used, where all the users are simultaneously loaded onto the server. These different styles will produce different results that are not comparable. For instance, if a ramp-up run is done and you find out that the system can support 5,000 users with a response time of 4 seconds or less, and then you follow that test with a flat run with 5,000 users, you'll probably find that the average response time of the system with 5,000 users is higher than 4 seconds. This is an inherent inaccuracy in ramp-up runs that prevents them from pinpointing the exact number of concurrent users a system can support. For a portal application, for example, this inaccuracy is amplified as the size of the portal grows and as the size of the cluster is increased.
This is not to say that ramp-up tests should not be used. Ramp-up runs are great when the load on the system increases slowly over a long period of time, because the system can continually adjust as the load grows. If a fast ramp-up is used, the system will lag and artificially report a lower response time than would be seen if the same number of users were loaded in a flat run. So what is the best way to determine capacity? Take the best of both load types and run a series of tests: use a ramp-up run first to determine the range of users the system can support, and then run a series of flat runs at various concurrent-user loads within that range to pin down the system's capacity more accurately.
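That two-phase approach could be sketched as follows, where `run_flat_test` is a hypothetical hook into your load tool that returns the average response time for a flat run at a given user count:

```python
def find_capacity(run_flat_test, low, high, step, max_resp_s=4.0):
    """Step through flat runs within [low, high] (a range found by a ramp-up
    run) and return the highest user count meeting the response-time target."""
    capacity = None
    for users in range(low, high + 1, step):
        avg_resp_s = run_flat_test(users)  # hypothetical load-tool hook
        if avg_resp_s <= max_resp_s:
            capacity = users
        else:
            break
    return capacity
```

In practice each of these flat runs is a full test execution, so the step size is a trade-off between precision and testing time.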
A soak test is a straightforward type of performance test: a long-duration run with a static number of concurrent users that exercises the overall robustness of the system. These tests will expose any performance degradation over time caused by memory leaks, increased garbage collection (GC), or other problems in the system. The longer the test, the more confidence you will have in the system. It is a good idea to run this test twice: once with a fairly moderate user load (below capacity, so that there is no execute queue) and once with a high user load (so that there is a positive execute queue).
These tests should be run for several days to really get a good idea of the long-term health of the application. Make sure that the application being tested is as close to real world as possible with a realistic user scenario (how the virtual users navigate through the application) testing all the features of the application. Ensure that all the necessary monitoring tools are running so problems will be accurately detected and tracked down later.
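One simple way to quantify degradation over a soak run is to fit a trend line to periodic response-time samples taken by your monitoring tools; a clearly positive slope warrants investigation. A minimal sketch:

```python
def degradation_slope(response_times):
    """Least-squares slope of response time versus sample index; a clearly
    positive slope over a long soak run suggests degradation (e.g. leaks
    or rising GC overhead) rather than normal run-to-run noise."""
    n = len(response_times)
    mean_x = (n - 1) / 2
    mean_y = sum(response_times) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(response_times))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den
```

A flat series yields a slope near zero, while steadily growing response times produce a positive slope; pairing this with heap and GC graphs helps separate leaks from other causes.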
Peak-rest tests are a hybrid of the capacity-planning ramp-up-style tests and soak tests. The goal here is to determine how well the system recovers from a high load (such as one during peak hours of the system), goes back to near idle, and then goes back up to peak load and back down again.
The best way to implement this test is to run a series of quick ramp-ups, each followed by a plateau (its length determined by the business requirements) and then a drop-off of the load. The system should then pause, followed by another quick ramp-up; then the process repeats. A couple of things can be determined from this: Does the system recover on the second peak, and each subsequent peak, to the same level (or greater) as the first peak? And does the system show any signs of memory or GC degradation over the course of the test? The longer this test is run (repeating the peak/idle cycle over and over), the better idea you'll have of the system's long-term health.
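A peak-rest schedule can be described as a list of (time, target-users) waypoints that a load tool then interpolates between; a minimal sketch, with all durations hypothetical:

```python
def peak_rest_profile(peak_users, ramp_s, plateau_s, idle_s, cycles):
    """Build (time_offset_s, target_users) waypoints for a peak/idle cycle
    test; a load tool would interpolate user counts between waypoints."""
    profile, t = [], 0.0
    for _ in range(cycles):
        profile.append((t, 0)); t += ramp_s              # ramp up from idle
        profile.append((t, peak_users)); t += plateau_s  # hold the plateau
        profile.append((t, peak_users)); t += ramp_s     # ramp back down
        profile.append((t, 0)); t += idle_s              # rest near idle
    return profile

# Two peak/idle cycles: 60 s ramps, 5-minute plateaus, 2-minute rests.
print(peak_rest_profile(1000, 60, 300, 120, 2))
```

Comparing response times and heap usage at each successive peak then answers the recovery questions above.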
This article has described several approaches to performance testing. Depending on the business requirements, development cycle, and lifecycle of the application, some tests will be better suited to a given organization than others. In all cases, though, you should ask some fundamental questions before going down one path or another; the answers will determine how best to test the application.
These questions are:
- How repeatable do the results need to be?
- How many times do you want to run and rerun these tests?
- What stage of the development cycle are you in?
- What are your business requirements?
- What are your user requirements?
- How long do you expect the live production system to stay up between maintenance downtimes?
- What is the expected user load during an average business day?
By answering these questions and then seeing how the answers fit into the above performance test types, you should be able to come up with a solid plan for testing the overall performance of your application.