
Performance & Load Testing

URL: http://loadstorm.com

Too Much Cache is Like a Krispy Kreme Burger

Thu, 09/02/2010 - 10:49
[Image: too much cache is like a Krispy Kreme burger]

Have you ever had a Krispy Kreme Burger (http://www.foodnetwork.com/recipes/paulas-home-cooking/the-ladys-brunch-burger-recipe/index.html)? It's definitely over the top. Too much of a good thing.

If you have read many of my posts about performance tuning, you know that I am a big proponent of caching. "Cache whenever you can" and "RAM trumps disk" are two of my mantras of web performance tuning. So, I wondered: can you over-cache?

Apparently you can, in some situations. As I think about how operating systems use memory, it is obviously a resource to be used carefully because it is not an inexhaustible supply. Back in the day (1985) we used to watch memory consumption like a hawk on our VAX 11/780. Even more so on those PDP machines. But as RAM got bigger and operating systems got more sophisticated at handling paging, I stopped worrying about memory much.

I found a couple of articles about Windows using too much cache (http://blogs.msdn.com/b/ntdebugging/archive/2007/11/27/too-much-cache.aspx) and The Memory Shell Game (http://blogs.msdn.com/b/ntdebugging/archive/2007/10/10/the-memory-shell-game.aspx). While the general principles apply to web applications using too much of a system resource, neither really addresses the specifics of how caching can be overdone.

The first article describes how the cache manager will throttle write I/O into the cache in order to prevent applications from overtaking physical RAM. This is called excessive cached write I/O.

Excessive cached read I/O, by contrast, occurs when an application continuously reads many files through the cache manager, causing more physical pages to be consumed. When the cache manager grows too big, other processes get paged out to disk. This can have a catastrophic effect on performance.

Both of these cache problems are more prevalent on 64-bit systems, so I would conclude that web developers will be seeing more of this issue in the future.

Now that I work in the web performance field, memory has again become important to me. As load increases, memory consumption is a key performance indicator to measure.

Even so, I have rarely seen caching have a negative performance impact on web applications. These articles do show a downside to caching that can affect the overall performance of a server, and that is a lesson worth learning. But I just don't see a way to make the jump from the data in these articles to over-caching in a web system.

Caching is good. Caching is a performance engineer's friend.
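Still, if you want some insurance against an application-level cache eating all of your RAM, the simplest guard is to bound it. Here is a minimal, hypothetical Python sketch of a cache capped by entry count with least-recently-used eviction; the class name and limit are illustrative, not from either of the articles above:

```python
from collections import OrderedDict

class BoundedCache:
    """LRU cache that evicts the least recently used entry once
    max_entries is reached, so the cache cannot grow without limit."""

    def __init__(self, max_entries=1024):
        self.max_entries = max_entries
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as recently used
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.max_entries:
            self._data.popitem(last=False)  # evict the oldest entry

cache = BoundedCache(max_entries=2)
cache.put("a", 1)
cache.put("b", 2)
cache.put("c", 3)      # evicts "a"
print(cache.get("a"))  # None
```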
Categories: Load & Perf Testing

Stressing Out Your Access Management System

Fri, 08/27/2010 - 12:36
Identity Access Management (IAM) is a very complex aspect of IT. Finding someone who really understands it is a challenge. I have found one of those guys.

[Image: Corbin Links]

Corbin Links (http://www.linkedin.com/in/corbinlinks) is an expert in the implementation of various forms of access management systems, commonly called Single Sign-On by those of us who need to simplify. He is the author of a trilogy of books entitled IAM Success Tips Volumes 1-3 (http://www.linksbusinessgroup.com/store).

Corbin has gotten involved with load testing because his projects for large enterprises often require his team to ensure the performance of IAM. We at LoadStorm are fortunate to have Corbin as a user of our load testing tool, and he has been a delightful customer. Recently, he asked if I would come on his podcast to share some thoughts about performance testing relative to IAM.

Below is a summary of our interview, which Corbin calls Stressing Out Your Access Management System (http://www.linksbusinessgroup.com/stressing-out-your-access-management-system).

Stress Testing for IAM

Many times IAM implementations sidestep stress testing because of inadequate resources or skills. Any enterprise will have 3-4 IAM environments at a minimum, which necessitates much more testing. Usually there will be both internal and public-facing environments. Typical provisioning of IAM will take dozens of tests and tuning cycles.

[Image: IAM Success Tips book]

Two of the biggest challenges to stress testing an access management system are budget and time. It is common to skip load testing because it comes at the end of a project that is already over budget anyway. If the funding issue doesn't get you, then having someone available to actually build and run the tests might.

Corbin has been on projects that took two months just to install a big tool like LoadRunner and get the scripts created. Cost and staffing for LoadRunner were enormous drains on the project. He told me that IAM involves big, heavy enterprise infrastructure that can run well into the millions of dollars. So the erroneous perception of the CTO was that the load testing tool should be extremely expensive too. Corbin is changing that paradigm and saving his customers hundreds of thousands of dollars.

IAM Demands MORE Stress Testing

Corbin believes stress testing must happen at a minimum of two points in the cycle. It should be moved up into the development and debug stage, and tests also need to be run during the certification stage.

IAM implementers must be ready with a long-term tool and staffing strategy so they don't have to go hit up purchasing for more test runs later. He emphasizes that SaaS models are very beneficial because of the speed and simplicity of the testing. He also advocates purchasing subscriptions with unlimited test runs as far superior to paying by the test.

Subsequent releases or configuration changes can hurt your performance, so enterprise systems need more regular testing. In access management especially, there are thousands of parameters that define what happens in a transaction through authentication and access rules. If you change one of those, the entire system has changed and must be tested again.

Since the application rules are accessed dynamically, and they determine the path from one point of the application to another, it is important to be able to re-test constantly with confidence. He also emphasizes that HTTP versus HTTPS is a big deal in IAM implementation. You must test both of them thoroughly and independently. There is a lot of overhead incurred with secured transactions, such as credentialing, and they must be performance tested with care.

Corbin recommends a fixed-price stress testing tool to eliminate the hassle and worry about the cost of extra test runs, which are almost guaranteed to occur. In his experience, access management systems may need to run stress tests 5-10 times a day. No one has time to continually set it all up again. Worse, in an enterprise setting there are so many meetings that have to be called to get testing done again.

The Importance of Good Performance in IAM

Response times are crucial in access management systems. Both external B2C pools and large internal user pools must be considered in stress test coverage. Internal users will be more patient on a corporate portal because they must use the systems to do their job. In a B2C model, however, the external users will not wait.

Implementers of IAM need to ensure that users will not have to wait, because slower performance results in lost customers. See the blog post entitled Web Performance Tuning = 10% Profit (http://loadstorm.com/2009/web-performance-tuning) for the facts about how 7-12% of revenue is lost by making people wait.

This is a great cost justification for load testing. In most of Corbin's access management implementations, stress testing is viewed as a cost-center issue. If you have these statistics about revenue and performance, they will help you get further up the food chain to fund the testing and guarantee IAM will perform as well as it can.

He points out that there is incentive on the internal portal too. Many times companies don't understand the value of IAM. If people start having to wait for it, they will circumvent the access management system. Offline systems like spreadsheets and ad hoc document storage will become popular. If the system doesn't work quickly, users will get creative about not using it.

Unique Requirements of IAM

Enterprise-grade access management tools sometimes use front-end login pages that are comprised of an insane number of objects: many tiny pieces of JavaScript and images. Corbin has seen one IAM tool that had over 100 nested CSS stylesheets for a login page.

Authentication checks on all of those objects create security overhead on top of the normal requests of a web page. This extra processing poses challenges that most load testers haven't encountered in other web applications. Corbin emphasizes the need to optimize these processes because they can kill performance, which will in turn kill adoption of the access management system.

Extensive redirects are common in access management systems. Front-end processes capture data about a user, and then the system must make a decision about the user before allowing the request to be serviced by the web application. Corbin has used many load testing tools that can't handle multiple redirects. It is very important to test for that in IAM.

Access management implementers are usually over-tasked, and they aren't load testing experts. So what should they focus on for quick certification? Hammer on the front door. Focus on the login process. It's important to stress test with a file of many unique users and create a scenario that goes through a successful authentication. You need a heavy volume of concurrent users logging in, so find the peak number of users in your situation, such as at 8:00 a.m., add about 25-50% more users, and then run a load test at that higher peak volume so you can have faith in the performance of your system. (A minimal sketch of this approach appears at the end of this post.)

Because access management utilizes methods such as sticky sessions and secure session cookies, you must test with many different users in your scenarios.

Gracious Host

Corbin asked me about the upcoming enhancements to LoadStorm, so I shared a few of those in the podcast: a proxy recorder for scenario building, extra reporting data, scaling up to 500,000 users, server-side monitoring during tests, and controlling the geographic sources of traffic.

I thoroughly enjoyed the podcast interview, and I have had a great time working with Corbin on projects for his enterprise clients. I look forward to our next chance to work together.

Please check out the Identity Access Management Success Podcast with Corbin Links (http://www.linksbusinessgroup.com/itunes) on iTunes.

Important resources mentioned on the show:

 * IAM Success Audio, Collectors Edition: http://bit.ly/iamcollectors
 * LoadStorm: http://www.loadstorm.com
 * Corbin Links on Twitter: http://www.twitter.com/corbinlinks
 * Corbin Links' new web contact page: http://www.corbinlinks.com
 * Corbin Links on LinkedIn: http://www.linkedin.com/in/corbinlinks
 * Comprehensive article on how to stress test an Access Management System: http://bit.ly/namstorming
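To make the "hammer on the front door" advice concrete, here is a minimal Python sketch of a concurrent login scenario using threads and the third-party requests library. The URL, form field names, and generated credentials are hypothetical stand-ins; a real run would read unique users from a file and drive far more than 100 of them:

```python
import threading
import requests  # third-party: pip install requests

LOGIN_URL = "https://example.com/login"  # hypothetical endpoint

def login(username, password, results):
    """POST one set of credentials and record status and response time."""
    resp = requests.post(LOGIN_URL,
                         data={"username": username, "password": password},
                         timeout=30)
    results.append((username, resp.status_code, resp.elapsed.total_seconds()))

# One unique credential pair per virtual user, as Corbin recommends.
users = [(f"user{i}", f"secret{i}") for i in range(100)]

results, threads = [], []
for name, pwd in users:
    t = threading.Thread(target=login, args=(name, pwd, results))
    t.start()
    threads.append(t)
for t in threads:
    t.join()

errors = [r for r in results if r[1] >= 400]
print(f"{len(results)} logins attempted, {len(errors)} errors")
```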
Categories: Load & Perf Testing

Web Performance Tuning Never Ends

Mon, 08/23/2010 - 09:43
I read a good article this morning that presents a case study of scaling a web site. 6 Ways to Kill Your Servers - Learning How to Scale the Hard Way (http://highscalability.com/blog/2010/8/23/6-ways-to-kill-your-servers-learning-how-to-scale-the-hard-w.html#comments) by Steffen Konerow makes some excellent points about how to avoid system crashes in your web application. Surprisingly, there is no direct mention of load testing the site re-launch before going live. Load testing is implied throughout, yet never specifically addressed.

While not everyone will have these same problems, and each web app is distinct in its architecture, the tuning process Steffen describes applies to almost any web performance tuning. The conclusion is classic and can never be repeated too frequently:

Scaling a website is a never ending process.

I have learned the hard way that assuming your site performance is acceptable now, and will remain acceptable, is a bad idea. You should continually be testing and tuning. How many times have you heard me say that? Load test, tune, load test, tune, rinse and repeat.

Most of the components of your web application will remain fairly stable. However, over the weeks and months of production deployment there will invariably be many small changes. A few examples include:

 * New releases of your application code
 * Periodic upgrades to the web server software
 * Configuration modifications in the network
 * Database query additions from new customer reports
 * Bug fixes and system patches in the load balancing software
 * Environmental changes in the data center

Web performance tuning is an ongoing process because web applications have thousands of moving parts. Nothing should be taken for granted.

Web Performance Tuning Lessons from Steffen

I love how he puts the lessons in terms of "Do this and your servers will DIE!" He does a good job explaining how each architecture decision resulted in a performance problem. I would probably like a little more detail, but I'm not complaining. The article is well written and a good length for a blog post. Here are his lessons in order.

#1 - Put Smarty compile and template caches on an active-active DRBD cluster with high load and your servers will DIE!

The first lesson is about how mirroring and high-availability cluster software can corrupt your file system. I can use this as a reason to thoroughly load and performance test your new system before putting it into production. There is a high probability this clustering side effect would have been identified during a high-volume test.

#2 - Put an out-of-the-box webserver configuration on your machines, do not optimize it at all and your servers will DIE!

This lesson applies to IIS, Apache, or any type of web server software you use. Whenever a piece of software has dozens of configuration settings, you will almost never get high performance without tuning those settings. I have rarely encountered a company that didn't feel their situation was unique across the industry. So why would you think that your system won't require a unique setup? It will.

#3 - Even a powerful database server has its limits and when you reach them - your servers will DIE!

As the manager of a commercially successful web application (a SaaS learning system), several years ago I was faced with a decision about scale. Our customers were overwhelming our servers. My naive answer was to buy bigger machines. That wasn't really the problem, and my solution was the easy path. Bad management and a bad technical decision. Our database was the bottleneck. Or, more accurately, our expensive SQL queries were killing our performance. Examining slow query logs and rewriting poor SQL statements is a better answer than just upgrading hardware.

#4 - Stop planning in advance and your servers are likely to DIE.

Be proactive. Web performance is a moving target. Your success should be driving your metrics ever higher. More customers, more content, and more creativity on the part of marketing will make performance tuning an iterative necessity. Planning ahead should involve setting clear performance criteria and load testing against those performance goals (http://loadstorm.com/2010/load-testing-success-criteria). Load tests should be run not just before re-launching your site, but against every release of your code. I'm not kidding - it is that important. You may not believe me now, but you will after a seemingly simple programming mod craters your performance and 25% of a normal day's load grinds your system to a halt. Been there, done that, and you should learn from my mistakes.

#5 - Forget about caching and you will either waste a lot of money on hardware or your servers will die!

Caching is your best ally in the world of web performance tuning. You probably don't overlook it, and you probably turn it on in the web server. My recommendation is to look for as many ways to utilize caching as possible. RAM trumps physical drive access every time. Good caching will reduce your hardware, software, and hosting footprint, often by over 60%. I've seen it make an immediate 500% improvement in throughput. Steffen used Memcached, and it is very effective. Other caching tools can provide enormous scalability gains, such as aiCache, our performance improvement partner (http://loadstorm.com/performance-improvement). When in doubt, cache it.

#6 - Put a few hundred thousand small files in one folder, run out of inodes and your server will die!

Static files tend to be heavily used in today's web sites. Tons of images, plenty of rich multimedia, lots of JavaScript for interactivity, and ever-increasing CSS usage will add up quickly on your site. Don't be fooled into assuming static files won't hurt your performance - they can. I have seen customers' pages with over 200 HTTP requests, and when you count up all the overhead of each request/response cycle on your system, the volume can be significant. Steffen mentions splitting directories, using partitions, and deploying tools to reduce I/O load. I also recommend trying to reduce the number of requests by combining scripts into one JS file, combining CSS files into a single stylesheet, and using sprites to put images together. About 50% of all visitors to your site will come in with an empty cache, so make the page lighter on requests. A minimal sketch of the file-combining idea appears below.
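For illustration, here is a minimal Python sketch that concatenates several scripts into one bundle so a page needs a single HTTP request instead of one per file. The file names are hypothetical; in practice you would also minify and version the bundle:

```python
from pathlib import Path

def combine_assets(sources, bundle_path):
    """Concatenate several static files into one bundle file,
    cutting the number of HTTP requests a page must make."""
    with open(bundle_path, "w") as bundle:
        for src in sources:
            bundle.write(f"/* --- {src} --- */\n")  # keep provenance visible
            bundle.write(Path(src).read_text())
            bundle.write("\n")

# Hypothetical file names; point these at your real scripts.
combine_assets(["menu.js", "forms.js", "analytics.js"], "site.bundle.js")
```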
Conclusion

I agree completely with Steffen's statements: "Never ever start thinking 'that's it, we're done' and lean back. It will kill your servers and perhaps even your business. It's a constant process of planning and learning."

Web performance tuning never ends. I strongly advocate adopting a process of load test, tune, load test, tune, rinse and repeat, because you want to sleep well at night. You don't want to be called into the boss's office to explain why the site crashed. Bad performance doesn't have to be a surprise if you stay proactive about it.

The most compelling reason I can share for why you should be continually testing and tuning is that web performance tuning (http://loadstorm.com/2009/web-performance-tuning) translates to revenue and profit. It's proven empirically (just follow that link to see the data).
Categories: Load & Perf Testing

IIS Connections Affect Web Performance

Fri, 08/20/2010 - 15:00
[Image: IIS connections and web performance]

I have received questions from customers about load testing reports that show their server doing unexpected things. For example, a customer sent some server monitoring data that showed a pattern of large peaks of CPU utilization followed by a precipitous drop to a low level of usage. He wanted to know why LoadStorm wasn't applying a consistent load to his system, as evidenced by the CPU spikes.

First, it is important to think about the big picture of what is going on. He had not told me whether this CPU was one of many in a round-robin configuration, or whether it was even on a web server at all. He was obviously ignoring other measurements of traffic on the server, most notably the web server logs. Wouldn't the timestamps and volume of requests in the IIS server logs (he is on the Microsoft stack) be a better indicator of LoadStorm's load?

There are thousands, if not millions, of configuration permutations possible with any web application. One configuration setting that could cause a roller-coaster effect in CPU utilization is the limit on connections allowed from clients. While many considerations could influence your decision about the maximum number of simultaneous connections, generally speaking you want to give the web server as many as your CPU and RAM will support effectively. This is especially true if the machine is dedicated to being a web server only.

Apache has different ways to control the maximum number of simultaneous client connections, such as the MaxClients directive for the whole server, and other ways to fine-tune processes and threads. IIS provides a way to set the max connections for each specific web site.

On Microsoft's support site, there is a help topic that provides good information on how to Set Client Connection Limits (http://support.microsoft.com/kb/324093):

To set the maximum number of client connections:
 1. Log on to the Web server computer as an administrator.
 2. Click Start, point to Settings, and then click Control Panel.
 3. Double-click Administrative Tools, and then double-click Internet Services Manager.
 4. Right-click the Web site that you want to configure, and then click Properties.
 5. Click the Web Site tab.
 6. If you want to support an unlimited number of connections, click Unlimited.
 7. To set a specific number of connections, click Limited To, and then type the number of connections that you want the server to support (the default setting is 1000).
 8. Click OK.

Steven Warren suggests in his blog that you tweak the registry to improve IIS's performance (http://blogs.techrepublic.com.com/window-on-windows/?p=116) relative to connections:

"The registry settings for IIS are stored in HKEY_LOCAL_MACHINE | SYSTEM | CURRENTCONTROLSET | SERVICES | INETINFO | PARAMETERS. You can work with ListenBacklog - this registry setting specifies how many active connections IIS has in its queue. The default value is 15. It can range all the way up to 250."

I recommend doing some research on your own system to see how many client connections you allow. Then figure out (experimentation may be necessary) how many connections you can effectively service on your machine. You don't want to overload the machine, but you don't want to waste horsepower while restricting users.

This is just one more performance engineering aspect of tuning your web application. I suggest you invest a few hours to make sure you have it optimized for your environment. Test, tune, test, tune, test, tune... rinse and repeat.
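If you want to check the ListenBacklog value Warren describes without opening regedit, here is a minimal read-only sketch using Python's standard winreg module (Windows only). It assumes the registry path quoted above:

```python
import winreg  # Windows-only standard library module

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\InetInfo\Parameters"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _type = winreg.QueryValueEx(key, "ListenBacklog")
        print(f"ListenBacklog is currently {value}")
except FileNotFoundError:
    # The value is absent until someone sets it; IIS then uses its default.
    print("ListenBacklog not set; IIS default (15) applies")
```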
Categories: Load & Perf Testing

Load Testing Quote for August 19, 2010

Thu, 08/19/2010 - 12:43
[Image: load testing question on SearchSoftwareQuality]

On SearchSoftwareQuality, there is a section called Ask the Software Expert where a guru answers someone's question. Scott Barber has a post on Understanding performance, load, and stress testing (http://searchsoftwarequality.techtarget.com/expert/KnowledgebaseAnswer/0,289625,sid92_gci1259518_mem1,00.html) that I find amusing.

The content is nothing that most web developers would find particularly noteworthy, except this pithy conclusion about the underlying myth of performance testing:

"Experience tells me that these questions stem from the myth that performance testing is just functional testing, only with more users. This simply is not the case. One will not meet with success while performance testing, except by accident -- which does happen quite often -- by treating it like functional testing. Not only are the skills, purpose, planning, scheduling and tools different, but the entire thought process is different. To tell the truth, you are probably more likely to succeed by simply asking the members of the team 'what would you like to know about performance?' and then figuring out how to collect that data than by treating performance testing like a functional testing project on steroids."

Now I may not be as smart as Scott Barber, but I do know a cool storm picture when I see one. Storm your site with virtual users to find out its performance under load. Metaphorically speaking, it should be like this picture:

[Image: lightning crashes at a place in the Netherlands]
Categories: Load & Perf Testing

Load Testing Think Time

Tue, 08/17/2010 - 07:25
Customers frequently ask me why LoadStorm has a minimum pause time. They want the load test to click from step to step in less than a second. My reply is that user actions like that are unrealistic.

[Image: load testing think time]

People have to think about what they see, read it, make a decision, then click. Each person will respond to the delivered page in a slightly different time frame. Think time is variable because real users have a large deviation in their ability to process information. Some are visual learners, some are analytical, some are contemplative, and some are driving personalities.

So when you are load testing, it is beneficial to have a randomized pause between the steps in your scenario. This realistically represents the length of time from when a person receives a response (page) from your server to the time that person requests a new page.

I used to believe that interpolation of performance metrics was acceptable. I have since found that I was wrong, especially when it comes to think time.

Please don't fall into the trap of thinking that a web application will respond linearly with additional load regardless of the think time. For example, just because your app has a two-second average response time for 1,000 concurrent users with a 15-second think time does NOT mean that you will get the same response for 3,000 users with a 45-second think time. The algebra may suggest the equation is valid, but it rarely holds up in real testing results. There are simply too many variables in the performance algorithm to make the jump based on pauses between steps.

Specifically, it is common for the virtual users in your load test with longer think times to consume more resources than expected. The way you have coded your application will greatly affect the resources, and thus the requests per page, attributable to each virtual user. Your pages may keep requesting data from the server while the user is simply reading; not all requests require a user action. Also, if a user aggressively clicks as soon as they spot an interesting link on the display, chances are that several resources won't even have been requested from the server by that time, resulting in fewer requests per page for aggressive clickers.

I recommend letting the tool generate a random think time for your scenarios. Most of the books I've read and performance engineers I've talked to generally put a range on the pauses between 10 and 120 seconds. My suggestion is to review some of your web server logs to get a rough idea of how much time people take on each of your important pages, then put about 20-30 seconds on each side of that. For instance, if I find that people spend an average of 40 seconds on my site's pages, I will set a random think time between 20 seconds and 60 seconds, as sketched below.

Not all people are alike, and not all web applications are alike. Please take these suggestions about think time and apply them to your situation. If you have any additional thoughts or recommendations about pausing between steps in a test scenario, please post them here in the comments. We always welcome your contribution.
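Here is a minimal sketch of that recommendation in Python: a randomized pause centered on the 40-second average from the example above, applied between hypothetical scenario steps:

```python
import random
import time

def think(avg_seconds=40, spread=20):
    """Pause for a random interval centered on the average page dwell
    time taken from your server logs (e.g. 40s gives 20-60s pauses)."""
    pause = random.uniform(avg_seconds - spread, avg_seconds + spread)
    time.sleep(pause)

for step in ["home", "search", "product", "checkout"]:
    # ... request the page for this step here ...
    think()  # then wait like a real visitor before the next click
```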
Categories: Load & Perf Testing

Google Android Passes iPhone

Mon, 08/16/2010 - 15:33
Gartner reports a 13.8% increase in mobile device sales (http://www.marketingcharts.com/uncategorized/gartner-worldwide-mobile-device-sales-grew-138-in-q2-13890/) in the second quarter of 2010 over Q2 2009.

Load testers should take note that global access to web sites from smartphones is becoming more common. In fact, smartphone sales are up 50.5% from the same period in 2009.

Accessing the web from a mobile device gets cheaper and easier all the time. As web developers, we should all be enabling our apps and sites for mobile access.

I was surprised to see that Apple has only a 2.7% share of the overall mobile device market.

Android also overtook Apple's iPhone OS to become the third-most-popular mobile OS in the world.

Any thoughts on how load testing should account for all this mobile device access?
Categories: Load & Perf Testing

Load Testing Success Criteria

Wed, 08/11/2010 - 05:48
How Do We Know If We Are Successful?

One of the over-arching objectives of load testing is to measure system capacity. When you are load testing, what are the goals? Generally, everyone wants to know how much load (user traffic) the system can handle without significant performance degradation. What is significant degradation?

Before you begin load testing, I urge you to determine what you need from the testing. Is there a service level agreement that your company must fulfill? Are you trying to make a decision about releasing a new web application into production? Does the CTO just want to make the CEO feel good that we've done a good job with a new version? Are you creating a performance benchmark? When will we know we are finished with load testing?

Think about the goal of the load test. What are the success criteria, and how will we know when the objective is met?

The best answer for all IT questions applies here: it depends. Ok, that's sort of a cop-out. Another good answer used by most IT consultants works for load testing success criteria too: whatever will make the customer happy.

Yes, we need to keep customer satisfaction in mind. Yes, we need to understand who our stakeholders are. Yes, we need to figure out what our boss calls success. Yes, we need to talk to the alpha geek in the basement - he knows everything.

Regardless of how you get there, it is imperative that your team decides upfront what constitutes acceptable performance for your web application.

That said, you should define "good performance" in terms of something measurable. Specifically, we recommend picking metrics that are meaningful to your stakeholders. For instance: 5,000 concurrent users with sub-second average response time and less than a 0.3% error rate.

It Must Be Measurable

So the load goals should be clearly quantified, such as 25,000 concurrent users or 400 requests per second. Some people like to measure the number of bytes flowing in and out of the network: throughput or bandwidth. Everyone wants to understand how the user perceives the system, so a metric like response time is paramount. Measuring the errors returned from the server is another key performance indicator. If the average response time is 300 ms but you are seeing 500 Internal Server errors, that's a very bad thing, because that's what the users will see. A quick error is still an error.

Know the targets for load. Know the targets for measuring performance. The objectives must be expressed as something measurable and indisputable.

The Specification

Where do you get the load goals? Often there are specifications that define the number of users expected in a particular period of time, and it is assumed that everyone on the product team is concerned with meeting the expectations in the specification, because stakeholders typically view the spec as a contract.

Find the Peak

When there is no spec, developers normally have a plan for the number of users they want the app to support without slowing down or generating errors. I recommend looking at the existing server logs to find the peak times of traffic (a log-parsing sketch appears below). If you find that 8:00 a.m. EST brings 10x the normal traffic, start there. Let's say that your web application has 1,000 users log in between 7:55 and 8:15 a.m., and they hit an average of 5 pages. I would set my load testing goal to be somewhere around 2,500, because I want to know how the system handles 2.5x the regular peak traffic. It may not get there, but as the load tester, I want to know where my app is stressed to failure.

Current Traffic Plus Some

If you have no existing server logs because you are developing a brand new application, then talk to the marketing people. I guarantee there is a VP of Marketing (or similar title) in your company who has a number. That person has been assigned an objective for driving traffic to the site through various marketing activities such as advertising campaigns. In fact, many times that site traffic objective will be one of the measurements used during that marketing person's job evaluation.

Make a Friend in Marketing

Can you think of a better way to endear yourself to that VP than to sit in her/his office and strategize beforehand about handling that volume? They are marketing and probably just assume the site will take the traffic with no problem. They probably won't sit around and ponder your load testing metrics, but they will definitely have an idea of what they expect from the marketing perspective. Also, they understand measurements. Their metrics may be about conversion rates, dollar-to-lead ratios, and views per click. Make no mistake: they understand measuring activity and results.

Whatever you do, find out who is writing the check. Who is the financial sponsor? Where is the money coming from that is paying for the testing? Identify that person, and other aspects of the project start to come into focus (surprise). I suggest you interview that person, maybe more than once. Figure out what they need from the site. What do they need from the application? How does it impact their life? If it fails completely, where do they hurt the most? Clearly and precisely understand the pain related to this site's performance.

It wouldn't hurt to check with the legal department too.

By reaching out to these people proactively, they will view you as a leader and an ally in the geek department. That is a good thing. While you are there, ask them what other expectations they have. Write it all down. You should try to use any of that information in the development of your load testing requirements.

Resource Utilization

Here is where we geeks get excited. Server-side metrics seem more scientific, and thus more interesting, to programmers and engineers.

Is the database server's memory more than 80% consumed when the system is supporting 12,000 kilobytes per second of throughput? How much of the web servers' CPU is being used when we are handling 350 requests per second? Those are cool questions for us nerds. More importantly, those are metrics you can use in your success criteria. I encourage your team to brainstorm about each major component of your system, with the output of the exercise being goals for measurements such as percentage utilization of memory, CPU, and network bandwidth.

Some people include stress testing when they discuss load testing, and stress testing is about breaking the system. In light of discussing resource utilization, you should consider how any component of the application's architecture affects the overall performance. What happens if a disk drive dies during heavy web traffic? How do your bottlenecks change when one of the load balancing machines must be rebooted?

When defining success criteria for your load tests, be sure to identify constraints in your system and set some targets for extreme conditions.
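Here is a minimal sketch of the peak-finding idea in Python. It assumes a common-format access log (timestamps like [20/Aug/2010:08:03:12 -0600]) and counts requests per hour; the log file name is hypothetical:

```python
from collections import Counter

def peak_hour(log_path):
    """Count requests per hour in a common-format access log and
    return the busiest hour and its request count."""
    hours = Counter()
    with open(log_path) as log:
        for line in log:
            # The first 14 characters inside the bracket identify
            # the date and hour, e.g. "20/Aug/2010:08".
            start = line.find("[") + 1
            hours[line[start:start + 14]] += 1
    return hours.most_common(1)[0]

hour, requests_seen = peak_hour("access.log")  # hypothetical log file
print(f"Peak hour {hour}: {requests_seen} requests; "
      f"test target = {int(requests_seen * 2.5)}")
```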
Conclusion

Invest the time early to clearly understand what is going to make or break your load testing efforts. Get others involved. Define the measurements for input/load and for output/performance. You need a stake in the ground, even if it is your best guess.

I strongly recommend that you understand which metrics are going to make your boss happy. Don't overlook the other important stakeholders who reside in other departments like marketing or sales. Having your ducks in a row will always benefit you personally as others assess your success.

I have heard it said that response time is a user concern, resource utilization is a system concern, and throughput is a business concern. That sounds clever, but in my opinion it's a bit too trite to be useful. However, the point applies generally if it makes you realize how important it is to get different perspectives. You as a system architect, programmer, tester, or performance engineer will not have the same point of view on load testing as the Program Manager.

As for load testing success, the good news is that it is objective. Measurable is unambiguous. Set the goals upfront, and your job is going to be much easier. A minimal sketch of turning such goals into an automatic pass/fail check follows.
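Here is a minimal Python sketch that turns the example goals above (5,000 concurrent users, sub-second average response, under 0.3% errors) into an unambiguous pass/fail check; the metric names and the sample run are hypothetical:

```python
# Hypothetical thresholds agreed with stakeholders up front.
CRITERIA = {
    "avg_response_ms": 1000,   # sub-second average response time
    "error_rate_pct": 0.3,     # less than 0.3% errors
    "concurrent_users": 5000,  # load the system must sustain
}

def evaluate(results):
    """Compare a test run's measurements against the agreed criteria
    and return an unambiguous pass/fail per metric."""
    return {
        "response time": results["avg_response_ms"] <= CRITERIA["avg_response_ms"],
        "error rate": results["error_rate_pct"] <= CRITERIA["error_rate_pct"],
        "load reached": results["concurrent_users"] >= CRITERIA["concurrent_users"],
    }

run = {"avg_response_ms": 850, "error_rate_pct": 0.1, "concurrent_users": 5200}
for metric, passed in evaluate(run).items():
    print(f"{metric}: {'PASS' if passed else 'FAIL'}")
```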
Categories: Load & Perf Testing

Load Testing Quote for July 30, 2010

Fri, 07/30/2010 - 07:58
[Image: endurance testing and the duration of load testing]

On the Software QA and Testing Resource Center site, there is a post about Estimating Test Duration (http://sqa.fyicenter.com/FAQ/Stress-Testing-Web-Sites/Estimating_Test_Duration.html). Here is a quote for your educational benefit:

"The duration of the peak is also very important - a Web site that may deal very well with a peak level for five or ten minutes may crumble if that same load level is sustained longer than that. You should use the length of the average user session as a base for determining the load test duration."

For some of my thoughts on endurance testing (or soak testing), see the post on Long Duration Load Tests (http://loadstorm.com/2010/long-duration-load-tests).
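Here is a minimal sketch of how you might estimate that base from a common-format access log in Python, treating the span between each client IP's first and last request as its session; the log file name and format assumptions are illustrative:

```python
from collections import defaultdict
from datetime import datetime

def avg_session_seconds(log_path, fmt="%d/%b/%Y:%H:%M:%S"):
    """Estimate average session length as the span between each
    client IP's first and last request in the log."""
    seen = defaultdict(list)
    with open(log_path) as log:
        for line in log:
            ip = line.split()[0]
            # Timestamp sits in brackets; drop the trailing timezone.
            stamp = line[line.find("[") + 1:line.find("]")].rsplit(" ", 1)[0]
            seen[ip].append(datetime.strptime(stamp, fmt))
    spans = [(max(t) - min(t)).total_seconds() for t in seen.values() if len(t) > 1]
    return sum(spans) / len(spans) if spans else 0.0

base = avg_session_seconds("access.log")  # hypothetical log file
print(f"Average session ~{base:.0f}s; size the load test duration from this base")
```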
Categories: Load & Perf Testing

Long Duration Load Tests

Sun, 07/25/2010 - 13:11
I've known some web product managers who require at least one 72-hour load test on a monthly basis. In fact, a couple are customers of LoadStorm.

The reasoning behind conducting such long tests is that memory leaks can be very difficult to find in a one-hour test but should be detectable after 72 hours. Comparing memory consumption in the first hour to the 72nd hour under the same load may expose defects in the code that leave memory wasted.

I call this endurance testing. Many developers refer to it as soak testing or reliability testing. Your load testing success goals would do well to include at least one long-duration test, because it checks reliability in a way that nothing else can.

Wikipedia defines soak testing as: "Soak testing involves testing a system with a significant load extended over a significant period of time, to discover how the system behaves under sustained use."

Mike Kelly at SearchSoftwareQuality believes endurance testing and soak testing are slightly different: "It's similar to endurance testing and burn-in testing, and is often used interchangeably. For me, the key distinction to soak testing is that load remains high throughout the run, while with an endurance test my load may vary based on the usage model I'm executing."

For my purposes, I'll use these terms interchangeably. The only difference seems to be the volume that Mike uses; I view the duration as the significant factor that makes a soak test.

It's common to overlook endurance testing simply out of ignorance of its importance. Even worse, programmers may think too highly of their skills to consider memory leaks a possibility. What is probably the most significant reason for skipping soak testing? Budget. Running a load test for a day or two consumes resources; thus, it's viewed as expensive. To justify it, calculate the cost of shutting down the system to recover from a failure. It might be easy to get your stakeholders to agree to include endurance testing in your success criteria when they factor in the downtime.

How long can your web application process heavy load without errors or failures? Best to find out now rather than later.
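Here is a minimal sketch of the first-hour versus last-hour comparison in Python. It assumes you sampled server memory once a minute during a 72-hour soak test; the samples and growth threshold are hypothetical:

```python
def leak_suspected(memory_samples_mb, threshold_pct=10):
    """Compare average memory use in the first and last hour of a
    soak test; steady growth under a flat load suggests a leak."""
    first_hour = memory_samples_mb[:60]   # one sample per minute
    last_hour = memory_samples_mb[-60:]
    first_avg = sum(first_hour) / len(first_hour)
    last_avg = sum(last_hour) / len(last_hour)
    growth_pct = (last_avg - first_avg) / first_avg * 100
    return growth_pct > threshold_pct, growth_pct

# Hypothetical samples: memory creeping from ~512 MB toward ~900 MB.
samples = [512 + i * 0.09 for i in range(72 * 60)]
suspect, growth = leak_suspected(samples)
print(f"memory grew {growth:.1f}%; leak suspected: {suspect}")
```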
Categories: Load & Perf Testing

Web Performance Testing

Sat, 07/17/2010 - 11:30
There is a difference between performance testing a traditional client-server application and performance testing a web application.

Web performance testing involves primarily HTTP traffic. There are latencies involved with Internet technologies that you shouldn't encounter in an internal software implementation.

Web performance testing can be the measurement and analysis of a single web page: taking apart the download times of each individual resource, such as an image, an HTML document, or a stylesheet. Web performance testing should also involve measuring DNS lookup times and how long it takes to get a connection to the server.

Rendering is a key element of understanding the performance of a web page. Specifically, how long after the user makes a request does the browser start to paint something?

It is also critical to measure the actual server processing time for a request. The browser-side perspective of the end user is important, and that is where large images or too many files can cause a poor user experience. Additionally, if the database is a bottleneck that results in 5 seconds to put together a dynamic page, that is a performance problem.
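Here is a minimal sketch of that decomposition in Python, timing the DNS lookup, the TCP connection, and the full fetch separately. It uses the third-party requests library, and example.com stands in for your site:

```python
import socket
import time
import requests  # third-party: pip install requests

host, url = "example.com", "http://example.com/"

t0 = time.perf_counter()
socket.getaddrinfo(host, 80)                  # DNS lookup
t1 = time.perf_counter()
socket.create_connection((host, 80)).close()  # TCP connect
t2 = time.perf_counter()
resp = requests.get(url)                      # full request/response
t3 = time.perf_counter()

print(f"DNS lookup:  {(t1 - t0) * 1000:.0f} ms")
print(f"TCP connect: {(t2 - t1) * 1000:.0f} ms")
print(f"Fetch ({len(resp.content)} bytes): {(t3 - t2) * 1000:.0f} ms")
```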
Categories: Load & Perf Testing

Load Testing Disconnect

Fri, 07/16/2010 - 10:51
[Image: load testing disconnect]

CTOs Don't Get It

One of the most disruptive events at organizations is the disconnect between decisions made by managers and the views of those decisions held by employees. Does this situation sound familiar?

Your CTO or CIO makes a decision to forego load testing until after the launch of your web application. Your team responds by talking about how the higher-ups don't get it, they don't understand the issues with system performance, they are making shortsighted decisions, and they are ignoring the product development team. You have seen this before: management says postpone, and they mean cancel.

Before CTOs were managers, they were individual employees. Most became managers because of their skills as individuals in technical problem solving and decision making. So why the disconnect between your team and the CTO?

Let's break it down so we as load testers can effect change.

Money, Money, Money

Most companies are always budgeting and adjusting budgets. The senior executives are paid the big bucks to make sure the investors are happy with results, and the most important result is bottom-line profitability. Profitability is driven by income and expenses, so every dollar saved on the expense side is a win for the executives (and investors).

Load testing is mistakenly viewed as non-critical. It is an easy target for CTOs in budget cutting. Bugs are visible and immediately get the attention of users, which quickly reaches the CEO and other top managers. It rolls downhill to the CTO very quickly. Bugs are bad. Bugs are universally understood.

[Image: load testing money lost]

However, if the site has a 5-second response time instead of a 2-second response time, rarely does the CEO get involved. It isn't as noticeable. It isn't mission critical. Thus, improving performance isn't nearly as high a priority as fixing bugs. The conclusion seems obvious and logical: put more resources into functional testing than into load testing. In fact, CTOs cut load testing out of the budget submitted by the development team without having to explain that decision to anyone. It's easy for them. Who is going to raise a stink about it? The testers? The QA Manager? The Product Manager? The programmers? You? Probably not.

It is too hard to defend! It's tough to make an argument that gets attention. You put yourself out on a limb if you stir up trouble about "wasting money" on performance testing. You don't want to get called into the CTO's office to explain the importance of load testing and system performance under heavy traffic.

So what I've seen is a dynamic where the testers and/or developers take an attitude of, "They will find out when the system crashes, and I'll just sit here and smile." Most of us know that we are right about the importance of performance, so if the managers are out of touch and don't listen to us, then they deserve what they get.

Of course, that isn't good for our company. Translate it to, "Ignoring load testing is bad for my job security." Now it clearly becomes something that can cost our company credibility, money, and ultimately jobs. Hmmm... that isn't worth being smug about. We need to do something about it.

The Simple Psychology of I Versus We

When your CTO was a programmer, the questions often asked were "What should I do?" and "How can I solve this problem?" As a manager, these evolve into "What should WE do?" and "How do WE solve this problem?" The consequences that affect the WE are much more complicated than those that affect the I. Moreover, not everyone who is a part of that WE will be equally affected; there will be different levels of benefactors based on decisions that include WE, and these benefactors have opinions.

The more senior managers become, the more distant they may get from the data and information needed to validate an assumption. The WE produces many views that can distort the validation. Without a good process to validate assumptions, a senior manager risks moving forward on an invalid assumption. The CTO needs you! You need to step up and contribute to the conversation about load testing.

It's no secret that poor communication is probably the most significant factor contributing to a disconnect about the importance of system performance. If the assumptions of the decision maker differ from those of the people affected by the decisions, then the conclusions will likely differ too. If there is no communication about what those assumptions are, a disconnect is almost certain to occur.

Let's consider an example. The CEO has made growth commitments to the investors. In a big off-site executive planning weekend, the CEO beats up the head of sales and marketing to get projections of higher revenue. Marketing budgets are increased to get more leads and closed deals. Innovative and expensive advertising campaigns will drive lots of traffic to your website, where cool interactivity will create better qualified prospects. The content management system underpinning all of your web marketing, CRM, and prospect database becomes critical to the CEO's success. However, the execs don't realize the CMS has never been load tested or tuned for high performance. The CTO isn't concerned because he spent a bunch of money on the CMS and has lots of good hardware in the data center to run it on. In the CTO's thinking is the fact that the CMS vendor provided cool performance charts in their sales brochures showing very high volumes of traffic supported. Disconnect!!

Get Your CTO and Managers on Board

Communication between your team and the senior managers should include the assumptions being made and how they were validated. Show them how web performance tuning affects profit (http://loadstorm.com/2009/web-performance-tuning). Get the VP of Marketing involved. Explain how slower response times have been proven to lower revenue. Teach them the importance of load testing because it turns into money.

[Image: load testing helps profit]

Educate them. Gather up your performance testing statistics (/performance-testing-statistics) and present them to the CTO. Remove the disconnect.

For example, explain to the CTO that just because the performance was good in the vendor's environment doesn't mean the system will respond well in yours. The hardware is different, the web server is configured differently, the database is not tuned, and so on. We MUST load test the CMS ourselves. It's a safe bet that we won't get half the load that the vendor claims. Tuning and testing will take several cycles for us to make our system perform well. If the CTO has real experience with software and operational systems, this should make a lot of sense to her or him.

Give them articles about performance failures on websites (http://loadstorm.com/category/tags/failures). Let them see the downside of ignoring load testing. They should understand from examples and studies that there is a clear correlation between performance tuning and company profits. The marketing folks will be motivated by campaign results, so show them how traffic spikes can bring down a site, which loses customers forever. I bet they quickly see in their minds how the dollars are walking out the proverbial door.

Load Testing and Profit

When executives see the connection between profit and load testing, they will start listening to you. Now THAT'S job security!
Categories: Load & Perf Testing

Even Large Companies Have Performance Problems in 2010

Wed, 07/14/2010 - 08:14
I found this interesting because it shows how even large companies running their own "safe" data centers can experience massive performance failures. SaaS and cloud providers may get media attention for outages, but isn't it somewhat hypocritical of Fortune 500 CTOs to claim that their internal systems are safe? Come on, let's be real. Systems have been experiencing poor performance for 60+ years, and hardware will continue to fail, software will always have bugs, architects will overlook weaknesses, and CEOs will annually cut budgets for performance engineering.

Twitter

If you've used Twitter much in the past year, then you aren't surprised the web application has been failing. The Fail Whale is famous. Make that infamous. Twitter suffered more actual downtime than usual through much of June. Some reports say that we should expect more outages through July.

The Fail Whale officially spoke to the media and blamed the downtime on the World Cup. Yeah... right. It's been apparent for a long time to most of us that Twitter is a victim of its own success. Too many users, too many tweets, too rapid growth.

Outages began on June 11 and continued with poor site performance for about 5 days. They are working on improvements, but Twitter has publicly admitted to internal network deficiencies. I hope they get their system tuned soon. It would be nice if they could stay ahead of the growth curve too, but I live in the real world and expect to see the Fail Whale periodically forever. Or at least until they get a real revenue model.

Intuit

Intuit is well known, to say the least. Their TurboTax Online runs hundreds of thousands of concurrent users around tax time. Did you know that they are still using the original C++ application, with each user's own web-enabled copy instantiated on the server? Yep, I know. Crazy.

I guess I'm not shocked that Intuit had an overnight outage that prevented SMB customers from using QuickBooks Online, Quicken, TurboTax Online, and QuickBase through the night of June 15. No credit card payments, no taxes calculated, no books reconciled for an estimated 300,000 small businesses. It was reportedly caused by a power failure that occurred during routine maintenance.

NetSuite

Popular and aggressive SaaS ERP provider NetSuite went down on April 26 for several hours. Its apps were inaccessible to customers worldwide. According to NetSuite, the cloud applications were down for under an hour, but reports say customers had performance problems for at least 6 hours. NetSuite blamed a network issue for the outage. They publicly stated that they had no idea how many customers were affected. Duh. This is a NYSE company. My guess is that they just didn't want to put the large number in print; no number does less credibility damage than hanging the truth out there.

Microsoft Live

Ok, I'm a skeptic about any Microsoft software reliability. Since 1983 I've been fighting the blue screen of death. There are MS haters, but I'm not really in that camp. I have been a MS Certified Professional and my company has been a MS Certified Solution Provider. That said, I love my Mac.

So when, on January 28, Microsoft Online Services had poor performance and an outage for such offerings as the Microsoft Business Productivity Online Standard Suite (BPOS), I wasn't shocked. A MS post stated it was a network infrastructure problem, but it didn't say how long the poor performance lasted or how many customers were affected. You don't think that Windows was involved, do you? Me neither.

A couple of weeks later, MS's Windows Live services were reported down on February 16. Not shocked this time either. Keyword: Windows. What was also not surprising is that they blamed hardware. However, doesn't it seem a bit trite for the world's largest software company to describe the cause of a global online services outage as a server failure?

According to Microsoft, there was an issue with the Windows Live ID service: log-ins failed for some customers, which increased the load on the remaining servers. Things were back to normal after about an hour, Microsoft said.

Slightly understated, don't you think? I'm sure it had nothing to do with the instability of Windows.

The Planet

The Planet is one of the world's largest web hosting companies and server hosting providers. It serves more than 20,000 businesses and more than 15.7 million websites, and has more than 48,500 servers under its management.

"On May 2 at approximately 11:45 p.m. Central time, we experienced a network issue that affected connectivity within The Planet's core network in Houston that may have prevented your servers from communicating to the Internet," Jason Mathews, overnight technical support supervisor for The Planet, wrote in one of the community forums. "Network connectivity service was fully restored in less than 85 minutes; however, your servers may have been impacted by this incident."

"In our ongoing analysis of the occurrence, we determined that one of four border routers in Houston failed to properly maintain standard routing protocols," Mathews wrote. Yep, you are down. Poor performance. Dang hardware.

A second outage occurred on Monday morning. "We believe the network issues this morning are unrelated to connectivity problems customers in [Houston] may have experienced around 12 a.m. CDT." Around 9:30 a.m. CDT, The Planet noted that services had been fully restored, and analysis found that a circuit between Dallas and Houston caused the Monday morning disruption. Yep, hardware again.

Sage

Performance problems hit Sage North America in June, including a 22-hour outage on June 1-2 that put its order-entry system, online CRM, and internal systems such as email out of commission, according to a report on TheProgressiveAccountant.com. Sage reported the cause was in its storage area network. Not sure if that means hardware.

EMC

EMC launched Atmos Online in May 2009 as a scalable online storage infrastructure built on its Atmos technology. On February 17, it went down.

Users reported messages such as "EMC Atmos Online Temporarily Down For Maintenance" and "No suitable nodes are available to serve your request."

EMC stated that the outage was caused by maintenance issues, but did not elaborate. The day before, EMC had announced a bunch of cool updates. Guess there was a breakdown in the implementation of those cool updates. Call it maintenance if you want to, but I suspect the budget for testing was cut on this project. I'll even go so far as to say they probably didn't performance test at all.
Categories: Load & Perf Testing

Load Testing Should NOT be Like This

Tue, 07/13/2010 - 15:13
[Image: duct-tape-baby.jpg - load testing duct tape]

I've seen too many people ignore load testing until it was too late. Some don't test at all. Some treat it with a duct tape approach: test a little here, test a little there, no planning, no consideration for what a real-world traffic scenario looks like, everything sloppily thrown together at the last minute before going live in production.

That's why I get so many calls from web developers in a panic. They probably have a CTO or EVP of Marketing looking over their shoulder the night before a big site launch. Too bad they didn't give enough priority to testing the performance of the system.

The duct tape approach to load testing will make you look as irresponsible as the parents of the little girl in the picture. Plan for web performance testing and tuning early in your project. It's worth it, because web performance tuning (http://loadstorm.com/2009/web-performance-tuning) correlates directly to revenue.
Categories: Load & Perf Testing

Load Testing Questions

Fri, 07/09/2010 - 15:22
In many situations, the correct answer is to ask questions. Software developers, testers, and project managers have a tendency to tell rather than ask because they have so much knowledge and expertise in the subject domain. However, before you begin load testing, you need to know as much as possible to help you create better scenarios and scripts.

For instance, you should understand how the system will be used. Ask plenty of questions of project stakeholders and write down the answers.

- What response time would the CEO consider acceptable?
- Has our development team built this app from scratch?
- Are there any previous performance reports or datapoints, such as benchmarks?
- Has anyone set goals for performance?
- What measurements do stakeholders want to see?
- How does the project manager define success for this site's performance?
- Can I get a copy of any system documentation?
- Will the site be mostly for anonymous browsing?
- What potential is there for spikes in traffic?
- How often will the content change?
- Will content such as blog posts be candidates for the Slashdot Effect?
- Is it designed for internal company research?
- Does it primarily facilitate collaboration between trading partners?
- Are there planned marketing campaigns to promote the site?
- B2B or B2C?
- Do you expect the company to promote the site through social media?
- Will employees all sign on at the same time each day?
- Are there regular times of higher usage, such as end of month?
- Can irregular events such as webinars or advertisements drive waves of traffic?
- How many people does the marketing department anticipate coming to the site monthly? Daily?
- Does the user interface serve as a front-end to a non-integrated back-end system such as SAP?
- Is single sign-on or another identity access management system involved?
- How fast should a page load?
- Does the VP of Marketing have any idea of the correlation between site performance and revenue?
- How does the executive team define success for the average response times of this application?
- Are any open source modules being used?
- What is the primary technology stack? LAMP, MS, other?
- How many coding languages are we using? Which ones?
- Which database engine is deployed? Oracle, SQL Server, MySQL, other?
- Are we hosting this in our own data center? Co-located? Dedicated hosted servers? Cloud?
- What are the logical layers in our architecture?
- What are the physical layers in our architecture?
- How, if at all, are we configuring load balancing?
- Is caching turned on in the application? Web server? Database or messaging?
- Does the web server use a compression technology? Is it turned on?
- Will we be testing against the production environment?
- If we have a staging environment for testing, what is different from production?
- How often do we anticipate releasing new versions into production?
- Do we want to measure performance of each release to identify any degradation?
- Who should be on the email list to receive load test reports?
- Who is John Galt?

That's a decent list to start. Please submit your own via the comments.

The idea is to get a broad view of the context in which the system will operate. Ask! Listen!
Take notes!

Try it. I promise it will help you create better plans and scenarios for your load tests, and your stakeholders will be happier as well. The sketch below shows one way to turn the answers into a first-cut test plan.
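As a minimal illustration of where those answers lead, here is a sketch that sizes a test from a few stakeholder numbers. Every name and figure in it is hypothetical and not tied to LoadStorm or any other tool.

```python
# Minimal sketch: turn stakeholder answers into a first-cut load test plan.
# All names and numbers here are illustrative, not from any particular tool.

answers = {
    "monthly_visitors": 120_000,       # from marketing
    "peak_multiplier": 5,              # spikes: campaigns, end of month
    "acceptable_response_secs": 2.0,   # what the CEO calls "fast enough"
    "traffic_mix": {"browse": 0.70, "buy": 0.15, "download": 0.10, "admin": 0.05},
}

# Rough sizing: assume ~30 days/month and a 10-hour busy window per day.
avg_hourly = answers["monthly_visitors"] / (30 * 10)
peak_hourly = avg_hourly * answers["peak_multiplier"]

print(f"Average busy-hour visitors: {avg_hourly:.0f}")
print(f"Peak target for the test:   {peak_hourly:.0f} visitors/hour")
for scenario, share in answers["traffic_mix"].items():
    print(f"  {scenario:<9} {share * peak_hourly:.0f} virtual users/hour")
print(f"Pass/fail: avg response <= {answers['acceptable_response_secs']}s at peak")
```

Crude arithmetic, sure, but writing the answers down as data forces the stakeholders to commit to numbers you can actually test against.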
Categories: Load & Perf Testing

Gary Busey's Tuesday Load Testing Notes

Tue, 06/29/2010 - 04:31
I have always liked Gary Busey (http://en.wikipedia.org/wiki/Gary_Busey) in movies and on TV. He has starred in everything from Kung Fu to The Buddy Holly Story to The Simpsons to Gunsmoke to Saturday Night Live to Lethal Weapon. His recent appearance on Celebrity Rehab with Dr. Drew showed just how much brain damage he suffered in that motorcycle crash. In some ways, Gary's life is similar to a good stress test: the volume ramped up until the system could not respond appropriately to the requests made of it.

On to the good stuff about system performance testing.

Performance Metrics Tied to Business Metrics

Mike Kelly at SearchSoftwareQuality.com has an article about developing a working understanding of the business drivers for application performance. He postulates that a skilled performance tester builds better tests by understanding performance goals, and understands those goals by knowing what the business truly needs. He also believes this understanding makes test results more actionable. The key question to ask is: what do the project stakeholders care about? Read more about application lifecycle performance testing and monitoring strategies at http://searchsoftwarequality.techtarget.com/tip/0,289483,sid92_gci1508137_mem1,00.html?track=sy280.

Agile Performance Testing

Maybe the best point in Mike's post is: "The best time to be thinking about possible performance issues is when you're actively designing and writing code. You're still close to the code. You know what it does. You know how it works. For each day away from the code, the fuzzier it gets in your head. The more developers committing to the codebase, the less you really know about the code you're interacting with."

Sound like Agile? Yep, test early and test often. I've asked Agile gurus at conferences about performance testing early in the development cycle. All of them acknowledge that frequent perf tests should be run from the beginning of coding, and that every build should automatically run a load test. However, when you press them on how often their projects actually follow this model, they eventually admit that the practice doesn't meet the theory. Rarely, if ever, do teams create perf scripts and execute them until the last 20% of the project. It's easy to stand up behind a podium and talk to a PowerPoint slide about what you SHOULD do, but in reality teams don't put a high priority on performance until all the pieces of code are pulled together.

Unit Versus Comprehensive System Tests

A common misconception is that there is only one form of performance testing: hit the whole system with volume and see how it responds. That's one type, but in many cases it is much more effective to test a component on its own. For example, let's say you are coding a module that inserts data into the database. You should examine the efficiency of the way you coded the process before you test the performance of all the moving pieces that call your module. It will be harder to detect the root cause of a bad SQL statement later when it is hidden in the big pile of other database hits.

Mike makes an argument for using tools that let you test the speed of units of code. Again, sounds good. I support that idea and recommend it to every developer. You may be able to find some expensive queries or poorly written loop structures. That will be a tremendous help when you are testing the speed and responsiveness of the system as a user sees it, where the UI layer is communicating with the application layer which is communicating with the database layer. Identifying resource hogs and slow functions is possible early, and fixing those in unit testing is obviously best. Tune the individual parts before assembling them; it has a much higher probability of producing a well-performing system.
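To make the unit-level idea concrete, here is a minimal sketch of a performance check on a single insert function. The function name, table, and time budget are hypothetical; sqlite3 simply stands in for whatever database you use.

```python
# Minimal sketch of a unit-level performance check, assuming the unit under
# test is a function that inserts rows. sqlite3 stands in for your database.
import sqlite3
import time

def insert_orders(conn, rows):
    # The unit under test: a batch insert.
    conn.executemany("INSERT INTO orders (sku, qty) VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (sku TEXT, qty INTEGER)")

rows = [(f"sku-{i}", i % 10) for i in range(10_000)]
start = time.perf_counter()
insert_orders(conn, rows)
elapsed = time.perf_counter() - start

print(f"{len(rows)} inserts in {elapsed:.3f}s ({len(rows)/elapsed:,.0f} rows/s)")
# Fail loudly if the unit regresses past the budget you picked for it.
assert elapsed < 1.0, "insert_orders blew its performance budget"
```

A check like this can run with the rest of the unit test suite, so an expensive query shows up in the build where it was introduced instead of hiding until the full-system test.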
Realistic Test Scenarios for User Types

Mike talks about performance profiles, and I tend to think about the different ways people will use the system. In a typical web app, you might have 70% of your traffic from anonymous lookers viewing your product catalog or reading articles. Maybe 15% of your traffic is going through the shopping experience and buying something (if you are lucky!). Perhaps 10% of your traffic is downloading white papers or filling out web forms to register for your newsletters. You probably also have a few internal employees signed in to modify content or update the product catalog. All of those are hypothetical of course, but they represent realistic user patterns. Your test plan should reflect these.

It is also true that during Christmas the percentage of credit card processing could go up by 100% over normal, and your application may have spikes in certain kinds of traffic at end of year. I spoke with PerfDan at the STP conference last year over lunch. He said the TurboTax Online app could see a million users per day during the second week of April. Hmmm, predictable. My suspicion is that most of them are in a panic too!

One of our clients recently released a new book. Realizing that they needed to load test for the anticipated volume, they started building tests with LoadStorm. What they didn't consider was that most of the traffic would NOT come through their home page to make the purchase. They were sending a large email blast with a link to a specific landing page, and their test wasn't accounting for that. Thus, their original scenario was essentially useless for predicting site performance. Their home page was horribly slow, with hundreds of images and dozens of unnecessary external JavaScript libraries, so the system stressed out at low volume because the test virtual users bottlenecked at the home page. In reality, they were fine because the landing page was simple and loaded quickly without a bunch of crap. We were happy to coach them to improve the home page, but the real point is that traffic patterns are situational, seasonal, and specific to user types.

Mike makes a good suggestion about finding potential performance problems before they bite you in production. He recommends usage modeling, and I agree. More importantly, you should create multiple test plans to reflect each of your usage models. Test with different scenarios together to reflect what could plausibly happen under certain circumstances, even if your actual server logs don't show it happening right now. Usage modeling isn't always just about testing different worst-case scenarios, either. It can also mean looking at ways to utilize excess capacity in the off season, or testing whether dynamic provisioning will support certain growth scenarios. You might also try different usage models just so you can build a taxonomy of load profiles; later on, when you're monitoring production, you can go back and correlate what you're seeing in production to the load patterns from your testing phase. Sometimes that can provide insight into potential problems: "Oh yeah, I remember seeing that before. Last time we saw that we were…."
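A scenario mix like the hypothetical split above (70/15/10, with the internal users rounded to 5% here) is easy to express as weighted random assignment. A minimal sketch follows; the scenario names are made up, and no particular tool's API is implied.

```python
# Minimal sketch: assign virtual users to scenario types by a weighted mix.
# The percentages mirror the hypothetical mix above; names are illustrative.
import random

TRAFFIC_MIX = {
    "browse_catalog": 0.70,
    "shop_and_buy": 0.15,
    "download_whitepaper": 0.10,
    "admin_content_edit": 0.05,
}

def pick_scenario(rng: random.Random) -> str:
    # random.choices supports weights directly.
    names = list(TRAFFIC_MIX)
    return rng.choices(names, weights=[TRAFFIC_MIX[n] for n in names], k=1)[0]

rng = random.Random(42)  # seeded so a test run is reproducible
assignments = [pick_scenario(rng) for _ in range(1_000)]
for name in TRAFFIC_MIX:
    share = assignments.count(name) / len(assignments)
    print(f"{name:<22} {share:.1%}")
```

Seeding the generator keeps runs reproducible, so two executions of the same plan exercise the same mix and their results stay comparable.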
Load Testing - Putting the Pieces Together

One of my favorite hobbies is to trace a web transaction all the way through a system. It isn't always easy. Sometimes I get stuck and need to bring in developers smarter than me; I'm the Omega Geek. It's a challenge I recommend for you. WARNING: it is tough to do. Worth it? Yes. Easy? No.

It takes a lot of time and energy to walk through the depths of your application code, database queries, stored procedures, UI code, web services, etc. But I tell you, if you ever do this, you will become enlightened about how all your system components interact. You will know which piece of code calls the next, which methods are heavily used, which queries are unique to a particular button click, and which third-party library is referenced yet unused.

It will help you be the best load tester on your team, because you will have the best insight into where bottlenecks can form. You will have a perception of the inner workings that is an outstanding jump start when you begin trying to tune the system performance.

If you work with your whole team (developers, architect, product manager, operations, QA), then everyone learns faster. For example, we have an architect who understands the implications of tweaking settings in a message queuing system, which is squarely in my area of ignorance. He has shown me ways to employ the messaging technology to get a 1,000% throughput increase. I wouldn't have figured that out so quickly (or ever) if I had not involved him.

I like Mike's comment about the team concept of breaking down the system into one transaction flow: "It's almost like practicing a fire drill for a production issue. Teams that can do this analysis well can respond to production issues well. It's almost like a capstone project – tying together all your models, metrics, and logging into one neat package."

Here are some good monitoring ideas he presents too: "Look at the profile of the servers while they are processing that transaction. Understand what the impact might be to the end user. Try to become the world's foremost expert on what's going on in the system, for that thin sliver of time, for that small transaction."
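One lightweight way to practice that walk-through is to tag a single transaction with a correlation ID and time it at each layer. Here is a minimal sketch; the layer functions and their timings are hypothetical stand-ins for your real stack.

```python
# Minimal sketch: follow one transaction through the layers by tagging it
# with a correlation ID and timing each hop. Layer functions are hypothetical.
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("trace")

def timed(layer, correlation_id, fn, *args):
    start = time.perf_counter()
    result = fn(*args)
    log.info("%s %s took %.1f ms", correlation_id, layer,
             (time.perf_counter() - start) * 1e3)
    return result

def query_db(sku):          # stand-in for the database layer
    time.sleep(0.005)
    return {"sku": sku, "qty": 3}

def apply_rules(record):    # stand-in for the application layer
    time.sleep(0.002)
    return {**record, "total": record["qty"] * 9.99}

def render_page(model):     # stand-in for the UI layer
    time.sleep(0.001)
    return f"<html>{model}</html>"

cid = uuid.uuid4().hex[:8]  # one ID follows the whole transaction
record = timed("db ", cid, query_db, "sku-42")
model = timed("app", cid, apply_rules, record)
page = timed("ui ", cid, render_page, model)
```

Primitive as it is, a consistent ID in every log line is what lets you reassemble that "thin sliver of time" across the UI, application, and database layers.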
Some Things Haven't Changed

Sure, web applications are much more complex in 2010 than they were in 1998 when my company started building them. But in some ways, this whole performance testing issue is just like Dr. Hood taught us back in college. He made us follow a single processing thread through the VMS operating system to understand its impact on the VAX and on our application. In many ways it is easier today because the tools for monitoring and testing are better; in other ways it is more challenging because there are so many moving parts to explore.

The bottom line, in my not-so-humble opinion, is that the best performance testing involves a lot of elbow grease. You must dig in, and dig in deeply. It is not a trivial exercise, and you will hit roadblocks along the way that seem insurmountable. However, as long as your goal is to fully understand the inner workings of your application, and you have the tenacity to push through the complexity of the interactions between components, you will become a world-class performance engineer.

My last bit of advice: don't wait to start. Get going on a deep dive now, without first reading, researching, and studying all the documentation. You can find the answers once you are clear on the questions posed by your system. Dig in and see what wall you hit first. Then start asking people for help. Your colleagues, product vendors, and online forums are great resources to get you over that first challenge and on to the next one. Soon you will be amazed at how thoroughly you understand the performance aspects of your application and its environment.
Categories: Load & Perf Testing

Performance Testing FIFA Cloud Scalability

Sun, 06/27/2010 - 12:53
As we begin another week of summer, here are a few helpful links to articles on issues of interest to us load and performance testers. We at LoadStorm have been busy the past month with many new customers asking us for help.

We like people, and we like web performance testing. And we really like people that like web performance testing! Our focus is on providing the best-value load testing tool. That said, we often get requests from clients who want us to help them understand the best way to go about testing, and some want coaching as they figure out what to do about poorly performing web applications. In any case, we certainly enjoy working with customers to make their sites run better.

While we know that we aren't a fit for everyone, it has become clear over the past 18 months that we are helping a bunch of people. Thank you to each client that has sent us a testimonial or referred us to your geek friends. We sincerely appreciate the word-of-mouth support.

When we say things like, "Please let us know if we can help in any way," we mean it. Seriously. So don't hesitate to tell us if you have questions or need some coaching around performance testing.

In the meantime, we offer these links to other writers and vendors that may be useful to you as you tune & test your web applications.

Performance of the World Cup Site

Verifying FIFA World Cup Web Site against Performance Best Practices (http://blog.dynatrace.com/2010/06/04/hands-on-guide-verifying-fifa-world-cup-web-site-against-performance-best-practices/)

Andreas Grabner predicts that the World Cup site will fail. The site has poor performance to start with, and Andreas applies some engineering best practices from which all of us can learn.

"My analysis of the FIFA site shows that – once the World Cup starts next week and the site gets really hit by millions of users around the globe – there is a big chance the site will run into performance and scalability issues due to several best practices that my analysis shows the site does not follow. This failure causes load times of the initial page to take more than 8 seconds and requires downloads of more than 200 elements. These problems can easily be fixed by following the recommendations I highlight in this blog."

I especially appreciate how Andreas breaks down the Key Performance Indicators (KPIs) into groups and shows where each metric is a problem for the FIFA site. He makes great suggestions for where to look for improvement.

He gives the World Cup site a failing grade for overall performance. Looking at the four categories he uses for measurement, none got a good grade:

- Browser Caching: F – 175 images have a short expires header, and 4 have a header in the past
- Network: F – 201 requests in total, 1 redirect, 1 HTTP 400, and duplicated image requests on different domains
- Server-Side: C – 10 app-server requests with a total of 3.6s; analyze server-side processing
- JavaScript: D – use CSS lookups by ID instead of by class name
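That expires-header failure is usually the cheapest one to fix. Here is a minimal sketch using only the Python standard library for illustration; the file extensions and the one-month lifetime are arbitrary choices of mine, not Andreas's recommendations.

```python
# Minimal sketch: serve static files with far-future caching headers so
# browsers stop re-requesting them. Paths and lifetimes are illustrative.
from email.utils import formatdate
from http.server import HTTPServer, SimpleHTTPRequestHandler
import time

ONE_MONTH = 30 * 24 * 3600

class CachingHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        if self.path.endswith((".png", ".jpg", ".gif", ".css", ".js")):
            # Cache-Control for modern browsers, Expires for older ones.
            self.send_header("Cache-Control", f"public, max-age={ONE_MONTH}")
            self.send_header("Expires",
                             formatdate(time.time() + ONE_MONTH, usegmt=True))
        super().end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CachingHandler).serve_forever()
```

In production you would set these headers in the web server or CDN configuration rather than application code, but the headers themselves are the same.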
Cloud Does NOT Mean Infinite Scalability

Scale and Scalability: Rethinking the Most Overused IT System Selling Point for the Cloud Era (http://itexpertvoice.com/ad/scale-and-scalability-rethinking-the-most-overused-it-system-selling-point-for-the-cloud-era/)

Scott Fulton does a nice job dispelling some myths about cloud computing. Application performance is vital to companies today because of the dollars tied to websites, back office systems, and many forms of business automation. New start-ups on cloud infrastructures are quickly coming out of the gate with disruptive technologies that process huge volumes of data for an ever-increasing user base.

We geeks have been saying for years that you can't just throw hardware at the problem. Scott says:

"If you believe that a scalable architecture for an information system, by definition, gives you more output in proportion to the resources you throw at it, then you may be thinking a cloud-based deployment could give your existing system 'infinite scalability.' Companies that are trying out that theory for the first time are discovering not just that the theory is flawed, but that their systems are flawed… and now they're calling out for help."

Business people are waking up to the fact that scalability is as important to their success as capitalism and democracy. They usually don't know how important scalability is until their systems fail. After customers and revenue are lost, they look into the underlying causes and find their systems don't really have the performance capability they thought. So, in typical manager decision making, they use money to solve the problem by re-deploying on a cloud infrastructure to get much more horsepower. Unfortunately, it is NOT that simple. Cloud platforms are not magic. There are thousands of variables in the scalability equation, and the cloud, with its massive virtualization, is only one factor in improving performance.

Sometimes the solution is to re-architect the system; in fact, Twitter has re-architected their system 4 times already. An academic may tell us to plan for growth, think through all the possibilities, and produce the most scalable system from the beginning. Yeah, good idea. But it just isn't that easy, because there are too many moving parts, and many of those moving parts sit on platforms that are getting better, faster, and cheaper by the month. Scott tells us to do the best we can with what we have now, and then always be looking ahead for ways to improve performance, including throwing away what you have to build a more scalable application architecture.

Bradford Stephens calls today a transitional period where no one truly has scalability figured out. There is no single right answer for evaluating a scalable platform.
But he suggests that virtualization and the cloud make it feasible and affordable to rescale by powers of ten rather than multiples of two. That's exciting to me.

Scott says this:

"This is uncharted territory. What we do know is that businesses can no longer afford to develop solutions around the edges of the core problem. They can't just point to the symbols for their systems' various ills (latency, load imbalance) and invest in the symbols that represent their solutions (scalability, the cloud, CMS platforms, social network platforms) as tools for postponing the inevitable rethinking of their business models. Throwing your business model at the cloud doesn't make the symptoms go away; indeed, it magnifies them. Symbols aren't solutions."

A quote by Sean Leach in the article matches the philosophy of our CTO, Roger Campbell: get the app into the customers' hands quickly, and if you need to fix a scale issue, that's a good problem to worry about when it arrives. Over 90% of web apps never get much traffic anyway, so why waste time until you know it will be an issue? Wasting months upfront analyzing and planning the best scalable architecture could very well translate to you launching nothing...ever.
Categories: Load & Perf Testing

Performance Blame Game

Sat, 05/29/2010 - 08:57
Poor Performance - Your Fault?

"Who is to blame for bad application performance?" by Alois Reitbauer (http://blog.dynatrace.com/2010/05/28/week-16-who-is-to-blame-for-bad-application-performance) is an informative look at how developers, system architects, testers, R&D managers, and operations leaders can each play a role in the poor performance of software.

While pointing the finger is a common way employees invest their time, it rarely has much ROI. My experience is that development teams, IT departments, and company executives usually don't play together very well. They don't communicate clearly or frequently with each other. It's only natural, because they have their own jobs to do, and their job evaluations (i.e. bonuses or raises) aren't measured by collaboration.

To reduce or eliminate the blame game, the culture must change so that everyone accepts they play a key role in the performance of the end product. When companies get smarter and create a culture where everyone wants high-performance software applications, much of the blame game will disappear. A bad habit is replaced with a good habit.

Working together throughout the entire product lifecycle is the best way to make the software faster, smoother, more reliable, and more scalable.

My recommendation is to skip the blame game. Go out of your way to drop by a colleague's cubicle and ask how their piece is performing. Offer to help. Encouraging all of the people around you to think about performance may feel uncomfortable at first, and some may treat you with a bit of skepticism, but keep at it. Soon you will be viewed as the person in the company who is sincerely concerned about speed and scalability.

Would it be so bad to be nicknamed The Performance Czar? I would gladly take that geek name and wear it with pride.
Categories: Load & Perf Testing

17 Performance Testing Articles

Mon, 05/17/2010 - 14:19
Of course there are hundreds or thousands of posts out on the web about performance testing. I thought I would share a list of good sources focused on web application performance testing. Some of these are lists that lead to other excellent posts.

- Ensuring Web Site Performance – Why, What and How to Measure Automated and Accurately (http://blog.dynatrace.com/2010/01/13/ensuring-web-site-performance-why-what-and-how-to-measure-automated-and-accurately/)
- Performance Testing Articles (/content/performance-testing-articles)
- Difference in perf and load testing, part 1 (http://agiletesting.blogspot.com/2005/02/performance-vs-load-vs-stress-testing.html)
- Difference in perf and load testing, part 2 (http://agiletesting.blogspot.com/2005/04/more-on-performance-vs-load-testing.html)
- FAQ - Performance & Load Testing on Software QA Forums (http://www.sqaforums.com/showflat.php?Cat=0&Number=41861&an=0&page=0)
- Performance testing: Does your Web site make the grade? on TechRepublic (http://articles.techrepublic.com.com/5100-10878_11-1037784.html)
- How to Test and Improve Website Performance by Bob Rankin (http://askbobrankin.com/website_performance_testing.html)
- How to Make Developers Write Performance Tests (http://blog.dynatrace.com/2010/03/11/week-6-how-to-make-developers-write-performance-tests/)
- LogiGear links and articles (http://www.logigear.com/resource-center/software-testing-articles-by-others/157-loadperformance-articles.html)
- Nuts and bolts of performance testing by Abhinav Vaid on StickyMinds.com (http://www.stickyminds.com/sitewide.asp?ObjectId=15493&Function=DETAILBROWSE&ObjectType=ART&sqry=*Z%28SM%29*J%28MIXED%29*R%28relevance%29*K%28simplesite%29*F%28%22performance+testing%22%29*&sidx=0&sopp=10&sitewide.asp?sid=1&sqry=*Z%28SM%29*J%28MIXED%29*R%28relevance%29*K%28simplesite%29*F%28%22performance+testing%22%29*&sidx=0&sopp=10)
- Web performance test using Visual Studio (http://www.dotnetfunda.com/articles/article901-web-performance-test-using-visual-studio-part-i-.aspx)
- Performance Testing In Agile Environments by HP (http://www.bitpipe.com/detail/RES/1265721149_745.html)
- Rapid Bottleneck Identification - A Better Way to do Load Testing by Oracle (http://www.bitpipe.com/detail/RES/1255388747_681.html)
- Serverside.com looks at performance testing of Java apps (http://www.theserverside.com/news/1364725/Tips-on-Performance-Testing-and-Optimization)
- Types of performance testing by TestingGeek.com (http://www.testinggeek.com/index.php/testing-articles/132-performance-testing-types)
- Intro on IBM's site (http://www.ibm.com/developerworks/rational/library/4169.html)
Categories: Load & Perf Testing