Feed aggregator

Validating the Performance Impact of Enabling APC (Alternative PHP Cache)

LoadImpact - Thu, 11/20/2014 - 09:20

UVD is an established agency based in London’s Tech City, specialising in helping startups launch innovative products and in improving business processes for well-known brands. We’ve been migrating a number of clients to our AWS architecture because they’ve outgrown their current hosts and require a more stable, reliable service that can grow with them. We’re ... Read more

The post Validating the Performance Impact of Enabling APC (Alternative PHP Cache) appeared first on Load Impact Blog.

Categories: Load & Perf Testing

Holiday Shoppers Are Less Patient than Last Year!

Like last year, Dynatrace asked 2,000 holiday shoppers in the United States which channels they will use to do their holiday shopping and what they expect regarding the experience. Last year the need for speed was one of the key findings, and this year speed matters even more. In fact, 46% of the holiday […]

The post Holiday Shoppers Are Less Patient than Last Year! appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

Continuous Delivery 101: Automated Deployments

The ability to automatically and reliably deploy entire application runtime environments is a key factor in optimizing the average time it takes to move features from idea into the hands of your (paying) customers. This minimization of feature cycle time, or feature lead time, is, after all, the primary goal of Continuous Delivery. Today, I […]

The post Continuous Delivery 101: Automated Deployments appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

Takeaways from the DoubleClick Outage

On November 12th, 2014, DoubleClick started having an issue delivering ads. This was seen by Dynatrace’s Outage Analyzer, a Big Data application that captures millions of domain requests from tens of thousands of tests run from the Dynatrace Synthetic Network. The issue was seen across almost every industry vertical that Dynatrace monitors (Automotive, Social Networking, […]

The post Takeaways from the DoubleClick Outage appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

How to get the most out of impact mapping

The Quest for Software++ - Mon, 11/17/2014 - 03:35

Ingrid Domingues, Johan Berndtsson and I met up in July this year to compare the various approaches to Impact Mapping, review community feedback, and investigate how to get the most out of this method in different contexts. We concluded that there are two key factors to consider for software delivery using impact maps, and that recognising the right context is crucial to getting the most out of the method. The two important dimensions are the consequences of being wrong (making the wrong product management decisions) and the ability to make investments.

These two factors create four different contexts, each calling for a different approach:

  • Good ability to make investments, and small consequences of being wrong – Iterate: Organisations will benefit from taking some initial time defining the desired impact, and then exploring different solutions with small and directed impact maps that help design and evaluate deliverables against desired outcome.
  • Poor ability to make investments, small consequences of being wrong – Align: Organisations will benefit from detailing the user needs analysis in order to make more directed decisions, and to drive prioritisation for longer pieces of work. Usually only parts of maps end up being delivered.
  • Good ability to make investments, serious consequences of being wrong – Experiment: Organisations can explore different product options and user needs in multiple impact maps.
  • Poor ability to make investments, serious consequences of being wrong – Discover: The initial hypothesis impact map is detailed by user studies and user testing that converge towards the desired impact.

We wrote an article about this. You can read it on InfoQ.

Thoughts on the Consulting Profession

Randy Rice's Software Testing & Quality - Fri, 11/14/2014 - 22:28
Sometimes I come across something that makes me realize I am the "anti" version of what I am seeing or hearing.

Recently, I saw a Facebook ad for a person's consulting course that promised high income quickly with no effort on the part of the "consultant" to actually do the work. "Everything is outsourced," he goes on to say. In his videos he shows all of his expensive collections, which include both a Ferrari and a Porsche. I'm thinking "Really?"

I'm not faulting his success or his income, but I do have a problem with the promotion of the idea that one can truly call oneself a consultant or an expert in something without actually doing the work involved. His high income is based on the markup of other people's subcontracting rates because they are the ones with the actual talent. Apparently, they just don't think they are worth what they are being billed for in the marketplace.

It does sound enticing and all, but I have learned over the years that my clients want to work with me, not someone I just contract with. I would like to have the "Four Hour Workweek", but that's just not the world I live in.

Nothing wrong with subcontracting, either. I sometimes team with other highly qualified and experienced consultants to help me on engagements where the scope is large. But I'm still heavily involved on the project.

I think of people like Gerry Weinberg or Alan Weiss who are master consultants and get their hands dirty in helping solve their clients' problems. I mentioned in our webinar yesterday that I was fortunate to have read Weinberg's "Secrets of Consulting" way back in 1990 when I was first starting out on my own in software testing consulting. That book is rich in practical wisdom, as are Weiss' books. (Weiss also promotes the high income potential of consulting, but it is based on the value he personally brings to his clients.)

Without tooting my own horn too loudly, I just want to state for the record that I am a software quality and testing practitioner in my consulting and training practice. That establishes credibility with my clients and students. I do not get consulting work only to then farm it out to subcontractors. I don't consider that true consulting.

True consulting is strategic and high-value. My goal is to do the work, then equip my clients to carry on - not to be around forever, as is the practice of some consulting firms. However, I'm always available to support my clients personally when they need ongoing help.

Yes, I still write test plans, work with test tools, lead teams and do other detailed work so I can stay sharp technically. However, that is only one dimension of the consulting game - being able to consult and advise others because you have done it before yourself (and it wasn't all done 20 years ago).

Scott Adams, the creator of the Dilbert comic strip, had a heyday poking fun at consultants. His humor had a lot of truth in it, as did the movie "Office Space."

My point?

When choosing a consultant, look for 1) experience and knowledge in your specific area of problems (or opportunities), 2) the work ethic to actually spend time on your specific concerns, and 3) integrity and trust. All three need to be in place or you will be under-served.

Rant over and thanks for reading! I would love to hear your comments.

Randy


Categories: Software Testing

Test Automation Venus Flytrap

Eric Jacobson's Software Testing Blog - Fri, 11/14/2014 - 14:39

You’ve got this new thing to test. 

You just read about a tester who used Selenium and he looked pretty cool in his bio picture.  Come on, you could do that.  You could write an automated check for this.  As you start coding, you realize your initial vision was too ambitious so you revise it.  Even with the revised design you’re running into problems with the test stack.  You may not be able to automate the initial checks you wanted, but you can automate this other thing.  That’s something.  Besides, this is fun.  The end is in sight.  It will be so satisfying to solve this.  You need some tasks with closure in your job, right?  This automated check has a clear output.  You’ve almost cracked this thing…cut another corner and it just might work.  Success!  The test passes!  You see green!  You rule!  You’re the Henry Ford of testing!  You should wear a cape to work!

Now that your automated thingamajig is working and bug free, you can finally get back to what you were going to test.  Now what was it?

I’m not hating on test automation.  I’m just reminding myself of its intoxicating trap.  Keep your eyes open.

Categories: Software Testing

ISTQB Advanced Technical Test Analyst e-Learning Course

Randy Rice's Software Testing & Quality - Fri, 11/14/2014 - 13:35

I am excited to announce the availability of my newest e-Learning course - The ISTQB Advanced Technical Test Analyst certification course. To register, just go to https://www.mysoftwaretesting.com/ISTQB_Advanced_Technical_Test_Analyst_e_Learning_p/advtta.htm
To take a free demo, just click on the demo button.

This e-Learning course follows on from the ISTQB Foundation Level Course and leads to the ISTQB Advanced Technical Test Analyst Certification. The course focuses specifically on technical test analyst issues such as producing test documentation in relation to technical testing, choosing and applying appropriate specification-based, structure-based, defect-based and experience-based test design techniques, and specifying test cases to evaluate software characteristics. Candidates will be given exercises, practice exams and learning aids for the ISTQB Advanced Technical Test Analyst qualification.
Categories: Software Testing

Webinar Recording and Slides from Establishing the Software Quality Organization

Randy Rice's Software Testing & Quality - Fri, 11/14/2014 - 12:24
Hi all,

Thanks to everyone who attended yesterday's webinar on "Establishing the Software Quality Organization" with Tom Staab and me.

Here is the recording of the session: http://youtu.be/pczgcHGvV5Q

Here are the slides in PDF format: http://www.softwaretestingtrainingonline.com/cc/public_pdf/Establishing%20The%20SQO%20webinar%2011132014-1.pdf

I hope you can attend some of our future sessions!

Thanks,

Randy
Categories: Software Testing

What if the Rosetta Satellite Wanted to Read News from Earth?

LoadImpact - Fri, 11/14/2014 - 10:39

  Is the Rosetta satellite starved for news from Earth?  As a foreword, we have particular interest in the Rosetta satellite because – as you probably weren’t aware – the first lines of code we ever wrote for Load Impact were written as part of a project for the European Space Agency (ESA). For that project, the European ... Read more

The post What if the Rosetta Satellite Wanted to Read News from Earth? appeared first on Load Impact Blog.

Categories: Load & Perf Testing

Request Timeout

Steve Souders - Fri, 11/14/2014 - 04:15

With the increase in 3rd party content on websites, I’ve evangelized heavily about how Frontend SPOF blocks the page from rendering. This is timely given the recent DoubleClick outage. Although I’ve been warning about Frontend SPOF for years, I’ve never measured how long a hung response blocks rendering. I used to think this depended on the browser, but Pat Meenan recently mentioned he thought it depended more on the operating system. So I decided to test it.

My test page contains a request for a script that will never return. This is done using Pat’s blackhole server. Eventually the request times out and the page finishes loading, so the time this takes can be captured by measuring window.onload. I tweeted asking people to run the test and collected the results in a Browserscope user test.
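
For anyone who wants to reproduce the idea, here is a minimal sketch (TypeScript, run in the browser) of that kind of measurement. It approximates the technique rather than reproducing Steve's actual test page, and the blackhole URL below is a hypothetical placeholder, not the real server address:

  // Rough sketch: measure how long a hung script request delays window.onload.
  // The URL is a placeholder for an endpoint that accepts the connection but
  // never responds (the role played by Pat's blackhole server in the article).
  const start = performance.now();

  const hungScript = document.createElement("script");
  hungScript.src = "https://blackhole.example.com/never-returns.js"; // placeholder URL

  // The error event fires once the browser/OS finally gives up on the request.
  hungScript.onerror = () => {
    const seconds = (performance.now() - start) / 1000;
    console.log(`hung request abandoned after ~${seconds.toFixed(1)} s`);
  };

  // window.onload is held back by the pending script request, so the elapsed
  // time approximates the timeout value being measured in the article.
  window.addEventListener("load", () => {
    const seconds = (performance.now() - start) / 1000;
    console.log(`window.onload fired after ~${seconds.toFixed(1)} s`);
  });

  document.head.appendChild(hungScript);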

The aggregated results show the median timeout value (in seconds) for each type of browser. Unfortunately, this doesn’t reflect the operating system. Instead, I exported the raw results and did some UA parsing to extract an approximation for OS. The final outcome can be found in this Google Spreadsheet of Blackhole Request Timeout values.

Sorting this by OS, we see that Pat was generally right. Here are median timeout values by OS:

  • Android: ~60 seconds
  • iOS: ~75 seconds
  • Mac OS: ~75 seconds
  • Windows: ~20 seconds

The timeout values above are independent of browser. For example, on Mac OS the timeout value is ~75 seconds for Chrome, Firefox, Opera, and Safari.

However, there are a lot of outliers. Ilya Grigorik points out that there are a lot of variables affecting when the request times out; in addition to browser and OS, there may be server and proxy settings that factor into the results. I also tested with my mobile devices and got different results when switching between carrier network and wifi.

The results of this test show that there are more questions to be answered. It would take someone like Ilya with extensive knowledge of browser networking to nail down all the factors involved. A general guideline is that Frontend SPOF from a hung response blocks the page for 20 to 75 seconds, depending on browser and OS.

Categories: Software Testing

How to Gain & Keep Internal Acceptance for APM

Demonstrating the value of any tool can be tricky. There’s no “12 Step Program” you can follow to guarantee success. It’s not like trying to quit smoking or lose weight. There’s proven data spread over years of research showing how to do those things. In this article I will talk about some successes and roadblocks […]

The post How to Gain & Keep Internal Acceptance for APM appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

The Real Cost of Slow Time vs Downtime [SLIDES]

Web Performance Today - Wed, 11/12/2014 - 13:21

Last week, I had the pleasure of being invited to speak at the CMG’s annual Performance & Capacity Conference. One of my sessions was a presentation of Radware’s research into mobile web stress. The other was a brand-new talk that I’m frankly kind of surprised I’ve never done before: a shallow dive into the topic of measuring the financial impact of slow performance versus the financial impact of outages.

Here are my slides. More discussion below.

If you run an ecommerce site, it’s relatively easy to measure the cost of an outage.

Sure, there’s also the cost of disaster recovery and other behind-the-scenes numbers, but the fact remains that it’s comparatively easy to paint a compelling picture of the damage that downtime does to your organization. It can be much more challenging to paint a compelling picture of the damage wrought by sub-par performance. In my talk, I outlined a couple of ways to paint this picture:

How to calculate short-term losses
  1. Identify your cut-off performance threshold. According to a survey of 300+ companies by TRAC Research, 4.4 seconds is the average delay in response time when business performance begins to decline. This is a good number to use if you’re not sure about the specific threshold for your business.
  2. Measure the Time to Interact (or comparable performance metric) for pages in flows for typical use cases on your site. Time to Interact (TTI) is the moment that a page’s key content renders and becomes interactive. If you use a measurement tool like WebPagetest, then the Speed Index score correlates pretty closely (in milliseconds) to Time to Interact.
  3. Calculate the difference between the performance threshold and the actual TTI for each page. For example, if the threshold is 4.4 seconds, and the actual TTI is 5.6 seconds, then the difference is 2.2 seconds.
  4. Pick a business metric (e.g. cart size, conversions, page views, bounce rate, customer satisfaction). Each of these KPIs (key performance indicators) has a solid case study that proves a correlation between it and performance. Depending on which case study you look at, a 1-second delay correlates to:
    • 2.1% decrease in cart size
    • 3.5-7% decrease in conversions
    • 9-11% decrease in page views
    • 8% increase in bounce rate
    • 16% decrease in customer satisfaction
  5. Calculate losses. For example, to estimate the impact on conversions, take the 2.2-second slowdown from point 3 and multiply it by 3.5. By this calculation, a 2.2-second slowdown equals a 7.7% hit to your conversion rate (see the sketch below).
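
As a minimal illustration of that arithmetic (a sketch only, using the example numbers from the steps above and the case-study sensitivity from step 4), in TypeScript:

  // Short-term loss estimate following the five steps above.
  const thresholdSeconds = 4.4;  // step 1: TRAC Research average threshold
  const actualTtiSeconds = 5.6;  // step 2: measured Time to Interact

  const slowdownSeconds = actualTtiSeconds - thresholdSeconds; // step 3: 2.2 s

  // Step 4: pick a business metric and its per-second sensitivity,
  // e.g. a 3.5% conversion drop per 1-second delay.
  const conversionLossPerSecond = 3.5;

  // Step 5: estimated conversion hit, roughly 7.7%.
  const conversionHitPercent = slowdownSeconds * conversionLossPerSecond;
  console.log(`~${conversionHitPercent.toFixed(1)}% conversion rate hit`);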
How to calculate long-term losses

This formula is predicated on understanding the customer lifetime value (CLV) of people who make purchases on your site. CLV is the total amount of dollars flowing from a customer over their entire relationship with your business. This metric is one of the best predictors of retail success, yet it’s rarely discussed. CLV is usually calculated over a fixed period of time. (As a simplified example, a retailer might calculate their CLV using a six-year window. A customer’s total spend over the previous three years is treated as their projected spend for the next three years. The sum of past and projected spend is their CLV.)

When considering the relationship between CLV and performance, it’s important to know that, when it comes to abandonment, people react very differently to sites that are down versus sites that are slow. According to this survey, 9% of users will permanently abandon a site that they try to visit during an outage. But a staggering 28% of users will permanently abandon a site that is unacceptably slow.

  1. Identify your site’s performance poverty line. Or use our findings that, for a typical ecommerce site, the performance poverty line is around 8 seconds. (Performance poverty line = the plateau at which your website’s load time ceases to matter because you’ve hit close to rock bottom in terms of business metrics.)
  2. Identify percentage of converting traffic that experiences speeds slower than your poverty line threshold.
  3. Identify the current CLV for those customers (individual or median).
  4. Using the stat that 28% of those customers will permanently abandon pages that are unacceptably slow, identify the lost CLV.

Using this formula, here’s a sample CLV loss scenario:

If the total value of your median customer spend for the past three years is $1000, then the predicted future value is $1000, making the median CLV $2000. Let’s say that your site has a current converting user base of 10,000, and 10% (1000) of those converting users experience a TTI slower than your performance poverty line of 8 seconds. If 28% (280) of those customers won’t return, then your net CLV loss is $280,000.
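
Here is a small sketch of that scenario in TypeScript, assuming the $280,000 figure counts the abandoning customers' projected future spend (280 × $1000); that interpretation is mine, not spelled out in the post:

  // Sample CLV loss scenario from the paragraph above.
  const pastSpend = 1000;             // median spend over the past three years
  const projectedFutureSpend = 1000;  // projected spend for the next three years
  const medianClv = pastSpend + projectedFutureSpend; // $2000

  const convertingUsers = 10_000;
  const shareBelowPovertyLine = 0.10;  // 10% see TTI slower than 8 seconds
  const permanentAbandonRate = 0.28;   // 28% of slow-experience users never return

  const abandoners = Math.round(
    convertingUsers * shareBelowPovertyLine * permanentAbandonRate
  ); // 280 customers

  const lostFutureValue = abandoners * projectedFutureSpend; // $280,000
  console.log({ medianClv, abandoners, lostFutureValue });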

I’m always on the hunt for other formulas and proxies people have used to measure the impact of better/worse performance in their organization. If you have any to share, please let me know!

The post The Real Cost of Slow Time vs Downtime [SLIDES] appeared first on Web Performance Today.

API Testing: Implications of an “API-First” Strategy

LoadImpact - Wed, 11/12/2014 - 10:22

If your company has built a mobile app for public distribution, the odds are good that it’s using an API developed by your team.  APIs play a critical role in application development because they provide so much flexibility in targeting various platforms and devices; in fact, they can be considered the foundation of any good ... Read more

The post API Testing: Implications of an “API-First” Strategy appeared first on Load Impact Blog.

Categories: Load & Perf Testing

Book Review – Introduction to Agile Methods - Ashmore and Runyan

Randy Rice's Software Testing & Quality - Wed, 11/12/2014 - 09:07
First, as a disclaimer, I am a software testing consultant and also a project life cycle consultant. Although I do train teams in agile testing methods, I do not consider myself an agile evangelist. There are many things about agile methods I like, but there are also things that trouble me about some agile methods. I work with organizations that use a wide variety of project life cycle approaches and believe that one approach is not appropriate in all situations. In this review, I want to avoid getting into the weeds of sequential life cycles versus agile, or the pros and cons of agile, and instead focus on the merits of the book itself.

Since the birth of agile, I have observed that agile methods keep evolving and it is hard to find in one place a good comparison of the various agile methodologies. I have been looking for a good book that I could use in a learning context and I think this book is the one.
One of the issues people run into when learning agile is that the early works on the topic are drawn from the earliest agile experiences. We are now almost fifteen years into the agile movement and many lessons have been learned. While this book references many of the early works, the authors present the ideas in a more recent context.
This book is very easy to read and very thorough in its coverage of agile methods. I also appreciate that the authors are up-front about some of the pitfalls of certain methods in some situations.
The book is organized as a learning tool, with learning objectives, case studies, review questions and exercises, so one could easily apply this book “as-is” as the basis for training agile teams. An individual could also use this book as a self-study course on agile methods. There is also a complete glossary and index, which help in getting the terms down and in using the book as a reference.
The chapter on testing provided an excellent treatment of automating agile tests, and manual testing was also discussed. In my opinion, there could have been more detail about how to test functionality from an external perspective, such as the role of functional test case design in an agile context. Too many people only test the assertions and acceptance-level tests without testing the negative conditions. The authors do, however, emphasize that software attributes such as performance, usability, and others should be tested. The “hows” of those forms of testing are not really discussed in detail. You would need a book that dives deeper into those topics.
I also question how validation is defined and described, relying more on surveys than actual real-world tests. Surveys are fine, but they don’t validate specific use of the product and features. If you want true validation, you need real users performing tests based on how they intend to use a product.
There is significant discussion on the importance of having a quality-minded culture, which is foundational for any of the methods to deliver high-quality software. This includes the ability to be open about discussing defects.
To me, the interviews were interesting, but some of the comments were a little concerning. However, I realize they are the opinions of agile practitioners and I see quite a bit of disagreement between experts at the conferences, so I really didn’t put a lot of weight in the interviews.
One nitpick I have is that on the topic of retrospectives, the foundational book “Project Retrospectives” by Norm Kerth was not referenced. That was the book that changed the term “post-mortem” to “retrospectives” and laid the basis for the retrospective practice as we know it today.
My final thought is that the magic is not in the method. Whether using agile methods or sequential lifecycles like the waterfall, it takes more than simply following a method correctly. If there are issues such as team strife, stakeholder disengagement and other organizational maladies, then agile or any other methodology will not fix them. A healthy software development culture is needed, along with a healthy enterprise culture and an understanding of what it takes to build and deliver great software.
I can highly recommend this book for people who want to know the distinctions between agile methods, for those who want to improve their practices, and for managers who may feel a disconnect in their understanding of how agile is being performed in their organization. This book is also a great standalone learning resource!
Categories: Software Testing

NOTICE: OnDemand Load Testing

BrowserMob - Tue, 11/11/2014 - 18:38

Currently we are experiencing issues with on-demand load testing. We are reviewing the issue at this time and are working to return services to normal as quickly as possible. We apologize for the inconvenience.

Categories: Load & Perf Testing

Struts Performance Bug Almost Takes Down Car Rental Web site

Special thanks for this great story to my colleagues Shaun Gautz and Andrew Samuels – two Dynatrace Guardians helping our customers to build better web applications. Struts is a framework very commonly used for building Java Web Applications. It’s also been used as the main web framework for a new car rental platform this story […]

The post Struts Performance Bug Almost Takes Down Car Rental Web site appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

Dynatrace in Leaders Quadrant of 2014 Gartner Magic Quadrant for Application Performance Monitoring

I am thrilled to share that Dynatrace (formerly Compuware APM) – for a fifth consecutive year – is placed in the Leaders Quadrant of Gartner’s “Magic Quadrant for Application Performance Monitoring (APM)” report (Read press release). Over the past five years, we have seen the APM industry change considerably. The Magic Quadrants for each year […]

The post Dynatrace in Leaders Quadrant of 2014 Gartner Magic Quadrant for Application Performance Monitoring appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

Don’t Go Changing…

Alan Page - Wed, 11/05/2014 - 16:10

My last post dove into the world of engineering teams / combined engineering / fancy-name-of-the-moment where there are no separate disciplines for development and test. I think there are advantages to one-team engineering, but that doesn’t mean that your team needs to change.

First things First

I’ve mentioned this before, but it’s worth saying again. Don’t change for change’s sake. Make changes because you think the change will solve a problem. And even then, think about what problems a change may cause before making a change.

In my case, one-team engineering solves a huge inefficiency in the developer-to-tester communication flow. The back-and-forth iteration of They code it – I test it – They fix it – I verify it is a major time sink on many teams. I also like the idea of both individual and team commitments to quality that happen on discipline-free engineering teams.

Is it for you?

I see the developer-to-tester ping-pong match take up a lot of time on a lot of teams. But I don’t know your team, and you may have a different problem. Before diving in on what-Alan-says, ask yourself, “What’s the biggest inefficiency on our team?” Now, brainstorm solutions for solving that inefficiency. Maybe combined engineering is one potential solution, maybe it’s not. That’s your call to make. And then remember that the change alone won’t solve the problem (and I outlined some of the challenges in my last post as well).

Taking the Plunge

OK. So your team is going to go for it and have one engineering team. What else will help you be successful?

In the I-should-have-said-this-in-the-last-post category, I think running a successful engineering team requires a different slant on managing (and leading) the team. Some things to consider (and why you should consider them) include:

  • Flatten – It’s really tempting to organize a team around product functionality. Does this sound familiar to anyone?
    Create a graphics team. Create a graphics test team. Create a graphics performance team. Create a graphics analysis team. Create a graphics analysis test team. You get the point.
    Instead, create a graphics team. Or create a larger team that includes graphics and some related areas. Or create an engineering team that owns the whole product (please don’t take this last sentence literally on a team that includes hundreds of people).
  • Get out of the way – A book I can’t recommend enough for any manager or leader who wants to transition from the sausage-making / micro-managing methods of the previous century is The Leader’s Guide to Radical Management by Steve Denning (and note that there are several other great books on reinventing broken management; e.g. Birkinshaw, Hamel, or Collins for those looking for even more background). In TLGRM, Denning says (paraphrased), Give your organization a framework they understand, and then get out of their way. Give them some guidelines and expectations, but then let them work. Check in when you need to, but get out of the way. Your job in 21st century management is to coach, mentor, and orchestrate the team for maximum efficiency – not to monitor them continuously or create meaningless work. This is a tough change for a lot of managers – but it’s necessary – both for the success of the workers and for the sanity of managers. Engineering teams need the flexibility (and encouragement) to self-organize when needed, innovate as necessary, and be free from micro-management.
  • Generalize and Specialize – I’ve talked about Generalizing Specialists before (search my last post and my blog). For another take, I suggest reading what Jurgen Appelo has to say about T-shaped people, and what Adam Knight says about square-shaped teams for additional explanation on specialists who generalize and how they make up a good team.

This post started as a follow-up and clarification for my previous post – but has transformed into a message to leaders and managers on their role – and in that, I’m reminded how critically important good leadership and great managers are to making transitions like this. In fact, given a choice of working for great leaders and awesome managers on a team making crummy software with horrible methods, I’d probably take it every time…but I know that team doesn’t exist. Great software starts with great teams, and great teams come from leaders and managers who know what they’re doing and make changes for the right reasons.

If your team can be better – and healthier –  by working as a single engineering team, I think it’s a great direction to go. But make your changes for the right reasons and with a plan for success.

(potentially) related posts:
  1. Stuff About Leadership
  2. Who Owns Quality?
  3. Leadership
Categories: Software Testing

Getting more from the Browser Agent and Selenium

2014 marked my third year in a row attending the Dynatrace Global Perform conference in Orlando, FL. The event covers all things APM, and combines engaging breakout sessions with opportunities to learn and share with other performance-minded professionals. In addition to the great information, the highlight of the conference for me is meeting the […]

The post Getting more from the Browser Agent and Selenium appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing
