Feed aggregator

Ask Me Anything with Google, Akamai, and CloudFlare: HTTP/2

Perf Planet - 13 hours 50 min ago

HTTP/2 is on everyone’s minds lately. People are wondering how it will impact their site’s performance, when and if they should migrate completely over from HTTP 1.1, and so on. And the truth is, we should be asking these questions—we’ve been using the same protocol to deliver content over the web for over 15 years, so some skepticism and concern are expected when something completely new comes into play.

For this reason, we decided to kick off our first live Ask Me Anything (AMA) presentation, featuring a panel of industry experts to answer your biggest HTTP/2 questions. The panel included Ilya Grigorik, web performance engineer at Google; Tim Kadlec, web technology advocate at Akamai; and Suzanne Aldrich, solutions engineer at CloudFlare.

The panelists tackled an hour’s worth of Q&A and offered their expertise to clear the air and inspire out-of-the-box thinking.

Below is an excerpt from the event. The full transcript will be available soon; you can watch the recording of the AMA here.

The first question is for Ilya. Does optimizing for HTTP/2 automatically imply a poor experience on HTTP 1.1?

Ilya: I think that’s a pretty simple one. The short answer is no. The reason we added HTTP/2 is we found a collection of flaws, if you will, workarounds that we have to do when using HTTP/1. HTTP/2 is effectively the same HTTP that you would use on the web. All the verbs, all the capabilities are still there. Any application that has been delivered over HTTP/1 still works on HTTP/2 as is. If you happen to be running say on CloudFlare, or Akamai, both of which support HTTP/2, Akamai or CloudFlare can just enable that feature, and your site continues to run. Nothing has changed.

From there it just becomes a question of, are there things that I can take advantage of in HTTP/2 to make my site even faster? Say you already have an HTTPS site, then you enable HTTP/2; you’re no worse off. Chances are you may be a little bit better off, but it really depends on your application. And then it becomes a question of, what can I do to optimize? Depending on how aggressive you want to be, you may want to change some of your best practices. You stop doing some of those things, and I think we’ll get into the details of some of those later.
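
As an aside for readers who want to verify this on their own sites: whether a server will speak HTTP/2 is decided during the TLS handshake via ALPN, so a quick check of the negotiated protocol tells you whether enabling the feature took effect. The sketch below is illustrative only; it assumes Node.js with its built-in tls module, and the hostname is a placeholder.

    // Minimal sketch (Node.js, TypeScript): see which protocol a host
    // negotiates via ALPN during the TLS handshake.
    import * as tls from "node:tls";

    function checkHttp2(host: string): void {
      const socket = tls.connect(
        { host, port: 443, servername: host, ALPNProtocols: ["h2", "http/1.1"] },
        () => {
          // alpnProtocol is "h2" if the server agreed to HTTP/2,
          // "http/1.1" otherwise (or false if ALPN was not negotiated).
          console.log(`${host}: negotiated ${socket.alpnProtocol || "no ALPN"}`);
          socket.end();
        }
      );
      socket.on("error", (err) => console.error(`${host}: ${err.message}`));
    }

    checkHttp2("www.example.com"); // placeholder hostname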

Is HTTP/2 serving all assets like your CSS, images, and JavaScript independently a better option than creating bundles or grouping? Which performance hacks are no longer needed?

Tim: Yeah, I think it’s similar to Ilya’s question. There are going to be times where it doesn’t work out that way. The most famous example we’ve seen is Khan Academy’s blog post about bundling JavaScript. They went from something like 25 different JavaScript packages to 300 or so, and saw a degradation in performance and in compression. The takeaway there isn’t necessarily that you shouldn’t break these things up into individual files; the takeaway is that there is some point where packaging still makes sense, to an extent.

There’s a line in there where this ceases to be beneficial. We’ve got a lot of these best practices that we’ve had established for a long time—the sharding, the inlining of resources, concatenating files. We also recognize that inside of H2, some of these things on paper make a lot less sense. What we need now is the real-world experimentation and data to help back that up and determine exactly when it makes sense to do these things and when it doesn’t. There’s a challenge there too; H2 is young, the implementations in the browsers are young, the servers are young.

A lot of the challenge is telling apart problems with the protocol itself, with the browser implementations, and with the maturity of the server implementations. There’s just so much variability. This is definitely day zero for H2 in terms of establishing what these practices are. It’s going to take a lot of experimentation before we can firmly cement what makes sense to do in this world.

Ilya: Just to add to what Tim was saying, the Khan Academy example is really interesting because, as Tim said, they went from 25 files to 300. I know they actually say that 25 files is a performance problem with HTTP/1. Chances are, if you are at 25 files in HTTP/1, you’re already thinking about how to collapse them down to 5. With HTTP/2, that’s not a problem; you can easily ship those 25. Then the question becomes, should I unpack the 25 into 300? Then you get into more nuanced conversations: well, what’s the overhead of this request, and all the rest. This is the space where you really have to experiment. As Tim said, measure it. Measure it in your own application.
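
One simple way to run the experiment Ilya and Tim describe is to compare, before and after unbundling, how many script requests a page makes and how much per-file compression you lose. Here is a rough in-page sketch using the browser's Resource Timing API (illustrative only; the ".js" filter is an assumption, and cross-origin entries report sizes only when Timing-Allow-Origin is set):

    // Count script requests and compare bytes on the wire vs. decoded bytes.
    const entries = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
    const scripts = entries.filter((e) => e.name.endsWith(".js"));

    const requests = scripts.length;
    const transferred = scripts.reduce((sum, e) => sum + e.transferSize, 0);
    const decoded = scripts.reduce((sum, e) => sum + e.decodedBodySize, 0);

    // With many tiny files, the wire/decoded ratio tends to creep up because each
    // file is compressed on its own; measure before deciding how far to unbundle.
    console.log(`${requests} script requests, ${transferred} B transferred, ` +
                `${decoded} B decoded (ratio ${(transferred / decoded).toFixed(2)})`);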

Do you feel like there’s a risk of creating two Internets, a slow one and a fast one, due to the possible confusion of having different protocols out there at the same time?

Suzanne: I think that there already is a little bit of a problem with the slow web and the fast web with regard to delivery over TLS. With the introduction of HTTP/2, all the browsers implement it over TLS only, so it is necessary to deploy it over TLS. One of the advantages of using TLS is that you can then utilize HTTP/2. In fact, it’s going to even out the playing field a bit in that way, and we also save on the TCP handshake overhead that we would otherwise see.

However, and this harks back a little to the point that Ilya made, these can certainly coexist. You can still utilize techniques, such as sharding of domains, when you’re utilizing HTTP/2. If you’re using the same IP address among the hosts, then it is possible for the same TCP connection to be used for that particular set of requests. We can still use all the multiplexing and take advantage of that pipeline without degrading performance for HTTP/1 clients.

In addition to that, because there’s this transition point between SPDY and HTTP/2, there’s an additional concern. People don’t necessarily want to jump into the deep water yet, so how do you enable developers to go ahead and produce applications without the fear that they’re cutting out a large share of the browsers in the market? An interesting technique that we utilized at CloudFlare was to essentially fork our implementation of NGINX to allow for lazy loading of SPDY if the browser supports that; if it supports HTTP/2, then we’ll go ahead and make the connection for them in that mode. I think that we’ve really addressed that concern in particular.

The post Ask Me Anything with Google, Akamai, and CloudFlare: HTTP/2 appeared first on Catchpoint's Blog.

GTAC Diversity Scholarship

Google Testing Blog - Wed, 05/04/2016 - 08:32
by Lesley Katzen on behalf of the GTAC Diversity Committee

We are committed to increasing diversity at GTAC, and we believe the best way to do that is by making sure we have a diverse set of applicants to speak and attend. As part of that commitment, we are excited to announce that we will be offering travel scholarships this year.
Travel scholarships will be available for selected applicants from traditionally underrepresented groups in technology.

To be eligible for a grant to attend GTAC, applicants must:

  • Be 18 years of age or older.
  • Be from a traditionally underrepresented group in technology.
  • Work or study in Computer Science, Computer Engineering, Information Technology, or a technical field related to software testing.
  • Be able to attend core dates of GTAC, November 15th - 16th 2016 in Sunnyvale, CA.


To apply:
Please fill out the following form to be considered for a travel scholarship.
The deadline for submission is June 1st.  Scholarship recipients will be announced on June 30th. If you are selected, we will contact you with information on how to proceed with booking travel.


What the scholarship covers:
Google will pay for standard coach class airfare for selected scholarship recipients to San Francisco or San Jose, and 3 nights of accommodations in a hotel near the Sunnyvale campus. Breakfast and lunch will be provided for GTAC attendees and speakers on both days of the conference. We will also provide a $50.00 gift card for other incidentals such as airport transportation or meals. You will need to provide your own credit card to cover any hotel incidentals.


Google is dedicated to providing a harassment-free and inclusive conference experience for everyone. Our anti-harassment policy can be found at:
https://www.google.com/events/policy/anti-harassmentpolicy.html
Categories: Software Testing

GTAC 2016 Registration is now open!

Google Testing Blog - Wed, 05/04/2016 - 08:27
by Sonal Shah on behalf of the GTAC Committee

The GTAC (Google Test Automation Conference) 2016 application process is now open for presentation proposals and attendance. GTAC will be held at the Google Sunnyvale office on November 15th - 16th, 2016.

GTAC will be streamed live on YouTube again this year, so even if you cannot attend in person, you will be able to watch the conference remotely. We will post the livestream information as we get closer to the event, and recordings will be posted afterwards.

Speakers
Presentations are targeted at students, academics, and experienced engineers working on test automation. Full presentations are 30 minutes and lightning talks are 10 minutes. Speakers should be prepared for a question and answer session following their presentation.

Application
For presentation proposals and/or attendance, complete this form. We will be selecting about 25 talks and 300 attendees for the event. The selection process is not first come, first served (no need to rush your application), and we select a diverse group of engineers from various locations, company sizes, and technical backgrounds.

Deadline
The due date for both presentation and attendance applications is June 1st, 2016.

Cost
There are no registration fees, but speakers and attendees must arrange and pay for their own travel and accommodations.

More information
Please read our FAQ for the most common questions:
https://developers.google.com/google-test-automation-conference/2016/faq.
Categories: Software Testing

Testers Don’t Prevent Problems

DevelopSense - Michael Bolton - Wed, 05/04/2016 - 05:55
Testers don’t prevent errors, and errors aren’t necessarily waste. Testing, in and of itself, does not prevent bugs. Platform testing that reveals a compatibility bug provides a developer with information. That information prompts him to correct an error in the product, which prevents that already-existing error from reaching and bugging a customer. Stress testing that […]
Categories: Software Testing

The V.5H Bug

QA Hates You - Tue, 05/03/2016 - 08:00

How prepared is your software for this sudden shift?

Venezuelans lost half an hour of sleep on Sunday when their clocks moved forward to save power, as the country grapples with a deep economic crisis.

The time change was ordered by President Nicolas Maduro as part of a package of measures to cope with a severe electricity shortage.

I’m calling this the V.5H bug.
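
If you want to check whether your own runtime's time zone data picked up the change, a small sketch (TypeScript, assuming a runtime with full ICU data such as a modern browser or recent Node.js) can print the America/Caracas offset on either side of the switch:

    // Venezuela moved from UTC-4:30 to UTC-4:00 on 1 May 2016.
    // Note: timeZoneName "longOffset" requires a fairly recent engine.
    function offsetInCaracas(when: Date): string {
      return new Intl.DateTimeFormat("en-US", {
        timeZone: "America/Caracas",
        timeZoneName: "longOffset",
      })
        .formatToParts(when)
        .find((part) => part.type === "timeZoneName")!.value;
    }

    console.log(offsetInCaracas(new Date("2016-04-30T12:00:00Z"))); // expect GMT-04:30
    console.log(offsetInCaracas(new Date("2016-05-02T12:00:00Z"))); // expect GMT-04:00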

Categories: Software Testing

Web Performance Implodes as Leicester City Wins Premier League

Perf Planet - Tue, 05/03/2016 - 07:53

For those of you who don’t follow football (the real one), Leicester City won the 2015/16 Premier League title in one of the most remarkable underdog triumphs in sporting history. This was the club’s first championship in their 132-year history.

In fact, Leicester City were 5,000/1 to win the Premier League last August.

That was 10 times more unlikely than finding the Loch Ness Monster, and almost three times more unlikely than finding flying pigs.

As expected, Twitter exploded after the win.

This had a ripple effect on the entire internet’s performance in the UK. On average, sites took twice as long to load during this period of mayhem.

 

The gambling sites, apart from losing money on the incredible odds, also lost big on performance; webpage response times jumped by almost 25 seconds in some cases.

Analyzing this bloodbath led us to the usual suspects: third-party content, specifically Twitter.

We haven’t been shy about discussing the impact of inappropriately placed third-party content like marketing tags and third-party cloud providers. Yet no one seems to pay heed.

Below is the general snapshot of the performance of most Twitter URLs on webpages.

Hence, every website that integrated Twitter and loaded it before Document Complete saw its Document Complete times spike by almost three times, with some increasing by as much as five times. Contrast that with websites that load Twitter after Document Complete, and the impact of Twitter on these websites is clear.

A consolidated scatterplot of the above drives home our point further: Loading third-party content before Document Complete is asking for trouble.

Social network integrations are essential for interacting with users and driving marketing campaigns; but integrating these components before your Document Complete is equivalent to handing your website’s remote control to a neighboring site. You could have the fanciest 4K television set, but without the remote control, you have no way to control it. Marketing tags are beneficial to your brand, but if your website doesn’t load, there’s nothing to promote.
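
One hedged way to keep that remote control in your own hands is simply to defer the third-party tag until after the load event, so it can no longer push out Document Complete. A minimal sketch (TypeScript; widgets.js is Twitter's public embed script, and the pattern applies to any tag):

    // Inject the third-party widget only after the page's own load event.
    window.addEventListener("load", () => {
      const script = document.createElement("script");
      script.src = "https://platform.twitter.com/widgets.js";
      script.async = true;
      document.body.appendChild(script);
    });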

Having said that, congratulations to Leicester City FC on winning the Premier League—even if they did finish 15th in our Web Performance Premier League (to be published soon).

The post Web Performance Implodes as Leicester City Wins Premier League appeared first on Catchpoint's Blog.

The five phases of performance maturity

Perf Planet - Mon, 05/02/2016 - 18:18

In my last post I discussed the importance of building a performance culture and the common traits of first-class teams to drive that culture across the organization.

The first step in reaching any destination is to understand your starting point. This post establishes a framework for assessing the state of your organization’s commitment to performance. The goal is to help you determine the gap between your current state and reaching your performance goals.

See if you recognize your company in any of the following categories. Obviously, the characteristics below are not hard and fast, and some companies might be further ahead in specific areas. That said, Howard Chorney*, director of solutions architecture here at SOASTA, and his team have found these groupings of capabilities pretty consistent across organizations.

Phase 1: Reactive

In this phase, there is a limited awareness of end user experience and application performance. You find yourself constantly reacting to application performance issues in the production environment. There is little to no front-end monitoring and you find out about site issues from your customer base. Ad hoc “war rooms” are being formed, but there is no formal application performance testing process.

Phase 2: Aware

You have a basic awareness of application performance and can identify issues. However, the ability to mitigate those issues is limited and time-consuming. While there is an understanding of a performance baseline and you can track performance trends, you’re still primarily reactive to issues in production. There is a performance testing team in place; however, mapping test cases to real business scenarios is limited.

Phase 3: Efficient

Good visibility exists into front-end and back-end application performance. Effective problem resolution is in place, with deep-dive diagnostics. Problems are identified and prioritized by business impact, and a high percentage of problems are discovered and mitigated in a pre-production environment. More realistic testing scenarios have been created; however, testing is still limited to behind the firewall. Basic-level key performance indicators (KPIs) have been established.

Phase 4: Advanced

There is broad visibility and deep-dive diagnostics for front-end and back-end performance. Automation exists for problem identification, analysis, and diagnosis. The majority of issues are identified during the application performance testing process, testing is being pushed left, and there is active testing of third-party services to understand their impact on end users. Initiatives are prioritized based upon business impact, and there is active testing in the production environment. Advanced-level KPIs lead to actionable insights from the data.

Phase 5: Proactive

At this phase, your capabilities checklist, in addition to having many of the positive traits of the previous phases, also includes these:

  • Active management of front-end and back-end performance, leveraging real-time visibility to meet and exceed robust KPIs that are mapped to business impact.
  • Collective intelligence from across the organization is used to achieve business agility and competitive advantage.
  • Active continuous application performance testing is executed throughout the entire software development life cycle, including comprehensive production testing processes.
  • C-level leadership takes an active role, supporting the performance team in driving a culture where everyone takes ownership and pride in performance as a key attribute of every web and mobile property.
Next step: Assess your organization’s performance maturity

Ultimately, the goal is to drive a continuous cycle of improvement by measuring, testing and optimizing performance. This means the right people (as described in my previous post), tools, and processes are in place, and these are constantly under evaluation for opportunities to enhance performance.

Given the ever-changing nature of applications and technology — as well as the clear connection between performance and business impact — the opportunity and need for performance improvement will never end.

Did you recognize your company in any of the above categories?

Take this short survey and find out your Organizational Performance Maturity Score (OPMI).

*Howard also runs the PerfBytes podcast, which I strongly recommend you check out.

Related reading

Choose an APM Tool for the Solution – not for the Problem!

Perf Planet - Mon, 05/02/2016 - 13:04

Just last week a senior Hybris consultant told me the story of a customer engagement on which he was working. This customer had problems, serious problems! We are talking about response times far beyond the most liberal acceptable standard! They were unable to solve the issue in their eCommerce platform – specifically Hybris. Although the eCommerce project was delivered by a […]

The post Choose an APM Tool for the Solution – not for the Problem! appeared first on about:performance.

Categories: Load & Perf Testing

Measuring the User Experience

Perf Planet - Mon, 05/02/2016 - 06:00

SpeedCurve’s sweet spot is the intersection of design and performance - where the user experience lives. Other monitoring services focus on network behavior and the mechanics of the browser. Yet users rarely complain that “the DNS lookups are too slow” or “the load event fired late”. Instead, users get frustrated when they have to wait for the content they care about to appear on the screen.

The key to a good user experience is quickly delivering the critical content.

SpeedCurve helps you do this by focusing on metrics that reflect what the user actually experiences. For web pages, this comes down to when the content gets rendered. In last month’s redesign, we added a new rendering chart. Here’s an example of Smashing’s rendering chart:

To emphasize the importance of rendering, we made this the first chart in the dashboard. The rendering chart includes metrics for Start Render, Speed Index, and Visually Complete. Start Render is the lower bound, indicating the render time for the first pixel. Visually Complete bookends that with a metric for rendering the last pixel. Speed Index sits in between, showing the average render time across all pixels. (Many people were surprised to find out that Speed Index is “expressed in milliseconds”.)

Rendering is important, but not all pixels have equal importance. Instead, most pages have critical design elements (e.g., the call-to-action or hero image). Knowing when this critical content is available is even more important than overall rendering metrics. Measuring critical content is done with Custom Metrics and the User Timing spec. We jazzed up the Custom Metrics chart as part of the recent redesign:
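
Under the hood, a Custom Metric of this kind is usually just a User Timing mark fired from the code path that renders the critical element. A minimal sketch (TypeScript; the mark name "hero-visible" is made up for illustration):

    // Fire this from wherever the hero image or call-to-action is rendered.
    performance.mark("hero-visible");

    // Later, e.g. in RUM beacon code, read the mark back; startTime is
    // milliseconds relative to navigation start.
    const [heroMark] = performance.getEntriesByName("hero-visible");
    if (heroMark) {
      console.log(`Hero content marked at ${heroMark.startTime.toFixed(0)} ms`);
    }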

If rendering and critical content get slower, the first place to investigate is the number of critical blocking resources. Since it’s hard for developers to track changes in the number of blocking scripts and stylesheets across releases, we added that as a new metric in SpeedCurve. Here’s the chart of critical blocking resources for NY Times:
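
If you want a quick local approximation of that count, the usual render-blocking candidates are synchronous scripts and stylesheets in the document head. A rough sketch (TypeScript; the selectors are simplified and should be adapted to your own markup):

    // Approximate count of render-blocking resources in the current page.
    const blockingScripts = document.querySelectorAll(
      "head script[src]:not([async]):not([defer]):not([type='module'])"
    ).length;
    const blockingStyles = document.querySelectorAll(
      "head link[rel='stylesheet']:not([media='print'])"
    ).length;
    console.log(`${blockingScripts} blocking scripts, ${blockingStyles} blocking stylesheets`);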

We’re excited to find such fertile ground for accelerating how our customers deliver their critical content to users. If you’d like to see these metrics for the user experience on your website, start a free trial today and give SpeedCurve a try.

Poor Web Performance Hampers Walmart, Lowe’s Shoppers

Perf Planet - Fri, 04/29/2016 - 12:46

Walmart and Lowe’s online shoppers encountered slow performance and timeouts today that prevented many from finding products or completing purchases. In addition to frustrating customers, these issues are likely to significantly impact the companies’ web sales.

The Lowe’s problem affected the site’s product search for a number of categories, including lamps and grills. The issue began late on Thursday, 28 April and continued to hamper searches on Friday afternoon, 29 April. Shoppers looking for these items eventually saw 303 redirect error messages (see below).

After investigating the cause, Catchpoint found that the Lowes.com search results page was taking longer than a minute to load for mobile site users (see below). Desktop users also encountered the same problem.

 

Meanwhile Walmart’s performance issue lasted only two hours, and occurred in the early morning of Friday, 29 April. Catchpoint explored the issue and identified the culprit: very large images on the site’s product detail page, which bloated the total page weight to roughly 29 MB. Looking even more closely, the root cause appeared to be human error: someone selected .tiff files, which are vastly bigger than the site’s standard .jpg images. At its peak the page’s total downloaded bytes reached 59 MB, stalling shoppers and potentially reducing revenues for the day.
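
A mistake like that is easy to catch with a small Resource Timing check that flags unusually heavy images. A sketch (TypeScript; the 1 MB threshold is arbitrary, and cross-origin entries report a transferSize of 0 unless Timing-Allow-Origin is set):

    // Flag images whose transfer size exceeds a chosen threshold.
    const HEAVY_BYTES = 1_000_000; // ~1 MB; pick what makes sense for your pages
    const heavyImages = (performance.getEntriesByType("resource") as PerformanceResourceTiming[])
      .filter((e) => e.initiatorType === "img" && e.transferSize > HEAVY_BYTES);

    for (const img of heavyImages) {
      console.warn(`${img.name}: ${(img.transferSize / 1_000_000).toFixed(1)} MB transferred`);
    }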

Issues like these, especially when they affect only a subset of users, can be difficult to detect – but they can cause immediate and lasting damage to customer loyalty and brand reputation. Synthetic testing located the sources of both issues, which means that active monitoring could have found and possibly preempted them before they frustrated real shoppers. These examples also demonstrate that sophisticated, global brands are far from immune to web performance lapses and process breakdowns that impact the end-user experience.

The post Poor Web Performance Hampers Walmart, Lowe’s Shoppers appeared first on Catchpoint's Blog.

How Fast is Fast Enough?

Perf Planet - Thu, 04/28/2016 - 13:19

All of our customers understand the importance of having high quality websites. They work hard to ensure an excellent customer experience and maximum availability. Ensuring fast response times is business as usual for them—but how fast is fast enough?

Most people understand that the economic law of diminishing returns (beyond a certain point, added investments fail to increase profit, production, etc.) also applies to customer experience. Using this, we can now try to calculate the point at which a decrease in web response times no longer results in an improved conversion rate or additional revenue.

Using Catchpoint Glimpse, our real user measurement tool, we can determine exactly how user behavior is changing as response times are increasing.

Here is an example of a site that is currently starting an optimization project. The graph is showing all pages being accessed from all devices and from all across the world for a single day. By plotting the response time against the number of page views we can see that for this site the number of page views is increasing until we reach a response time of 3.2 seconds. Between 3.2 seconds and 5.4 seconds we can see that many customers are tolerating the slowness of the page, though page views are no longer increasing. Once response times go over 5.4 seconds users start to abandon the site.
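
The analysis itself is straightforward once you have per-page-view timings: bucket the load times and watch where the page-view counts stop growing. A sketch of that bucketing step (TypeScript; loadTimesMs stands in for whatever your RUM beacons collect, and the numbers in the example are invented):

    // Group page views into half-second load-time buckets.
    function pageViewsByLoadTime(loadTimesMs: number[], bucketMs = 500): Map<number, number> {
      const buckets = new Map<number, number>();
      for (const t of loadTimesMs) {
        const bucket = Math.floor(t / bucketMs) * bucketMs;
        buckets.set(bucket, (buckets.get(bucket) ?? 0) + 1);
      }
      return new Map(Array.from(buckets.entries()).sort((a, b) => a[0] - b[0]));
    }

    // Views climb until the threshold where users start abandoning the site.
    for (const [bucket, views] of pageViewsByLoadTime([1800, 2400, 3100, 3300, 5600])) {
      console.log(`${bucket}-${bucket + 500} ms: ${views} page views`);
    }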

Using this data for a longer time period and with breakdowns per geography and device type, the customer was able to understand the importance of speed and decided to start using a CDN to make its site faster. The customer has set target response times for the different page categories. The home page has to load in 2 seconds, product pages in 2.5 seconds and all other pages in 3 seconds. These are realistic targets that will have a positive impact on the customer’s business.

The customer can then use synthetic monitoring to ensure that these targets are continuously met, diagnose potential bottlenecks and hold their CDN to their service level agreement.

How fast do you want your site to be? Let us know on LinkedIn.

The post How Fast is Fast Enough? appeared first on Catchpoint's Blog.

Webcast: Emerging languages: Kotlin, Rust, Elm, and Go

O'Reilly Media - Thu, 04/28/2016 - 00:00
In an online conference inspired by the Emerging Languages track at the upcoming O'Reilly Open Source Convention (May 16–19 in Austin, TX), you'll get the lowdown on four relative newcomers: Kotlin, Rust, Elm, and Go.

Webcast: Ensuring QoS for Hadoop - Automatically prevent performance issues to guarantee SLAs

O'Reilly Media - Thu, 04/28/2016 - 00:00
Join Pepperdata co-founder and CEO Sean Suchter in this webinar to learn how you can automatically run more jobs, faster; automatically prevent performance issues; and spend 90% less time troubleshooting.

Webcast: Jenkins 2.0

O'Reilly Media - Thu, 04/28/2016 - 00:00
Patrick Wolf offers an overview of what's new in Jenkins 2.0, demonstrates how to configure traditional jobs and orchestrate delivery pipelines, and discusses where Jenkins is going in the future.

Webcast: KeystoneML: Optimized large-scale machine-learning pipelines on Apache Spark

O'Reilly Media - Thu, 04/28/2016 - 00:00
You'll learn the KeystoneML programming model, how to work with KeystoneML to construct new pipelines, how salient aspects of the KeystoneML optimizer work, and how KeystoneML achieves high performance and scalable model training while maintaining a high-level programming interface.

Webcast: Don't forget the fourth V - veracity! What you need to know about data quality in Hadoop.

O'Reilly Media - Thu, 04/28/2016 - 00:00
In this webcast, you will learn how to blend traditional questions with a new way of thinking to ensure data quality.

Webcast: Building Slack bots

O'Reilly Media - Thu, 04/28/2016 - 00:00
Chris Dawson demonstrates how to write a Slack bot using the Hubot framework from GitHub.

Webcast: Embarking on your own Big Data Journey

O'Reilly Media - Thu, 04/28/2016 - 00:00
Carey James will discuss the Big Data journey and typical obstacles faced in the process, as well as steps within the journey and typical technology solutions to look for.

Webcast: Introduction to the SciPy Ecosystem

O'Reilly Media - Thu, 04/28/2016 - 00:00
Join Ben Root as he offers a high-level overview of the SciPy ecosystem and highlights some of his favorite tools to get you started with SciPy.
