Netflix published a great article, Making Netflix.com Faster, about their move to Node.js. The article mentions how they use performance.timing.domInteractive to measure improvements:
In order to test and understand the impact of our choices, we monitor a metric we call time to interactive (tti).
Amount of time spent between first known startup of the application platform and when the UI is interactive regardless of view. Note that this does not require that the UI is done loading, but is the first point at which the customer can interact with the UI using an input device.
For applications running inside a web browser, this data is easily retrievable from the Navigation Timing API (where supported).
I keep my eye out for successful tech companies that use something other than window.onload to measure performance. We all know that window.onload is no longer a good way to measure website speed from the user’s perspective. While domInteractive is a much better metric, it also suffers from not being a good proxy for the user experience in some (fairly common) situations.
The focus of Netflix’s redesign is to get interactive content in front of the user as quickly as possible. This is definitely the right goal. The problem is, domInteractive does not necessarily reflect when that happens. The discrepancies are due to scripts and fonts that block rendering. Scripts can make domInteractive measurements too large when the content is actually visible much sooner. Custom fonts can make domInteractive measurements too small, suggesting that content was interactive sooner than it really was.
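For reference, here is roughly how a page reads this metric. This is a sketch using the older Navigation Timing (Level 1) `performance.timing` interface; the helper takes the timing object as a parameter so the arithmetic works anywhere, not just in a browser.

```javascript
// Compute domInteractive relative to the start of navigation.
// In a browser you would pass window.performance.timing; here the
// timing object is a parameter so the math can be checked anywhere.
function domInteractiveTime(timing) {
  return timing.domInteractive - timing.navigationStart;
}

// In a browser (sketch):
//   window.addEventListener('load', function () {
//     var tti = domInteractiveTime(performance.timing);
//     // beacon `tti` to your analytics endpoint
//   });
```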
To demonstrate this I’ve created two test pages. Each page has some mockup “interactive content” – links to click on and a form to submit. The point of these test pages is to compare the domInteractive measurement with when this critical content becomes interactive.
Loading the domInteractive JS – too conservative test page, we see that the critical content is visible almost immediately. Below this content is a script that takes 8 seconds to load. The domInteractive time is 8+ seconds, even though the critical content was interactive in about half a second. If you have blocking scripts on the page that occur below the critical content, domInteractive produces measurements that are too conservative – they’re larger than when the content was actually interactive.
On the other hand, loading the domInteractive Font – too optimistic test page, the critical content isn’t visible for 8+ seconds. That’s because this content uses a custom font, and that font file takes 8 seconds to load. However, the domInteractive time is about half a second. If you use custom fonts for any of your critical content, domInteractive produces measurements that are too optimistic – they’re smaller than when the content was actually interactive. (Unless you load your fonts asynchronously like Typekit does.)
One solution is to avoid these situations: always load scripts and fonts asynchronously. While this is great, it’s not always possible (I’m looking at you, third-party snippets!) and it’s hard to ensure that everything stays async as code changes.
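As a sketch of the async approach for scripts, something like the following injects a script element that never blocks parsing or rendering. The `doc` parameter stands in for the browser `document` (an assumption made here so the helper can be exercised outside a browser):

```javascript
// Inject a script tag with the async attribute so it never blocks HTML
// parsing or rendering. `doc` is a document-like object.
function loadScriptAsync(doc, src) {
  var script = doc.createElement('script');
  script.src = src;
  script.async = true; // the key bit: the parser keeps going while this loads
  doc.head.appendChild(script);
  return script;
}
```

Fonts can be handled in a similar spirit by loading the @font-face CSS from script, which is essentially what Typekit’s asynchronous embed code does.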
The best solution is to use custom metrics. Today’s web pages are so complex and unique, the days of a built-in standard metric that applies to all websites are over. Navigation Timing gives us metrics that are better than window.onload, but teams that are focused on performance need to take the next step and implement custom metrics to accurately measure what their users are experiencing.
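A custom metric can be as simple as recording a timestamp from an inline script placed immediately after the critical content, then measuring against the start of navigation. A minimal sketch in the spirit of the User Timing API (the class and metric names here are illustrative, not any particular library’s API):

```javascript
// Record named marks and measure them against a start time, modeled
// loosely on performance.mark / performance.measure.
function MetricRecorder(startTime) {
  this.startTime = startTime; // e.g. navigationStart
  this.marks = {};
}
MetricRecorder.prototype.mark = function (name, timestamp) {
  this.marks[name] = timestamp;
};
MetricRecorder.prototype.measure = function (name) {
  return this.marks[name] - this.startTime;
};
```

In a real page you would instead call performance.mark('critical-content-interactive') from an inline script right after the critical content, and read it back later with performance.getEntriesByName.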
NOTE: I hope no one, especially folks at Netflix, think this article is a criticism of their work. Quite the opposite, their article is what gave me the idea to do this investigation. Netflix is a great tech company, especially when it comes to performance and sharing with the tech community. It’s also likely that they don’t suffer from these custom font and blocking script issues, and thus domInteractive is producing accurate interactive times for their site.
The deadline to apply for GTAC 2015 is this Monday, August 10th, 2015. There is a great deal of interest to both attend and speak, and we’ve received many outstanding proposals. However, it’s not too late to submit your proposal for consideration. If you would like to speak or attend, be sure to complete the form by Monday.
We will be making regular updates to the GTAC site (developers.google.com/gtac/2015/) over the next several weeks, and you can find conference details there.
For those that have already signed up to attend or speak, we will contact you directly by mid-September.
The new Microsoft Edge browser is here
New Shaka player release from Google devs
The DASH (Dynamic Adaptive Streaming over HTTP) client removes the disadvantages of typical streaming services, such as proprietary protocols or required hardware. In addition, Shaka supports delivery of protected content via the EME APIs to obtain licenses and perform decryption. Because the high-performance video player is plugin-free and built on web standards, it can be used on a large array of devices. The Shaka project can be downloaded and tested now.
Facebook releases new Security Checkup tool worldwide
Facebook’s Security Checkup tool is now being released to all users worldwide. The tool was built into their platform to guide users through each of the available security options, one at a time. The tool allows users to change their passwords, enable login alerts, and clean up login sessions simply by clicking through the screen prompts. Facebook’s Security Checkup was introduced earlier this year in a limited test release, allowing users to test and give feedback on the tool. The checkup should be positioned at the top of your Facebook newsfeed, ready to try out.
#NoHacked campaign is back from Google
In a continuation of their #NoHacked campaign, Google focused attention on protecting sites from hacking. Google is engaging webmasters on Twitter, Google+, and will hold a live Q&A hangout on hacking prevention and recovery in the next few weeks. In their first installment, titled “How to avoid being the target of hackers”, Google offered basic safety tips. To keep your site safe they suggested strengthening your account security, keeping your site’s software updated, researching how your web host handles security issues, and using Google tools to stay informed of potentially compromised content on your site.
“Hell is where I was born/Hell is where I was raised….” Hellyeah doing “Hush”:
This week in web performance news we saw an uproarious reaction to Amazon’s Prime Day, Google add the ability to purchase on Google, Facebook begin testing e-commerce pages, and Firefox temporarily ban Adobe Flash for security reasons.
#PrimeDayFail? Not according to Amazon
To celebrate its 20th anniversary this week, Amazon hyped up its first-ever Prime Day, a one-day online sales event for Prime members. Advertised to have more deals than Black Friday, the event turned out to be what many shoppers considered an overhyped e-garage sale. As many disappointed customers took to Twitter to complain about the false advertising, many others took to Twitter to enjoy the hilarity of the “Lightning Deals”. If you were looking for packs of granny panties, 55-gallon drums of lubricant, a plate of ham, or family packs of brass knuckles, you were in luck! Despite the complaints, Greg Greeley, Amazon Prime’s Vice President, extended a thank you “to the hundreds of thousands of new members who signed up on Prime Day, and our tens of millions of existing members for making our first ever Prime Day a huge success.” Global records were broken, with order growth exceeding Black Friday 2014 by 18%. Customers ordered an average of 398 items per second! With those numbers, Greeley declared that Amazon will “definitely be doing this again.”
Google releases “Purchases on Google”
Google has now made it possible to buy products directly from mobile search results. The “buy” option will appear on promoted mobile search results and will simply facilitate the purchase from merchants, rather than cut out the middleman entirely. Users who tap eligible advertisements with the “buy” option will be taken to a retailer-branded product page hosted by Google, in an effort to improve mobile conversions. The new feature will also give users the ability to save payment credentials for future purchases.
Facebook testing e-commerce pages
Facebook will soon offer businesses with Facebook pages a way to sell products directly through the platform. Emma Rodgers, a Facebook product marketing manager, remarked, “With the shop section on the page, we’re now providing businesses with the ability to showcase their products directly on the page.”
By allowing businesses to build Facebook e-commerce pages, Facebook is evolving into much more than a place to socialize and share content. The social media giant recently introduced the ability to send money through Messenger, and is rumored to be developing a virtual Messenger assistant to facilitate product research and purchases.
Firefox bans Adobe Flash plugin, calls for security updates
After blocking all versions of Adobe Flash from running in Firefox, Mozilla reinstated the plugin Wednesday. Two zero-day flaws prompted the default blocking of Flash Player, which prevented millions of users from watching videos or interacting with certain content in the browser. Adobe Flash has been widely criticized by tech experts due to security and compatibility issues. Adobe resolved the vulnerabilities in a security fix released two days later in the latest version of Flash, and Firefox once again enabled Flash by default in the browser. A new version was released for Linux as well.
A script is a browser-recorded HAR file that you can modify in LoadStorm. Parameterizing scripts for a load test can seem like a daunting task, but we have a few tips on making it go more smoothly.

1. Keep load testing scripts simple.
Making the script behave dynamically for each virtual user will depend greatly on the complexity of the application and the recorded behavior. Generally we recommend keeping each script as simple as possible, because the goal is to test the target server’s scalability, though testing functionality at scale is also good within reason. The ease of recording everything you do in a browser can lead to overdoing the user behavior.
Here are some examples of different complexities of recorded user behavior:
- Simple – Visit a site to browse some pages.
- Normal – Browse a site, Log in, submit a form, and log out.
- Complex – Browse a site, Log in, begin editing an ajax form that dynamically changes as you choose options, save the form, and log out.
Some web applications cannot avoid complexity during scripting due to their infrastructure. This is understandable, as there are a vast number of ways to build a web application. Some of this complexity can be seen when modifying form data vs. payloads.

2. Modify multiple requests at once.
One of my favorite time-saving tricks is selecting a group of requests that share a Query String, Form Input, Cookie, or Request Header needing the same dynamic token or user-data parameterization, and making that change to all of them at once. This option, like everything, has its pros and cons. If one of those requests also needs a modification for a parameter it does not share with the other requests you just lumped together, you’ll need to modify it on its own. Be careful, though, not to overwrite the parameter you just changed in the multi-request modification.

3. Recover a different script version with the History tab.
A recovery option is nice to have if there is ever a problem with a script executing, or you simply wish to look at the modifications you made in an earlier execution. You can switch to the script’s History tab to open up a previous execution as a copy. This new copy will contain all of the request modifications that were present during that execution.

4. Swap servers in a script.
Sometimes you built your recording from a development server, or from your production server, and now have another domain, sub-domain, or IP address that you wish the script to target. Normally we recommend making a new recording against the other domain, but if it’s an exact copy of the site, all you need to do is change the target server to fix the script. This can save you the time of redoing all of a script’s parameterizations. The process is fairly straightforward; you can use the following steps or watch this video tutorial:
- Open the script you wish to modify.
- Switch to the Parameterization tab.
- Use the servers drop-down filter to select the server that you wish to change.
- Click the Select All button.
- From the modification options below the requests table, click the Server button.
- Using the constant option, type the new server address you wish to replace it with, including https:// if it will change to a secure connection.
- Close the modification window.
- Repeat these steps if you have more than one server you need to change.
- At the top-right click the Execute now button to commit the pending modifications.
Occasionally customers wish to test their target server’s ability to handle many file uploads. Unfortunately this is somewhat limited, but still doable. Using Firefox it is possible to record the entire file you wish to upload as a base64-encoded string within the request payload. Depending on the size of the file, this string gets very large, possibly too large to be recorded properly. We recommend limiting this to very small files, preferably 1MB or less. The process of recording this is the same as any Firefox recording, but Firefox actually stores the upload in the request, while other browsers omit the base64 string from the HAR file. Here is an example of what this request’s payload would look like:
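A hedged illustration of the general shape of such a payload. The field names follow the HAR postData format; the boundary, field name, filename, and base64 content below are made up for illustration:

```json
{
  "postData": {
    "mimeType": "multipart/form-data; boundary=---------------------------1234567890",
    "text": "-----------------------------1234567890\r\nContent-Disposition: form-data; name=\"upload\"; filename=\"example.png\"\r\nContent-Type: image/png\r\n\r\niVBORw0KGgoAAAANSUhEUgAA...(base64 data continues)...\r\n-----------------------------1234567890--\r\n"
  }
}
```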
Thank you for checking out this post on tips and tricks for creating load testing scripts, and please check out the previous tips and tricks on troubleshooting scripts. If you’re new to LoadStorm, we offer step-by-step instructions to get you started on your first test; for in-depth documentation, visit our learning center and video tutorials. As always, ask for help or more tips if there is a particular area you’re having trouble with.
I’d love to reboot my blog and monthly newsletter with inspiring and educational information to help you get better outcomes with your teams. I finally have some time to write again, but I’d like to try something a bit different and ask people what they want to learn about.
If you could take just five minutes and tell me the single biggest challenge you’re struggling with at the moment, I’d appreciate it very much. More importantly, I’ll be able to use that information to cover the topics you specifically want to know about.
I’m collecting the information using SurveyGizmo. Please fill in this quick survey to help steer me!
Sick Puppies, “You’re Going Down”
Apple, Facebook, Microsoft, and Twitter compromised for industrial espionage
A number of multinational companies, including Apple, Facebook, and Twitter, have been compromised by a group suspected of industrial espionage. Recent reports by Symantec and Kaspersky Lab exposed the hacking collectives as a powerful group that has compromised the billion-dollar companies since 2011. Symantec suspects the highly trained hackers may be independent “hackers for hire”, interested in more than credit card information or customer databases. Instead, they suspect the threat actor is focused on high level corporate information they can profit from, such as insider trading information. Symantec reported that 49 different organizations around the globe had been attacked; Kaspersky noted the variety of different companies ranged from healthcare to Bitcoin-related companies, as well as legal companies involved in acquisition deals. Neither security group was able to determine the origin of the attacks, but warned organizations that the threat is still active and should be taken seriously.
Google clarifies statements about link building being harmful
A year after Google published an extensive post about unnatural links, it shared an additional post on the Google Webmaster Blog to help identify unnatural links and clarify the potential consequences of their use. Buying links in order to distort page rank has long been against Google policy, but the post drew a lot of attention when it instructed webmasters not to “buy, sell, exchange or ask for links.” Google has since clarified its stance, specifying that you cannot “buy, sell or ask for links that may violate [Google] linking webmaster guidelines.” The wording supports the advice of Google’s John Mueller to avoid link building because “only focusing on links is probably going to cause more problems for your website than it actually helps.”
U.S. group petitions for expansion of Right to Be Forgotten rules
Consumer Watchdog, a U.S. consumer rights organization, has petitioned the U.S. Federal Trade Commission to enact “Right to Be Forgotten” rules for Google searches. The European Court of Justice ruling requires Google to de-list search results tied to a person’s name if the information is inaccurate or outdated. In the complaint, Consumer Watchdog’s Privacy Project director, John Simpson, urged the commission to investigate and take action. “Google’s refusal to consider such requests in the United States is both unfair and deceptive, violating Section 5 of the Federal Trade Commission Act,” Simpson stated. The extent of the ruling has been highly scrutinized over the past year, prompting France to give Google a 15-day timeframe to begin delisting links across the board before facing sanctions. Nearly a week ago, the Russian Parliament approved a similar “Right to Be Forgotten” law, allowing for broader removals than the European law. The Russian version has been criticized as too broad, however, because it would allow people to object to content in general and ask for the links to be removed from search engines. Yandex, Russia’s largest search engine, says that “the private interest and the public interest should exist in balance.”
OpenSSL patch addresses “high severity” vulnerability
An OpenSSL vulnerability has prompted action by the project team. The July 9th release, versions 1.0.2d and 1.0.1p, addresses a single “high severity” security defect. OpenSSL “high severity” flaws typically include risks such as denial-of-service attacks, server memory leaks, and remote code execution. Experts advise users to patch as soon as possible, since the public release of the information means attackers can use the vulnerability to their advantage.
I recently replaced an old-timey thermostat that measured the temperature in Roman numerals with a new thermostat that the blister case said was programmable, but that doesn’t know Java, Ruby, Python, or C# at all (which is just as well, since any programming I did in those languages would undoubtedly set my household temperature to null).
Inside, though, note the guide to the internal switches, particularly the last:
To turn the battery monitor off, you have to set the switch to the on position. It’s akin to clicking Cancel and getting a confirmation dialog box with a Cancel button that cancels the cancellation and an OK button that actually cancels, with some confusing message on the dialog box mixed in to confound the user.
Look closer, though.
There is no switch #4 on the board.
Never mind, it’s more like a 404 error then.
It’s good to see our friends on the hardware side of things getting into the slapdash action we’re accustomed to in software development.
And by ‘good,’ I mean terrifying.
Billy Corgan’s working life is not unlike ours, as he explains in Smashing Pumpkins’ “Bullet with Butterfly Wings”:
Apparently, the Twitter Promoted Tweet on mobile devices now features Fact Checking:
Otherwise, there’s some sort of bug displaying variable values in the tweet, and it would be impossible for a bug like that to make it to production.
Sidewalk Labs is on a mission to bring free wifi to the world
Sidewalk Labs, a startup backed by Google, has begun re-purposing old phone booths in New York City by turning them into free Wi-Fi hotspots. The startup’s initiative to improve city life through tech innovation launched last month; the project began as a winning idea in the city’s Reinvent Payphones Design Challenge, which aimed to find a creative new use for the phone booths. The 10,000 obsolete phone booths will soon be converted into Wi-Fi hotspots and information kiosks that can be used to charge cell phones, make calls, and provide public transit information. This Google project is beginning as a trial in New York, but the giant anticipates it will eventually spread to other cities all over the world.
Engineers surpass power limits for fiber optic communication
Electrical engineers at the University of California, San Diego have surpassed the maximum power limits for fiber optic communication. The power limit previously determined the maximum distance information could be transmitted through fiber optic cables and be accurately received. The photonics researchers compared their approach to a concertmaster tuning the instruments in an orchestra, using a frequency comb to synchronize the optical carriers that propagate through the fiber. The implications of the discovery are an increase in data transmission rates in fiber optic cables, along with eliminating the need for electronic regenerators within fiber links.
Google acts on revenge porn
Google is now taking steps to remove “revenge porn” from its search results. Revenge porn is sexually explicit media posted online without the consent of the people it features. The damaging content is often used as a form of cyber extortion, with sites requiring payment to remove the content. While Google does not have the power to remove the images from the websites hosting them, victims of this assault will soon be able to submit a form to have the content removed from Google search results.
“Our philosophy has always been that Search should reflect the whole web. But revenge porn images are intensely personal and emotionally damaging, and serve only to degrade the victims—predominantly women. So going forward, we’ll honor requests from people to remove nude or sexually explicit images shared without their consent from Google Search results.” -Amit Singhal, SVP, Google Search
The form will be made available in the coming weeks via an update to their post.
Google adds un-do send feature to Gmail
We’ve all experienced the feeling of regret after firing off an email before it was ready. Luckily, the people at Google have probably experienced it too, and will soon give us the ability to take back that message that wasn’t ready. After years of experimenting, Google has now formally added the “undo send” option for all Gmail users. Setting a delay of 5 to 30 seconds gives users a “send cancellation period” to make sure they really wanted to send that message at that time. To try it out yourself, click the Settings cogwheel in Gmail and enable the feature on the General tab.
Recently, I had to test an animation in a Web browser that occurred when the user scrolled down the page, so I had to figure out as many ways as possible to scroll the viewable area of the page to ensure the animation worked with each.
With apologies to Paul Simon, there must be fifty ways to scroll your window. Here are a few:
Press the down arrow, Barrow.
Press the arrow up, Hup.
Page up and page down, clown.
And move the view around.
You can push Home, Gome.
Roll the mouse wheel, Lucille.
Press the End key, Lee.
And it moves magic’ly.
Mobile tap and drag, Dag.
Tap and give it a flick, Rick.
Tap on the SPACE, Ace.
Make sure it’s in its place.
Click the mouse wheel and get the scroll icon, Ryan.
Keep hitting Tab, Gab.
Slide the vertical bar, Dar.
Maybe we’re carrying the joke on too far.
Also, thanks to @mrpjones and @kinofrost on Twitter; they get co-songwriting credits and will get to share the royalties when this baby charts.
The GTAC (Google Test Automation Conference) 2015 application process is now open for presentation proposals and attendance. GTAC will be held at the Google Cambridge office (near Boston, Massachusetts, USA) on November 10th - 11th, 2015.
GTAC will be streamed live on YouTube again this year, so even if you can’t attend in person, you’ll be able to watch the conference remotely. We will post the live stream information as we get closer to the event, and recordings will be posted afterward.
Presentations are targeted at students, academics, and experienced engineers working on test automation. Full presentations are 30 minutes and lightning talks are 10 minutes. Speakers should be prepared for a question and answer session following their presentation.
For presentation proposals and/or attendance, complete this form. We will be selecting about 25 talks and 200 attendees for the event. The selection process is not first come, first served (no need to rush your application), and we select a diverse group of engineers from various locations, company sizes, and technical backgrounds (academic, industry expert, junior engineer, etc).
The due date for both presentation and attendance applications is August 10th, 2015.
There are no registration fees, but speakers and attendees must arrange and pay for their own travel and accommodations.
You can find more details at developers.google.com/gtac.
Readers of my blog know my stance on UI automation. But, as I’ve forgotten my StickyMinds password and the answer is longer than 140 characters, I’m responding here.
This article from Justin Rohrman talks about the coolness of Selenium for UI testing. In a paragraph called “Why the UI”, Justin wrote:
The API and everything below that will give you a feel for code quality and some basic functionality. Testing the UI will help you know things from a different perspective: the user’s.
I like everything else in the article, but that second sentence kills me. Writing automated tests for the UI is as close to a user perspective as I am to the moon (I’m only on the 20th floor). I’m going to do Justin a favor and rewrite that paragraph for him here. Justin – if you read this, feel free to copy and paste the edit.
…some basic functionality. Testing the UI is difficult and prone to error, and automation can never, ever in a million years replace, replicate, or mimic a real user’s interaction with the software. However, it’s sometimes convenient, and often necessary, to write UI automation for web pages, and in those cases Selenium is the obvious choice.
Justin – your work is good – I just disagree (a LOT) with the trailing sentence of the paragraph in question.
Back to work for me…
It’s been a long time since I have had to talk so much, but I had a great time, and met some great people.
As promised (to many people in my talks), here are the links to my presentations.