Feed aggregator

1 out of 6 internet users in the US don’t have broadband access

Web Performance Today - Thu, 01/29/2015 - 12:55

As of today, the Federal Communications Commission has updated its definition of “broadband” from 4 Mbps to 25 Mbps (downloads). In effect, this means that 17% of internet users in the United States now don’t have broadband access.

This is huge news. Here’s why.

From the FCC’s press release:

Broadband deployment in the United States — especially in rural areas — is failing to keep pace with today’s advanced, high-quality voice, data, graphics and video offerings, according to the 2015 Broadband Progress Report adopted today by the Federal Communications Commission.

…The 4 Mbps/1 Mbps standard set in 2010 is dated and inadequate for evaluating whether advanced broadband is being deployed to all Americans in a timely way, the FCC found.

This is no longer considered “broadband”.

In a post I wrote just last week, I talked at length about the widely held misconception — even within the tech community — that non-broadband users are a tiny, negligible minority.

Most city dwellers have a hard time swallowing the idea that a sizable chunk of the population experiences download speeds of less than 20 Mbps. (This is no doubt due to the fact that only 8% of urban Americans don’t have broadband access.)

What does this mean for site owners?

This misconception that non-broadband users are freakish outliers is a major issue when it prevents us from even acknowledging that there is a performance problem. All too often, site owners, developers, designers, and anyone else who interacts with their company website — all of whom generally fall into the category of urban broadband user — assume that their own speedy user experience is typical of all users.

The FCC’s definition puts non-broadband users on the centre stage. Seventeen percent of the population — 55 million people — is hard to ignore.

Some of the FCC’s other findings include:

  • More than half of all rural Americans lack broadband access, and 20% don’t even have access to 4 Mbps service.
  • 63% of residents on Tribal lands and in US territories lack broadband access.
  • 35% of US schools lack access to “fiber networks capable of delivering the advanced broadband required to support today’s digital-learning tools”.
  • Overall, the broadband availability gap closed by only 3% last year.
  • When broadband is available, Americans living in urban and rural areas adopt it at very similar rates.

100 Mbps could be just around the corner

The FCC has also made it clear that this update is just the beginning. According to FCC Commissioner Jessica Rosenworcel:

“We invented the internet. We can do audacious things if we set big goals, and I think our new threshold, frankly, should be 100Mbps. I think anything short of that shortchanges our children, our future, and our new digital economy.”

It’s going to be very interesting to see how the major service providers respond to this announcement. I’m looking forward to seeing what difference this update makes — both to the level of service and to the way services are marketed to subscribers.

The post 1 out of 6 internet users in the US don’t have broadband access appeared first on Web Performance Today.

Key Performance Metrics For Load Tests Beyond Response Time - Part II

In Part I of this blog I explained which metrics on the Web Server, App Server and Host allow me to figure out how healthy the system and application environment is: Busy vs. Idle Threads, Throughput, CPU, Memory, etc. In Part II, I focus on the set of metrics captured from within the application […]

The post Key Performance Metrics For Load Tests Beyond Response Time - Part II appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

Very Short Blog Posts (23) – No Certification? No Problem!

DevelopSense - Michael Bolton - Wed, 01/28/2015 - 01:14
Another testing meetup, and another remark from a tester that hiring managers and recruiters won’t call her for an interview unless she has an ISEB or ISTQB certification. “They filter résumés based on whether you have the certification!” Actually, people probably go to even less effort than that; they more likely get a machine to […]
Categories: Software Testing

Testing on the Toilet: Change-Detector Tests Considered Harmful

Google Testing Blog - Tue, 01/27/2015 - 17:43
by Alex Eagle

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.


You have just finished refactoring some code without modifying its behavior. Then you run the tests before committing and… a bunch of unit tests are failing. While fixing the tests, you get a sense that you are wasting time by mechanically applying the same transformation to many tests. Maybe you introduced a parameter in a method, and now must update 100 callers of that method in tests to pass an empty string.

What does it look like to write tests mechanically? Here is an absurd but obvious way:
// Production code:
def abs(i: Int)
  return (i < 0) ? i * -1 : i

// Test code:
for (line: String in File(prod_source).read_lines())
  switch (line.number)
    1: assert line.content equals "def abs(i: Int)"
    2: assert line.content equals "return (i < 0) ? i * -1 : i"
That test is clearly not useful: it contains an exact copy of the code under test and acts like a checksum. A correct or incorrect program is equally likely to pass a test that is a derivative of the code under test. No one is really writing tests like that, but how different is it from this next example?
// Production code:
def process(w: Work)
  firstPart.process(w)
  secondPart.process(w)

// Test code:
part1 = mock(FirstPart)
part2 = mock(SecondPart)
w = Work()
Processor(part1, part2).process(w)
verify_in_order
  was_called part1.process(w)
  was_called part2.process(w)
It is tempting to write a test like this because it requires little thought and will run quickly. This is a change-detector test—it is a transformation of the same information in the code under test—and it breaks in response to any change to the production code, without verifying correct behavior of either the original or modified production code.

Change detectors provide negative value: they do not catch any defects, and their added maintenance cost slows down development. These tests should be rewritten or deleted.
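
For contrast, here is a minimal sketch of a behavior-verifying rewrite of the example above, in Java with JUnit. The Processor, Work, FirstPart, and SecondPart types come from the pseudocode; the Work.isComplete() accessor is an assumption added for illustration:

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class ProcessorTest {
    @Test
    public void processCompletesTheWork() {
        // Use real (or fake) collaborators and assert on the observable
        // outcome, not on which internal methods were called in what order.
        Processor processor = new Processor(new FirstPart(), new SecondPart());
        Work w = new Work();

        processor.process(w);

        // Survives any refactoring that preserves process()'s contract.
        assertTrue(w.isComplete());
    }
}

Because the test depends only on the contract of process(), the behavior-preserving refactoring described at the top of this post no longer breaks it.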

Categories: Software Testing

Like Every Michelin Commercial Ever

QA Hates You - Tue, 01/27/2015 - 04:30

Job posting:

Can improved automobile tyres really make the world a better place? Should we trust developers to be the ones to change the tyres?

To be honest, probably better than me: the last time I changed a tire, I cross-threaded two of the lug nuts and then snapped one of them off (with only a lug wrench, sir; I was motivated to remove that bolt). BECAUSE I BREAK THINGS.

Categories: Software Testing

AWS Aurora – The Game Changer

LoadStorm - Mon, 01/26/2015 - 09:30

Last November, Amazon Web Services (AWS) announced a new database service, codenamed Aurora, which appears to be a real challenger to commercial database systems. AWS will offer this service at a very competitive price, which they claim is one-tenth that of leading commercial database solutions. Aurora has a few drawbacks, some of which are temporary, but its many benefits far outweigh them.

Benefits

With a special configuration they’ve come up with that tightly integrates the database and hardware, Aurora delivers over 500,000 SELECTs/sec and 100,000 updates/sec. That is five times higher than MySQL 5.6 running the same benchmark on the same hardware.

This new service will utilize a sophisticated redundancy method of six-way replication across three availability zones (AZs), with continuous backups to AWS Simple Storage Service (S3) to maintain 99.999999999% durability. In the event of a crash, Aurora is designed to recover almost instantaneously and continue to serve your application data by performing an asynchronous recovery on parallel threads. Because of the amount of replication, disk segments are easily repaired from the other volumes that make up the cluster. This ensures that the repaired segment is current, which avoids data loss and reduces the odds of needing to perform a point-in-time recovery. If a point-in-time recovery is needed, the S3 backup can restore to any point in the retention period up to the last five minutes.

Aurora also has a survivable cache, meaning the cache is maintained after a database shutdown or restart, so there is no need for the cache to “warm up” through normal database use. It also offers a custom feature to input special SQL commands that simulate database failures for testing.

An Aurora DB cluster consists of two kinds of instances:

  • Primary instance – Supports read-write workloads, and performs all of the data modifications to the cluster volume. Each Aurora DB cluster has one primary instance.
  • Aurora Replica – Supports only read operations. Each DB cluster can have up to 15 Aurora Replicas in addition to the primary instance, which supports both read and write workloads. Multiple Aurora Replicas distribute the read workload, and by locating Aurora Replicas in separate AZs you can also increase database availability.

Aurora replicas are very fast. In terms of read scaling, Aurora supports up to 15 replicas with minimal impact on the performance of write operations, while MySQL supports up to 5 replicas with a noted impact on write performance. Aurora automatically uses its replicas as failover targets with no data loss, while MySQL replicas must be failed over manually, with potential data loss.

In terms of storage scalability, I asked AWS some questions about how smoothly Aurora will grant additional storage if an unusually large amount of it is being consumed, since they’ve stated it will increment 10GB at a time up to a total of 64TB. I wanted to know where the autoscaling threshold sits and whether it is possible to push data in faster than space can be allocated. According to the response I received from an AWS representative, Aurora begins with an 80GB volume assigned to the instance and allocates 10GB blocks for autoscaling when needed. The instance has a threshold to maintain at least an eighth of the 80GB volume as available space (this is subject to change); an eighth of 80GB is 10GB, so whenever the volume reaches 10GB of free space or less, it is automatically grown by another 10GB block. This should provide a seamless experience to Aurora customers, since it is unlikely you could add data faster than the system can increment the volume. Also, AWS only charges you for the space you’re actually using, so you don’t need to worry about provisioning additional space.

Aurora also uses write quorums to reduce jitter by sending out 6 writes, and only waiting for 4 to come back. This helps to isolate outliers while remaining unaffected by them, and it keeps your database at a low and consistent latency.
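
As an illustration of the general idea, here is a rough 4-of-6 quorum sketch in Java. This is not Aurora’s actual implementation, just the pattern the paragraph above describes: issue all six writes, return as soon as four acknowledge, and never block on the stragglers.

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;

public class QuorumWriter {
    // Submit all six replica writes and return once four have acknowledged.
    public static boolean quorumWrite(ExecutorService pool,
                                      List<Callable<Boolean>> sixWrites)
            throws InterruptedException {
        CompletionService<Boolean> completions = new ExecutorCompletionService<>(pool);
        for (Callable<Boolean> write : sixWrites) {
            completions.submit(write);
        }
        int acks = 0;
        for (int i = 0; i < sixWrites.size(); i++) {
            try {
                if (completions.take().get()) {
                    acks++; // one replica acknowledged this write
                }
            } catch (ExecutionException e) {
                // A slow or failed replica is an outlier; keep waiting on the rest.
            }
            if (acks >= 4) {
                return true; // quorum reached; don't wait for stragglers
            }
        }
        return false; // fewer than four replicas acknowledged
    }
}

Because the caller returns at the fourth acknowledgment, one or two slow disks add no latency to the write path, which is how the jitter isolation described above works.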

Pricing

For the time being, Aurora is free if you can get into the preview, which is like a closed beta test. Once it goes live, there are a few things to keep in mind when considering the pricing. The cost of each instance is multiplied by the number of AZs it runs in. Storage is $0.10 per GB used per month, and IOs are $0.20 per million requests. Backups to S3 are free up to the current storage being actively used by the database, but historical backups that are not in use are billed at standard S3 rates. Another option is to purchase reserved instances, which can save you money if your database has a steady volume of traffic. If your database has highly fluctuating traffic, then on-demand instances tend to be best, so you only pay for what you need. For full details, please visit their pricing page.

Drawbacks

Currently, the smallest Aurora instances start at db.r3.large and scale up from there. This means that once the service goes live there will be no support for smaller instances like those offered for other RDS databases. Tech startups and other small business owners may want to use these more inexpensive instances for testing purposes. So if you want to test out the new Aurora database for free, you’d better apply for access to the preview going on right now. AWS currently does not offer cross-region replicas for Aurora, so all of the AZs are located in Virginia. On the other hand, that does mean that latency is very low.

Aurora only supports the InnoDB storage engine; tables from other storage engines are automatically converted to InnoDB. Another drawback is that Aurora does not support multiple tablespaces, only one global tablespace. This means features such as the compressed or dynamic row formats are unavailable, and it affects data migration. For more info about migrations, please visit the AWS documentation page.

Temporary Drawbacks

During the preview, Aurora is only available in the AWS North Virginia data center. The AWS console is the only means of accessing Aurora during the preview, but other methods such as CLI or API access may be added later. Another important thing to note is that during the preview Aurora will not offer support for SSL, but they are planning to add this in the near future (probably before the preview is over). Another feature to be added at a later date is the MySQL 5.6 memcached option, which is a simple key-based cache.

Conclusion

All in all, it sounds amazing, and I for one am very excited to see how it will play out once it moves out of the preview phase. Once it is fully released, we may even do a performance experiment to load test a site that relies on an Aurora DB to see how it holds up. If you’re intrigued enough to try and get into the preview, you can sign up for it here.

The post AWS Aurora – The Game Changer appeared first on LoadStorm.

QA Music: An Old New Song

QA Hates You - Mon, 01/26/2015 - 03:22

Howard Jones “New Song”. Which is thirty-one years old, but never mind that.

Full disclosure: The mime in chains following the rock star around inspired me to get into QA.

Categories: Software Testing

New Relic vs. AppDynamics

Performance Testing with LoadRunner Focus - Mon, 01/26/2015 - 01:36
In recent years, IT projects seem to have stopped asking “which APM solution should we buy”, and have started asking “should we buy New Relic or AppDynamics?” Given the speed at which these two companies are innovating, the many product comparisons available on the web quickly become outdated. This comparison is a little more “high […]
Categories: Load & Perf Testing

Resilient Networking: Planning for Failure

Ilya Grigorik - Sun, 01/25/2015 - 06:00

A 4G user will experience a much better median experience both in terms of bandwidth and latency than a 3G user, but the same 4G user will also fall back to the 3G network for some of the time due to coverage, capacity, or other reasons. Case in point, OpenSignal data shows that an average "4G user" in the US gets LTE service only ~67% of the time. In fact, in some cases the same "4G user" will even find themselves on 2G, or worse, with no service at all.

All connections are slow some of the time. All connections fail some of the time. All users experience these behaviors on their devices regardless of their carrier, geography, or underlying technology — 4G, 3G, or 2G.

You can use the OpenSignal Android app to track your own stats for 4G/3G/2G time, plus many other metrics. Why does this matter?

Networks are not reliable, latency is not zero, and bandwidth is not infinite. Most applications ignore these simple truths and design for the best-case scenario, which leads to broken experiences whenever the network deviates from its optimal case. We treat these cases as exceptions but in reality they are the norm.

  • All 4G users are 3G users some of the time.
  • All 3G users are 2G users some of the time.
  • All 2G users are offline some of the time.

Building a product for a market dominated by 2G vs. 3G vs. 4G users might require an entirely different architecture and set of features. However, a 3G user is also a 2G user some of the time; a 4G user is both a 3G and a 2G user some of the time; all users are offline some of the time. A successful application is one that is resilient to fluctuations in network availability and performance: it can take advantage of the peak performance, but it plans for and continues to work when conditions degrade.

So what do we do?

Failing to plan for variability in network performance is planning to fail. Instead, we need to accept this condition as a normal operational case and design our applications accordingly. A simple but effective strategy is to adopt a "Chaos Monkey approach" within our development cycle (a minimal sketch follows the list below):

  • Define an acceptable SLA for each network request
    • Interactive requests should respect perceptual time constants.
    • Background requests can take longer but should not be unbounded.
  • Make failure the norm, instead of an exception
    • Force offline mode for some periods of time.
    • Force some fraction of requests to exceed the defined SLA.
    • Deal with SLA failures instead of ignoring them.
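
Here is a minimal sketch of what such a wrapper might look like in Java. The 10% failure rate, the 500 ms budget, and the fetchWithChaos name are all illustrative assumptions, not a prescribed API:

import java.util.Random;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ChaosClient {
    private static final Random RANDOM = new Random();
    private static final double FORCED_FAILURE_RATE = 0.10; // fail 10% of requests outright
    private static final long SLA_MILLIS = 500;             // budget for an interactive request

    // Wraps a network request so that failures and SLA violations become
    // the norm during development, forcing callers to handle both paths.
    public static String fetchWithChaos(Callable<String> request) throws Exception {
        if (RANDOM.nextDouble() < FORCED_FAILURE_RATE) {
            throw new TimeoutException("chaos: simulated offline/failed request");
        }
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            // Enforce the SLA: a request that overruns its budget surfaces
            // as a TimeoutException instead of being silently tolerated.
            return executor.submit(request).get(SLA_MILLIS, TimeUnit.MILLISECONDS);
        } finally {
            executor.shutdownNow();
        }
    }
}

Every caller of fetchWithChaos() now needs a working answer for the failure path, which is exactly the discipline the list above is meant to build.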

Degraded network performance and offline are the norm, not the exception. You can't bolt on an offline mode, or add a "degraded network experience" after the fact, just as you can't add performance or security as an afterthought. To succeed, we need to design our applications with these constraints in mind from the beginning.

Tooling and APIs

Are you using a network proxy to emulate a slow network? That's a start, but it doesn't capture the real experience of your average user: a 4G user is fast most of the time and slow or offline some of the time. We need better tools that can emulate and force these behaviors when we develop our applications. Testing against localhost, where latency is zero and bandwidth is infinite, is a recipe for failure.

We need APIs and frameworks that can facilitate and guide us to make the right design choices to account for variability in network performance. For the web, ServiceWorker is going to be a critical piece: it enables offline, and it allows full control over the request lifecycle, such as controlling SLAs, background updates, and more.

Very Short Blog Posts (22): “That wouldn’t be practical”

DevelopSense - Michael Bolton - Sat, 01/24/2015 - 16:46
I have this conversation rather often. A test manager asks, “We’ve got a development project coming up that is expected to take six months. How do I provide an estimate for how long it will take to test it?” My answer would be “Six months.” Testing begins as soon as someone has an idea for […]
Categories: Software Testing

When Version Release & Marketing Campaigns Collide (Performance Testing Case Study)

LoadImpact - Fri, 01/23/2015 - 09:15

Brille24 is Germany’s leading online optician, with customers in more than 115 countries around the world. They offer a wide range of glasses that can be purchased via their e-commerce website. Brille24 used Load Impact to verify that their new e-commerce platform could handle their existing traffic level of 2,500 concurrent users. Testing helped them identify and fix performance-related problems ... Read more

The post When Version Release & Marketing Campaigns Collide (Performance Testing Case Study) appeared first on Load Impact Blog.

Categories: Load & Perf Testing

Can load testing help stop hackers?

LoadStorm - Thu, 01/22/2015 - 09:51

In light of the recent Sony hack, security should be on every web developer’s mind. This cyber attack, which Sony’s CEO called “the worst cyber attack in U.S. history,” is a perfect example of why security is something we all need to take seriously. An enormous amount of personal and financial information was exposed for millions of customers.

As we grow increasingly aware of these occurrences, we as developers need to go forward with the mindset that people will be trying to access our data. As the internet and technology permeate physical stores, our information is becoming even more vulnerable to criminals who use online hacking methods against organizations.

What you can do.

There are numerous ways you can be proactive in protecting your website. One practice that is often overlooked is a combination of penetration and stress testing. Stress testing is the practice of determining how well a website functions under deliberately adverse conditions. Penetration testing is actively trying to break down security measures and access forbidden information. Typical actions in this type of testing may include:

  • Running several resource-intensive processes (consuming CPU, memory, disk, network) on the web and database servers at the same time
  • Attempting to access and hack into the system and use it as a zombie to spread spam
  • Doubling the baseline number of concurrent attempts to access a single website as a regular user (a minimal sketch of this follows the list)
  • Attempting to gain cross-account access to gain information when logged into a system
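
As a sketch of the “double the baseline” item, here is a bare-bones concurrency probe in Java. The URL, the baseline of 50 concurrent users, and the class name are placeholders; a real test would use a proper load testing tool with realistic user scripts:

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrencyProbe {
    public static void main(String[] args) throws Exception {
        int baselineConcurrentUsers = 50;                  // your measured baseline
        int testConcurrency = baselineConcurrentUsers * 2; // double the baseline
        ExecutorService pool = Executors.newFixedThreadPool(testConcurrency);
        for (int i = 0; i < testConcurrency; i++) {
            pool.submit(() -> {
                // Each task requests the same page, as a regular user would.
                HttpURLConnection conn = (HttpURLConnection)
                        new URL("https://www.example.com/").openConnection();
                System.out.println("HTTP " + conn.getResponseCode());
                conn.disconnect();
                return null;
            });
        }
        pool.shutdown();
        pool.awaitTermination(60, TimeUnit.SECONDS); // let in-flight requests finish
    }
}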

When you evaluate and benchmark your system in this way, you can observe how it reacts and recovers. As the array of different protocols and applications grows increasingly complex, malicious attacks can quickly bring down a site or exploit a lack of security.

Security breaches are camouflaged.

A previous Sony data breach that jeopardized 77 million users was actually disguised as a DDoS attack, an attack characterized by overwhelming amounts of traffic. According to a recent study from RSA Security and the Ponemon Institute, 64 percent of IT professionals in the retail sector have seen an increase in attacks and fraud attempts during abnormally high traffic periods. By testing your site’s response to a simulated attack, this type of security gap can be reduced, which is a proactive step towards protecting your site.

LoadStorm can be used as a tool to determine your site’s breaking point, and possibly, your site’s performance at its most vulnerable point. By simulating an attack in conjunction with a load test, you can now evaluate how network and security devices perform under stress, and isolate and repair flaws. After determining the weak points of your site, get to work implementing a more secure infrastructure. The idea is to close the gap between the attack and the response to the attack.

Readiness is Key.

Many companies make the mistake of launching before they are truly ready. It’s easy to get caught up in launch deadlines or the pressure to conserve time and resources that could be spent on testing. However, with the diverse competition that exists today, many customers will only give you one shot. If your site or your customer’s data has been compromised, don’t be surprised if they leave and do not return. It takes a lot of work to build trust with new users. Don’t lose the value of your hard work on a vulnerable system. As millions of transactions take place on the internet every day, it’s up to us to make sure that our systems are prepared for an attack; that security provisions like network firewalls, flood controls, intrusion detection and prevention, and application firewalls have all been tested thoroughly with realistic simulated traffic.

It’s up to us to ensure that our sites are ready for high traffic and that our data is secure.

The post Can load testing help stop hackers? appeared first on LoadStorm.

Key Performance Metrics For Load Tests Beyond Response Time - Part I

Whether it is JMeter, SoapUI, LoadRunner, SilkTest, Neotys or one of the cloud-based load testing solutions such as Keynote, Dynatrace (formerly Gomez) or others, breaking an application under heavy load is easy these days. Finding the problem based on automatically generated load testing reports is not. Can you tell me what is wrong based […]

The post Key Performance Metrics For Load Tests Beyond Response Time - Part I appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

Your Web Site’s Undergarments Are Showing

QA Hates You - Thu, 01/22/2015 - 04:29

Keep in mind that your invisible meta tags are displayed when a user shares the link on various social media platforms:

In this case, I think we can agree it’s a QA fail.

Categories: Software Testing

These 7 things are slowing down your site’s user experience (and you have no control over them)

Web Performance Today - Wed, 01/21/2015 - 13:13

As a website owner, you have 100% control over your site, plus a hefty amount of control over the first and middle mile of the network your pages travel over. You can (and you should) optimize the heck out of your pages, invest in a killer back end, and deploy the best content delivery network that money can buy. These tactics put you in charge of several performance areas, which is great.

But when it comes to the last mile — or more specifically, the last few feet — matters are no longer in your hands.

Today, let’s review a handful of performance-leaching culprits that are outside your control — and which can add precious seconds to your load times.

1. End user connection speed

If you live in an urban center, then congratulations — you probably enjoy connection speeds of 20-30 Mbps or more. If you don’t live in an urban center, then you have my sympathy.

As someone who lives some distance from a major city, I can tell you that 1-3 Mbps is a (sadly) common occurrence in my house, especially during the internet rush hour when everyone in my neighborhood gets home and starts streaming movies and doing whatever else it is they do online. There’ve been many, many times when I’ve gotten incredibly frustrated waiting for a file to download or stream, and I’ve performed many, many speed tests with sub-3 Mbps results.

Oh look! Here’s one right now: [speed test screenshot showing a sub-3 Mbps result]

I’ve gotten into long, heated debates with people (always city dwellers) who flat-out refuse to believe that THAT many people are really affected by poor connection speeds. Their arguments seem to be largely fueled by the fact that they’re operating with a limited view of data, which corroborates their own personal experience.

In all my research into US connection speeds, I’ve found that the numbers vary hugely depending on the source. At the highest end is OOKLA’s “household download index”, which states that a typical US household connection is 30 Mbps. My issue with OOKLA’s data is that I can’t find specifics on how they gather it, or whether it’s based on average speeds or peak speeds. I suspect the latter, based on comparing their data to Akamai’s “state of the internet” reports, which give numbers for both average and peak Mbps. OOKLA’s index loosely correlates to Akamai’s peak broadband numbers. According to Akamai, average broadband connection speeds run approximately 12-17 Mbps for the ten fastest states. The slowest states perform significantly worse, with Alaska trailing at 7.2 Mbps.

Another issue: when most sources report on connection speed, they tend to focus on broadband, which ignores a massive swath of the population. (Broadband is, by definition, an internet connection above 4 Mbps, regardless of the connection type being used.) According to Akamai, in many states broadband adoption is only around 50%. And in roughly one out of ten states, broadband adoption rates are actually declining.

Takeaway: Non-broadband users aren’t freakish outliers whom you should feel comfortable ignoring.

And that’s just desktop performance. Don’t even get me started on mobile.

2. What the neighbours are downloading

If, like me, you’re already at the low end of the connection speed spectrum, when your neighbours start streaming a movie or downloading a massive file, you REALLY feel it. I’ve been known to schedule my download/upload activities around the release of new episodes of Game of Thrones and Breaking Bad. Sad or savvy? Maybe a little bit of both.

3. How old their modem is

I’ve yet to encounter an ISP that proactively reminds customers to upgrade their hardware. Most people use the same modem for five to ten years, which guarantees they’re not experiencing optimal load times — even if they’re paying for high-speed internet.

Here’s why:

If your modem is five or more years old, then it pre-dates DOCSIS 3.0. (DOCSIS stands for Data Over Cable Service Interface Specification. It’s the international data transmission standard used by cable networks.) Back around 2010-2011, most cable companies made the switch from DOCSIS 2.0 to DOCSIS 3.0. If you care about performance, you should be using a DOCSIS 3.0 modem. Otherwise, you’ll never be able to fully leverage your high-speed plan (if you have one).

The funny (but not ha-ha funny) thing here is the fact that so many people have jumped on the high-speed bandwagon and are currently paying their ISPs for so-called high-speed internet. But without the right modem they’re not getting the full service they’ve paid for.

4. How old their device is

The average person replaces their machine every 4.5 years. In those 4.5 years, performance can seriously degrade — due to viruses or, most commonly, simply running low on memory. And while many people suspect that upgrading their memory would help, most folks consider themselves too unsavvy to actually do it. Instead, they suffer with poor performance until it gets so bad they finally replace their machine.

5. How old their browser is

You upgrade your browser religiously. Many people do not or cannot. (To illustrate: When Wine.com first became a Radware customer, one of their performance issues was that a significant portion of their customer base worked in the financial sector, which at the time was obligated to use IE 6 because it was the only browser compatible with specific legacy apps. These customers tended to visit the site from work, which inevitably led to a poor user experience until Wine.com was able to implement FastView.)

Each generation of browser offers performance gains over the previous generation. For example, according to Microsoft, IE 10 is 20% faster than IE 9. But according to StatCounter, roughly half of Internet Explorer users use IE 8 or 9, missing out on IE 10’s performance boost.

6. How they’re using (and abusing) their browser

Browser age is just one issue. There are a number of other scenarios that can affect browser performance:

  • Stress from having multiple tabs or windows open simultaneously.
  • Performance impact from toolbar add-ons. (Not all widgets affect performance, but some definitely do, particularly security plug-ins.)
  • Degradation over time. (The longer the browser remains open, the greater its likelihood of slowing down and crashing.)
  • Variable performance when visiting sites that use HTML5 or Flash, or when watching videos. (Some browsers handle some content types better than others.)

Most people are guilty of keeping the same browser window open, racking up a dozen or so tabs, and living with the slow performance degradation that ensues — until the browser finally crashes. According to this Lifehacker post, you should never have more than nine browser tabs open at the same time. Raise your hand if you have more than nine tabs open right now. *raises hand*

7. Running other applications

Running too many applications at the same time affects performance. But most non-techie internet users don’t know this.

Security solutions are a good example of this. Antivirus software scans incoming files to identify and eliminate viruses and other malware such as adware, spyware, trojan horses, etc. It does this by analyzing all files coming through your browser. Like firewalls and other security products, antivirus software operates in real time, meaning that files are paused for inspection before being permitted to download.

Because of this inspection, a performance penalty is inevitable and unavoidable. The extent of the penalty depends on the software and on the composition of the page/resource being rendered in the browser.

How do all these factors add up, performance-wise?

Consider this scenario:

Someone visits your site using a four-year-old PC running IE9 with a handful of nifty toolbar add-ons, including a parental control plug-in (this is a shared computer) and a couple of security plug-ins, all of which are significantly hurting performance. They think they have high-speed internet because they’re paying their service provider for it, but they’re using a six-year-old modem. They tend to leave all their applications running, for easy access. They only restart their machine when it’s running so slowly that it becomes intolerable — or when it crashes. They keep the same browser window open for days or weeks on end, and they currently have eleven tabs open. They’re concerned about internet security, so they’re also running antivirus software.

All of these factors can add up — not just to milliseconds, but to seconds of extra user time. Unfortunately, when people are unhappy with how fast a page is loading, the number one target for their blame is the site. Fair? No. It’s also not fair that these lost seconds result in lost revenue for your company.

Takeaway

As I said at the top of this post, you have no control over potential problems caused by any of these factors. But that doesn’t mean you shouldn’t arm yourself with the knowledge that problems are occurring.

This is also why it’s crucial to not rely on your own experience with your site. It’s also why you can’t rely on synthetic tests to give you a true sense of how your pages perform. (Don’t get me wrong: synthetic tests definitely have value, but they tend to skew toward the optimistic end of the spectrum.) You need real user monitoring that gives you full visibility into how actual people are experiencing your site.

The post These 7 things are slowing down your site’s user experience (and you have no control over them) appeared first on Web Performance Today.

Waiting for One of Two Things To Appear on a Page in Java Selenium

QA Hates You - Tue, 01/20/2015 - 03:26

I’ve been working on a Java Selenium test automation project, teaching myself Java and Selenium WebDriver along the way.

I ran into a problem that I didn’t see solved on the Internet elsewhere, but my skill at searching StackOverflow is not that good.

The problem is conducting an operation that might yield one of two results, such as success or an error message. Say you’ve got a registration form or something that will succeed if you put the right data in it or fail if you put the wrong data in it. If the user successfully registers, a welcome page displays. If the user fails, an error message displays. And, given this is the Web, it might take some time for one or the other to display. And the implicit waits didn’t look like they’d handle branching logic.

So here’s what I did:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// Logger here is whatever logging class the rest of the project uses.
public int waitForOne(WebDriver driver, Logger log, String lookFor1, String lookFor2) {
    // Each until() call waits at most 1 second, so 60 passes over two
    // elements gives a worst case of roughly 120 seconds.
    WebDriverWait wait = new WebDriverWait(driver, 1);
    for (int iterator = 0; iterator < 60; iterator++) {
        try {
            wait.until(ExpectedConditions.elementToBeClickable(By.id(lookFor1)));
            return 1;
        } catch (Exception e) {
            // lookFor1 isn't clickable yet; fall through and check lookFor2.
        }
        try {
            wait.until(ExpectedConditions.elementToBeClickable(By.id(lookFor2)));
            return 2;
        } catch (Exception e) {
            // lookFor2 isn't clickable yet; loop around and try again.
        }
    }
    return 0; // neither element appeared before the time limit
}

You could even wait for one of a longer list of elements to appear by passing in an array of strings and using a for-each loop to run through the list, as in the sketch below.
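
Here’s a rough sketch of that variant, written to sit alongside waitForOne; the name waitForAny and the varargs signature are my choices, not from the original:

public int waitForAny(WebDriver driver, Logger log, String... idsToLookFor) {
    WebDriverWait wait = new WebDriverWait(driver, 1);
    for (int attempt = 0; attempt < 60; attempt++) {
        for (int i = 0; i < idsToLookFor.length; i++) {
            try {
                wait.until(ExpectedConditions.elementToBeClickable(By.id(idsToLookFor[i])));
                return i + 1; // 1-based, matching waitForOne's convention
            } catch (Exception e) {
                // This element isn't clickable yet; try the next candidate.
            }
        }
    }
    return 0; // nothing appeared within the time limit
}

Keep in mind that the worst-case wait grows with the number of elements you pass in, which is the timing concern discussed below.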

This sample looks for a Web element by its ID, but you could change it to use another By parameter, such as CSS selector (cssSelector). Or, if you're feeling dangerous, you could pass in the By parameter as a string and parse it in the method to determine whether to use ID, CSS selector, or a mix thereof. But that's outside the simplicity of this example.

Also note that the for loop limits the waiting to a total of sixty iterations, which in this case maxes out at 120 seconds (1 second per attempt for 2 items, a maximum of 60 times). You could pass the max in as a parameter when calling this method if you want. That's especially important if you're using a list of possible elements to look for: if you're passing in five elements, suddenly you're at a maximum time of five minutes if it times out completely. You might not want your tests to wait that long, especially if you're using the check multiple times per test.

I'm sure there are more elegant solutions for this. Let's hear them. Because, frankly, I'm not very good at searching StackOverflow, and I'd prefer if you'd just correct my foolishness here in the comments.

Categories: Software Testing

QA Music: I Am Machine

QA Hates You - Mon, 01/19/2015 - 02:57

It’s been a while since we’ve had some Three Days Grace, so here is “I Am Machine”:

It’s been a while since we’ve had anything, actually. Perhaps I should post something.

Categories: Software Testing

WordPress Bare Metal vs WordPress Docker Performance Comparison

LoadImpact - Fri, 01/16/2015 - 03:17

Performance is paramount in today’s website world. Google has also included website performance as a key criterion in its search ranking algorithm (source). Every decision we make regarding our website affects performance, flexibility, or both. So how can we combine performance and flexibility? Docker! Docker is a new service which allows you to ... Read more

The post WordPress Bare Metal vs WordPress Docker Performance Comparison appeared first on Load Impact Blog.

Categories: Load & Perf Testing

4 Actions to help Marketing and support your Super Bowl ad performance

While your marketing team/agency was figuring out how many puppies, bikini-clad women or hashtags to incorporate into your upcoming Super Bowl ad/commercial, I sure hope someone has been planning for the increased attention your homepage or landing pages can expect to get. There is a great chance marketing hasn’t thought this through, especially this year […]

The post 4 Actions to help Marketing and support your Super Bowl ad performance appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing
