Feed aggregator

Big Game 2016 Live Blog Coverage

Live blog coverage starts at 12pm on Sunday, February 7th. Make sure to check out Prepping for The Big Game 2016.

The post Big Game 2016 Live Blog Coverage appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

Couple Tests With The Product They Test – Part 2

Eric Jacobson's Software Testing Blog - Fri, 02/05/2016 - 12:30

I had a second scenario this week that gave me pause before leading me back to the same practice: couple tests with the product they test.

ProductA is developed and maintained by ScrumTeamA, who writes automated checks for all User Stories and runs the checks in a CI.  ProductB is developed and maintained by ScrumTeamB.

ScrumTeamB developed UserStoryB, which required new code for both ProductA and ProductB.  ScrumTeamB merged the new product code into ProductA…but did NOT merge new test code to ProductA.  Now we have a problem.  Do you see it?

When ProductA deploys, how can we be sure the dependencies for UserStoryB are included?  All new product code for ProductA should probably be accompanied by new test code, regardless of which Scrum Team makes the change.

The same practice might be suggested in environments without automation.  In other words, ScrumTeamB should probably provide manual test scripts, checklists, or test fragments, or do a knowledge transfer, so that the manual testers responsible for ProductA (i.e., ScrumTeamA) can perform the testing for UserStoryB prior to ProductA deployments.
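
For illustration only, here is a hypothetical guard check that ScrumTeamB could merge into ProductA's suite alongside the UserStoryB product code, so ProductA's CI fails fast if the UserStoryB merge is incomplete. The class name and the JUnit 5 usage are illustrative assumptions, not details from the actual teams' stack:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;

    class UserStoryBDependencyCheck {

        @Test
        void userStoryBCodeIsPresentInProductA() {
            // Fails ProductA's CI if the UserStoryB code was never merged in.
            // The class name is an illustrative placeholder.
            assertDoesNotThrow(() ->
                    Class.forName("com.example.userstoryb.FeatureGateway"));
        }
    }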

Categories: Software Testing

Couple Tests With The Product They Test

Eric Jacobson's Software Testing Blog - Fri, 02/05/2016 - 07:38

…It seems obvious until you deal with integration tests and products with no automation.  I got tripped up by this example:

ProductA calls ProductB’s service, ServiceB.  Both products are owned by the same dev shop.  ServiceB keeps breaking in production, disrupting ProductA. ProductA has automated checks.  ProductB does NOT have automated checks.  Automated checks for ServiceB might help. Where would the automated checks for ServiceB live?

It’s tempting to say ProductA, because ProductA has an automation framework with its automated checks running in a Continuous Integration pipeline on merge-to-dev.  It would be much quicker to add said automated checks to ProductA than to ProductB.  However, said checks wouldn’t help because they would run in ProductA’s CI; ProductB could still deploy to production with a broken ServiceB.

My lesson learned: Despite the ease of adding a check to ProductA’s CI, the check needs to be coupled with ProductB. 
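
To make that concrete, here is a minimal sketch of what an automated check for ServiceB might look like once it lives in ProductB's repo and CI. The endpoint URL, the JUnit 5 framework, and Java 11's HttpClient are assumptions for illustration, not the actual stack:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class ServiceBSmokeCheck {

        @Test
        void serviceBAnswersItsContract() throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("https://productb.example.internal/serviceB/health"))
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            // Running in ProductB's CI, a failure here blocks ProductB's deploy,
            // so ProductA never meets a broken ServiceB in production.
            assertEquals(200, response.statusCode());
        }
    }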

In my case, until we invest in test automation for ProductB, said check(s) for ServiceB will be checks performed by humans.

Categories: Software Testing

Item 1 On The List: I Can’t Finish The List

QA Hates You - Fri, 02/05/2016 - 06:20

When you log into Slack, it provides you an inspirational message. How positive of the program. This particular item always gets me:

The first item on the list is that I couldn’t complete the list in under 24 hours.

Then we get into the physically impossible.

What, this is a rhetorical question? Then why ask it?

Categories: Software Testing

Prepping for The Big Game 2016

It’s only a couple of days away, and the team here at Dynatrace has been working hard this week prepping for The Big Game.  We will be holding a Dynatrace “Performance Bowl” to monitor a variety of websites all touched in some way by the wrap-up of this season’s final NFL game.  Given […]

The post Prepping for The Big Game 2016 appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

Digital Performance Management: A Note from the Front

Some evolutionary changes are actually revolutions. This is the case for DC RUM’s new Universal Decode in the world of Digital Performance Management. It is not simply a new option for decoding a packet stream, part of a set of new features, but a real innovation!

Digital performance management innovators fond of DC RUM

My […]

The post Digital Performance Management: A Note from the Front appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

QA Music – Hello

QA Hates You - Mon, 02/01/2016 - 03:27
Categories: Software Testing

A Context-Driven Approach to Automation in Testing

DevelopSense - Michael Bolton - Sat, 01/30/2016 - 22:28
(We interrupt the previously-scheduled—and long—series on oracles for a public service announcement.) Over the last year James Bach and I have been refining our ideas about the relationships between testing and tools in Rapid Software Testing. The result is this paper. It’s not a short piece, because it’s not a light subject. Here’s the abstract: […]
Categories: Software Testing

What 2016 Presidential Candidates Can Learn about Site Performance from Online Retailers

Can Online Retailers Teach 2016 Presidential Candidates About Site Performance? If you have turned on any news network or visited any news site, you know that the 2016 Presidential Election is in full swing. We’ve been tracking the various candidate sites for a while now and are seeing some interesting trends. While the past few […]

The post What 2016 Presidential Candidates Can Learn about Site Performance from Online Retailers appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

Default Garbage Collection settings for JVMs can cost you!

If you are following this blog you will have come across multiple posts that point out the correlation between Garbage Collection (GC) and Java Performance. Moreover, there are numerous guides on How Garbage Collection Works or Configure Garbage Collection policy on your Java Virtual Machine (JVM). Having read the above posts you can quickly come […]

The post Default Garbage Collection settings for JVMs can cost you! appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

Automated Checks Should Be Mutually Exclusive

Eric Jacobson's Software Testing Blog - Wed, 01/27/2016 - 08:54
While helping some testers, new to automation, I found myself in the unexpected position of trying to sell them on the idea that all test methods should be mutually exclusive.  Meaning, no automated check should depend on any other automated check…automated checks can run in any order…you can run them all, in any order, or you can run just one.
If I could take one test automation rule to my grave, this would be it.  I had forgotten that it was optional.
I know, I know, it seems so tempting to break this rule at first: TestA puts the product-under-test in the perfect state for TestB.  Please don’t fall into this trap.
Here are some reasons I can think of to keep your tests mutually exclusive (a minimal sketch of what this looks like follows the list):
  • The Domino Effect – If TestB depends on TestA, and TestA fails, there is a good chance TestB will fail too, but not because the functionality TestB is checking failed.  And so on down the chain.
  • Making a Check Mix – Once you have a good number of automated checks, you’ll want the freedom to break them into various suites.  You may want a smoke test suite, a regression test suite, a root check for a performance test, or other test missions that require only a handful of checks...dependencies will not allow this.
  • Authoring – While coding an automated check (a new check or updating a check), you will want to execute that check over and over, without having to execute the whole suite.
  • Easily Readable – When you review your automation coverage with your development team or stakeholders, you’ll want readable test methods.  That usually means each test method’s setup is clear.  Everything needed to understand that test method is contained within the scope of the test method.
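
Here is a minimal sketch of mutually exclusive checks in practice: each test method builds its own state, so the suite can run whole, partial, or in any order. The cart domain is an illustrative stand-in, not any particular product under test:

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    class CartChecks {

        // Shared setup lives in a helper method, not in another test.
        private List<String> newCartWithItem(String item) {
            List<String> cart = new ArrayList<>();
            cart.add(item);
            return cart;
        }

        @Test
        void addingAnItemGrowsTheCart() {
            List<String> cart = newCartWithItem("widget"); // own setup
            assertEquals(1, cart.size());
        }

        @Test
        void removingTheOnlyItemEmptiesTheCart() {
            // Repeats the setup rather than depending on the test above.
            List<String> cart = newCartWithItem("widget");
            cart.remove("widget");
            assertTrue(cart.isEmpty());
        }
    }

Notice the second test re-creates the cart instead of reusing state from the first; that small redundancy is what buys you the domino-free, mix-and-match, author-one-check-at-a-time freedoms above.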
Categories: Software Testing

That Could Have Afforded A Couple Testers

QA Hates You - Wed, 01/27/2016 - 08:36

How to lose $172,222 a second for 45 minutes:

The tale has all the hallmarks of technical debt in a huge, unmaintained, bitrotten codebase (the bug itself due to code that hadn’t been used for 8 years), and a really poor, undisciplined devops story.

I’d always sworn I’d never work for a health devices or financial services company because the risks were so great.

Well, so far, I’m keeping half of that pledge.

Categories: Software Testing

Monitoring SLAs in an On-Demand World

Perf Planet - Tue, 01/26/2016 - 12:46

If you were listening in on our Opscast webinar with Honeywell a few weeks ago you would have learned about an interesting use case of the Catchpoint OnPrem Agent. Honeywell was experiencing painfully slow satellite Internet service on its entire fleet of corporate jets, jets that include a number of Honeywell components, including the satellite communications routers that provide on-board Internet service from satellite signals.

Honeywell installed Catchpoint OnPrem Agents on its fleet of planes, its sat-com ground station and even on non-Honeywell planes that used a different satellite Internet service. The OnPrem Agents monitored network service levels from all of these measurement points and within two weeks from its first contact with Catchpoint, Honeywell got to the bottom of its crawling in-flight Internet service. The satellite spotbeam service was over-saturated at peak travel times. In this case, Honeywell’s satellite Internet service provider was indeed providing the contracted level of service, but that service wasn’t sufficient to meet Honeywell’s inflight Internet needs during peak travel times. Honeywell needed to upgrade its service.

This isn’t the textbook example of how Catchpoint’s technology helps you to monitor your SLAs, but it’s nonetheless instructive. Here the contracted service level was fine, but it turned out to not be enough to meet peak user demand. A more typical Catchpoint SLA monitoring use case has us discovering that the service provider did not provide the contracted service level. Customers have reported that Catchpoint paid for itself in as little as three months after finding SLA breaches that resulted in refunds or credits from their service providers.

In today’s world of distributed, on-demand computing, relying on third parties has never been more critical to the customer experience you’re delivering. Content delivery networks, external DNS providers, ad-serving networks, software-as-a-service and infrastructure-as-a-service providers can all be a part of the final product your customers see. All add new capabilities, but also vulnerabilities that have to be managed. Holding these service providers to the service delivery commitments you’re paying them for is a crucial part of any digital performance strategy.

You need a monitoring service that can pinpoint failures and slowdowns wherever they’re occurring, simultaneously discovering where a problem exists and eliminating other causes of failures. Most of all you need to find this out fast so you can get back to running your business. Whether on the ground or in the air, Catchpoint can help you break through the noise and solve your most challenging digital performance problems.

The post Monitoring SLAs in an On-Demand World appeared first on Catchpoint's Blog.

YOTTAA LAUNCHES NEW INDEX TO HELP ONLINE BUSINESSES BENCHMARK WEB PERFORMANCE AND USER EXPERIENCE

Perf Planet - Tue, 01/26/2016 - 07:17

CEXi Enables Online Businesses to Quantify the User Experience of Their Web Applications

Waltham, MA—January 26, 2016— Yottaa, the leading cloud platform for optimizing web and mobile applications, today announced that it has launched the Customer Experience Index (CEXi), a new index designed to help online businesses benchmark web performance and user experience.

Yottaa has previously helped businesses quantify the business impact of their CDN services with its ValidateIT methodology. With the CEXi, Yottaa is providing a framework for businesses to quantify their desktop and mobile user experiences to justify a web acceleration project. Prominent industry thought leadership from eCommerce giants including Amazon and Walmart suggests that web performance leads to increased conversion rates; in the past year mobile usage has increased significantly, and in 2015 it exceeded desktop usage for holiday shopping at many e-retailers. Amid claims that the web is getting slower, many businesses struggle to understand the impact that web performance has on the quality of experience they provide to their users, and the impact that experience has on business execution.

The CEXi is a compound index that quantifies the desktop and mobile user experience of a web application far more accurately than any single metric. The CEXi combines objective, measurable aspects of performance, such as page load speed from both mobile and desktop simulations, with specific weighting calculations to provide online businesses with an overall performance score that simulates the actual feeling, positive or negative, of using their web applications for a modern, cross-device user.

The Customer Experience Index focuses on three aspects of web performance: speed, parity of experience (across devices), and how well a site is optimized in relation to its complexity.

  • Speed – For the CEXi, key performance metrics, such as Time to Start Render (TTSR) and Time to Display (TTD), are collected, blended, and weighted 50/50, for both mobile and desktop users. The mobile user simulation uses a 3G connection and an iPhone 5 with the Safari browser, while the desktop user simulation uses a standard broadband connection and the latest version of Chrome. Yottaa then takes the two resulting figures and weights them 60/40 – 60 percent mobile, 40 percent desktop – to match current trends in web device usage in the U.S. (A worked sketch of this blend follows the list.)
  • Device Parity – The CEXi compares the complexity of a desktop site for an online business versus its corresponding mobile site. Ideally, a site should be built on mobile-first principles, leading to a design that is equivalent for all users in functionality, and also performs well for all users.
  • Performance Power – While an eCommerce site should not be penalized for choosing to create a simple site, there are some sites that go above and beyond the call of duty by providing a super-rich experience that is also remarkably fast. To reward these companies, Yottaa developed a “bonus” score that bumps up the existing score by 25-50%, depending on how much power it packs from an optimization perspective. This is done by comparing the page weight and Time to Start Render for mobile (the more difficult of the two platforms to optimize). Conversely, Yottaa gives an equivalent penalty to those sites that should be fast, because they’re relatively lightweight, but are not.
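
The announcement doesn't publish the exact normalization behind these weights, but a worked sketch of the blend, assuming each raw metric has already been normalized to a 0-100 score, looks like this:

    public class CexiSpeedSketch {

        // 50/50 blend of the two speed metrics named above.
        static double blend(double ttsrScore, double ttdScore) {
            return 0.5 * ttsrScore + 0.5 * ttdScore;
        }

        public static void main(String[] args) {
            double mobile  = blend(62, 58);  // hypothetical mobile TTSR/TTD scores
            double desktop = blend(81, 77);  // hypothetical desktop TTSR/TTD scores

            // 60/40 mobile/desktop weighting, per U.S. device-usage trends.
            double speedScore = 0.6 * mobile + 0.4 * desktop;
            System.out.printf("CEXi speed component: %.1f%n", speedScore);
        }
    }

With these made-up inputs the speed component works out to 0.6 × 60 + 0.4 × 79 = 67.6, illustrating how the mobile experience dominates the score.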

“A great web experience has the power to compel users to devote precious time, money, and mental space to a company’s brand,” said Bob Buffone, Co-founder and Chief Technology Officer, Yottaa. “While factors like performance, design, security, and usability each contribute to the result, it’s only when every last aspect comes together to form an experience that you can see the power of the web at work. Yottaa’s CEXi is a powerful tool in that it enables online businesses to gauge how their web applications are performing by approximating the experience of the people that matter most, their end users.”

Click here to learn more about the CEXi.

About Yottaa
Yottaa is the leading cloud platform for optimizing web and mobile applications. Through Yottaa’s patented ContextIntelligence™ architecture, enterprises can manage, accelerate, and secure end user experiences on all devices in real-time with zero code change. Top Internet 500 businesses have adopted Yottaa’s platform to realize billions in incremental revenue through dramatic improvements across key performance and business metrics. To learn more about how Yottaa can maximize your users’ experience, please visit www.yottaa.com or follow @yottaa on Twitter.

Media Contact:

Tedd Rodman
617-239-9024
t.rodman@yottaa.com

 

###

The post YOTTAA LAUNCHES NEW INDEX TO HELP ONLINE BUSINESSES BENCHMARK WEB PERFORMANCE AND USER EXPERIENCE appeared first on Yottaa.


How to Measure User Experience

Perf Planet - Tue, 01/26/2016 - 06:55
Introducing the CXi’s next evolution.

Last year we experimented with a new way of measuring user experience. The “CXi”, as we called it, combined a few different objective, measurable aspects of performance and, with some weighting, rolled them into a single number. We gave it a test drive by measuring the top 64 eCommerce companies and setting up a “tournament” that compared the performance of different merchant categories.

The equation worked, but we weren’t totally satisfied. For one, there were sites that we felt, based on our own subjective experiences, didn’t score right. This informal “sniff test” only found a few such misses, but it was still more frequent than we were comfortable with.  We also found that aspects of the equation were not easily repeatable, which would have made it an onerous task to replicate at larger scale for future projects.

A new year, a new index

For the reprised version, we went back to the drawing board: same principles, but a slightly different approach. We still use multiple measures of page load speed from both mobile and desktop simulations as the basis of the score, but we’ve adjusted how they’re compiled and weighted. We also introduce a new aspect we’re calling “performance power” to reward or penalize sites.

Most importantly, the equation addresses both previous issues: its rankings match up well with our own subjective experiences (that is, it passed the sniff test) and it’s easily repeatable for any site or population of sites. We even changed the name slightly: CEXi. (Yes, internally we pronounce it exactly how you think we do).

To (re)introduce our index, we have an introductory whitepaper

The introduction to CEXi is complete with results from the initial proof-of-concept study we performed with a sample of 175 well-known retail sites. It includes a detailed breakdown of the ingredients that go into a CEXi score, and some of the data analysis (with charts) we performed to validate the score. Think of it as a teaser for some of the deeper analyses to come soon!

Read more below to find out which retailers scored highest and how metrics like page weight correlate with CEXi score (there may be some surprises).


The post How to Measure User Experience appeared first on Yottaa.

Madness, Math and (half) Marathons – and Medium!

Alan Page - Mon, 01/25/2016 - 14:02

I decided that in the rare occurrences where I post non-software articles, I’d use a blog on Medium and post my attempts at storytelling along with the rest of the world.

My first post is here.

(potentially) related posts:
  1. Settling on Quality?
  2. Writing About Testing
  3. Walls on the Soapbox
Categories: Software Testing

Survey Results for the Tester to Developer Ratio - January 2016

Randy Rice's Software Testing & Quality - Mon, 01/25/2016 - 09:55
For the past month, I have been collecting surveys to see which tester to developer ratios are in use in various organizations. In addition, I asked some other questions about management attitudes toward dealing with testing workloads. I'll publish those results a little later.

This was a small survey of 22 companies worldwide, 19 of which were able to provide accurate information about their tester to developer ratio.
This survey is part of ongoing research I have been conducting since 2000.
Thanks to everyone who has contributed to date.
Before I get into the findings, I want to refer to two articles I have written on this topic. These articles explain why I feel that the data show there is no single ratio that works better than others. Getting the right workload balance is a matter of tuning processes and scope, which includes optimizing testing to get the most efficiency with the resources you have.
You can read these articles at:
The Tester-to-Developer Ratio Revisited
The Elusive Tester to Developer Ratio
The recent findings are:
  1. The range of ratios is much tighter. The range was 1 tester to 1 developer on the richer end of the scale, to 1 tester to 7 developers on the leaner end. I feel that some of this is due to the small sample size.
  2. The majority of responses (16) indicated just three ratios: 1 tester to 1 developer on the low side to 1 tester to 3 developers on the high side.
  3. The most common ratio was 1 tester to 2 developers.
  4. The average was also 1 tester to 2 developers.
  5. People reported poor, workable, and good test effectiveness at all ratios. The variation was wide. There were no noticeable indications that a particular ratio of testers to developers worked any better than another, simply due to the ratio.
This survey showed much richer ratios than any other survey I’ve taken. This could be due to the impact of agile methods. Most of these companies (13) reported they do not anticipate hiring more testers in 2016. I plan to continue this survey to get a more significant sample size.
If you have not contributed to this survey yet, you can still add your responses at: https://www.surveymonkey.com/r/55LVHFZ
Thanks!
Randy
Categories: Software Testing

Preparing for Online Events All Year Long

Perf Planet - Fri, 01/22/2016 - 14:13

Cyber Monday—the Monday after Thanksgiving in the US—may be a relic of a bygone era of slow, dial-up, at-home Internet connections but remains the biggest online shopping day of the year for US e-commerce sites. While Cyber Monday may make or break the year for some online retailers, online events crucial to an e-tailer’s business happen year round.

Valentine’s Day is fast approaching and is huge for flower, gift and jewelry sites. Ditto for Mother’s Day a few months later. But an online event doesn’t have to be tied to a holiday. A free-shipping promotion or a sitewide sale can drive hordes of customers to your site all at once. Sites that fail to handle the extra traffic not only lose business on that day, they may lose future customers as well.

While most online retailers in the US held their own this past holiday season, there were some notable casualties along the way. Neiman Marcus’s site was down most of Black Friday and struggled throughout the weekend. Target.com stumbled badly on Cyber Monday, unable to cope with traffic driven by a free-shipping promotion and site-wide discounts of at least 15%. NewEgg.com also had some site outages throughout the weekend.

So what can retail sites do to withstand the high traffic volumes generated by online events and continue to deliver quality user experiences that convert and retain customers? We asked our own in-house digital performance management evangelists for their suggestions to prevent the site outages we saw this past holiday season. Here are our 7 tips to help your site remain available and deliver fast page load times:

  1. Size Matters. We’ve talked in this space before about the importance of image and HTML compression to make your websites smaller and faster. Median e-commerce site size was 1.86 million bytes this past holiday season; we’d recommend keeping your sites under 1.5 million bytes. Have a lighter-weight version of your site for peak events, or at the very least keep your most visited pages lighter-weight than usual. Reduce dynamic content on your homepages or landing pages; better still, move dynamic content to your CDN. You can also combine CSS and JavaScript files using utilities like Minify, and combine images using CSS image sprites. Both techniques reduce the number of HTTP requests it takes to load your page, a frequent cause of poor performance under heavy web traffic. (A minimal sketch of the file-combining idea follows the list.)
  2. Keep third-party tags in check. Third-party tags, for advertising, social media, affiliate marketing, paid search, site analytics or other purposes can add new digital marketing capabilities to your site but also create new dependencies that can hurt your site’s performance. Use a tag management system to “containerize” your tags and better manage the way they load to reduce page load times.
  3. Test and manage your APIs. This is especially important for mobile sites, which rely heavily on APIs. By integrating with internal and external systems, APIs add necessary functionality to sites but those API calls can also slow down web performance especially when your number of concurrent users goes up.
  4. Mind the back end. If possible, deploy a high-capacity stack of application servers and web servers on peak traffic days. They will take the brunt of the high traffic. You can keep the same database server you normally use. Make sure you continue to test and validate your performance against this stack.
  5. Don’t forget the network: Make sure your network devices such as load balancers, routers, and switches can handle the extra traffic you’re expecting. Along the same lines, eliminate single points of failure in your network: if your whole network goes down when a load balancer array fails, you need a better network design.
  6. Look outside the data center: Make sure your hosting provider, CDN provider, external DNS provider or any third-party you deal with can handle the traffic you’re expecting. Ask the right questions and ensure that the necessary optimizations have been made.
  7. Prepare for failure just the same: Follow steps 1-6 and you hopefully won’t need step 7. But even the best prepared and tested sites can go down at some point under heavy load. Complete all backups and provision additional hardware so that if you do go down, you can quickly get back up. Have your customer communications—a web splash page that tells your customers you’re down, Twitter alerts, a discount code for when you’re back up, etc.—ready to go. No one wants to get to this point, but if your customers can’t access your site or complete transactions, you’ll want to do everything you can to placate them.
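
As promised in tip 1, here is a minimal sketch of the file-combining idea: concatenating several CSS files into one so the page makes a single HTTP request instead of several. A real minifier such as Minify also strips whitespace and comments; the file names here are illustrative:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    public class CombineCss {
        public static void main(String[] args) throws IOException {
            // Illustrative source files; order matters for CSS cascade rules.
            List<Path> sources = List.of(
                    Path.of("reset.css"), Path.of("layout.css"), Path.of("theme.css"));

            StringBuilder combined = new StringBuilder();
            for (Path css : sources) {
                combined.append(Files.readString(css)).append('\n');
            }

            // One combined stylesheet means one HTTP request at page load.
            Files.writeString(Path.of("site.combined.css"), combined.toString());
        }
    }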

Top 5 desktop sites from Thanksgiving to Cyber Monday (Nov. 26-30) based on median webpage response time:

  1. Apple, 1.58s
  2. Amway Global, 1.77s
  3. Costco, 1.86s
  4. Walgreens, 2.31s
  5. Etsy, 2.44s

Top 5 mobile sites from Thanksgiving to Cyber Monday (Nov. 26-30) based on median webpage response time:

  1. W. Grainger Inc., 1.08s
  2. Amway Global, 1.28s
  3. Apple, 1.34s
  4. Costco, 1.56s
  5. Systemax, 1.59s

Catchpoint 100% Club

These sites were available 100% of the time from Black Friday to Cyber Monday:

  • Kohls (desktop and mobile)
  • BestBuy (desktop and mobile)
  • com (desktop and mobile)
  • HomeDepot (desktop and mobile)
  • Barnes&Noble (desktop only)
  • QVC (desktop only)
  • Staples (desktop only)
  • Walgreens (desktop only)
  • Amway (desktop only)
  • Sears (mobile only)
  • Dell (mobile only)
  • Grainger (mobile only)
  • Zulily (mobile only)
  • Systemax (mobile only)

The post Preparing for Online Events All Year Long appeared first on Catchpoint's Blog.

2015 Year in Review: Cyberattacks and Security

Perf Planet - Thu, 01/21/2016 - 13:06

It’s a new year and time to look back at the year that was in cyberattacks.

Perhaps the biggest story of the year is a non-story: the fact that we’re still talking about the same types of breaches we have been for a few years now. This proves that prevention measures are not yet sufficient and that eCommerce and hospitality websites are still lucrative targets. These sites’ large pools of clients mean the financial gain from breaching even a single site can be substantial.

For their part, banks have become better at acting to mitigate the direct pecuniary impact of these breaches for their customers, and are shutting off exposed credit cards faster than ever. Such vigilance, however, only serves to increase the economic incentive for attackers to continue looking for new sites to breach, and does nothing to prevent or minimize the flow of personally identifiable information (PII) to black market communities.

Flavor of the (year?)

The general types of attacks that continue to be problematic for eCommerce businesses are D/DoS, brute force attacks, SQLi, directory traversal, and account hijacking. A relatively new entry into the attack types is malvertising, where 3rd party advertising software inadvertently injects malware into an otherwise legitimate site. (Forbes was recently subject to a high profile attack of this type when they forced users to turn off ad blocking software to access the site).

The risk with malvertisement attacks is that they are hard to track down, since advertisements are generally handled by third parties. They exploit the name recognition and trust built up between the brand name of the site displaying the ads and the user; this increases the chance that the user will lower their guard. It also damages the brand of the site when malvertisement is discovered.

The low cost of D/DoS attacks and malvertisements (relative to the potential reward) means that we can expect these types of attack to continue for as long as it is economically viable.

What’s in a trend? 

According to Hackmageddon, 2015 marked a persistent increase in cyberattacks over 2014, with the notable exception of October, when attacks decreased slightly compared to the previous year. From this we can determine that while certain attack vectors may change, organized crime syndicates continue to realize profits from cybercrime with relatively little downside, so from the criminals’ perspective there is little incentive to stop. That attacks are becoming more sophisticated and cheaper to execute through distributed computing, coupled with the relatively low risk of being apprehended by law enforcement, will only embolden criminals until the general public begins to place a higher priority on security.

The only way to combat dedicated attackers is with a comprehensive security plan and properly configured logging tools that enable visibility into your site traffic. For our part, we at Yottaa leverage our Traffic Analytics platform to generate real-time visibility into traffic to client sites and offer tools to mitigate the traffic right from the dashboard. Our team of support engineers proactively monitors our client sites for suspicious traffic, notifies clients if any is found, and provides potential mitigation techniques.

I’ll part with the thought that maybe critical mass has been reached: after the devastating breaches of recent years, maybe this will be the year that layered defenses across the major web industries strengthen to the point that the economics cease to work for some of the more common attack vectors and techniques. Great as that would be, we must suggest bracing for another year of headline-making attacks.

 

The post 2015 Year in Review: Cyberattacks and Security appeared first on Yottaa.
