Feed aggregator

Production: Performance where it REALLY matters!

Perf Planet - Tue, 12/01/2015 - 11:56

“Production is where performance matters most, as it directly impacts our end users and ultimately decides whether our software will be successful or not. Efforts to create test conditions and environments exactly like Production will always fall short; nothing compares to production!” These were the opening lines of my invitation encouraging performance practitioners to apply […]

The post Production: Performance where it REALLY matters! appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

Reducing Single Point of Failure using Service Workers

Perf Planet - Tue, 12/01/2015 - 00:41

If you are building a website today, there’s a good chance that you rely on 3rd party libraries to provide you with extra functionality. Tracking scripts, A/B testing and adverts are just a few of the many reasons why you would want to use a 3rd party library. The problem is that if the library you use is hosted on another server, you risk creating a single point of failure (SPOF). If for any reason the server hosting these libraries goes down, or is slow to respond, your site will unfortunately be affected. For example, have a look at the video below.

The video shows what happens when you experience a SPOF. The left side shows the page without being blocked by external scripts, and the right side shows the same page being affected by a SPOF. Notice the difference in load times! Even if you have a highly optimized site, it can take considerably longer to load because it will be blocked while it waits for the external resource to download. SPOF has been written about previously on the Performance Calendar, and it’s a problem that continues to plague us as developers.

Fortunately, we can use the power of Service Workers to reduce the risk associated with 3rd party libraries. Using Service Workers, we can force a resource to timeout if it takes too long to download. If you aren’t familiar with Service Workers, they are JavaScript Workers that run in the background, separate from a web page, and they open the door to features which don’t need a web page or user interaction. In this post, we will be using a feature of Service Workers that will allow us to intercept network requests and if they take longer than expected, we will be able to respond accordingly.

Imagine the following scenario – I have a simple web page and I am loading the Angular library from the Google CDN. If for any reason the Google CDN takes longer to load than expected, I can respond appropriately instead of blocking the load of my entire page. In order to get started with Service Workers, you’ll need to update your HTML file to reference a Service Worker file.

<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
  <title>title</title>
</head>
<body>
  <script>
    // Register the service worker
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('./spof.js').then(function(registration) {
        // Registration was successful
        console.log('ServiceWorker registration successful with scope: ', registration.scope);
      }).catch(function(err) {
        // Registration failed :(
        console.log('ServiceWorker registration failed: ', err);
      });
    }
  </script>
  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.3.15/angular.min.js"></script>
</body>
</html>

In the code above, I am first checking if the browser supports Service Workers, and if it does, I am registering a file named spof.js. You’ll also notice that at the bottom of the page I’ve included a reference to Angular hosted on the Google CDN. This is merely an example; you could use any JavaScript file.

Next, we need to create a JavaScript file called spof.js that will contain the code for the Service Worker.

function timeout(delay) {
  return new Promise(function(resolve, reject) {
    setTimeout(function() {
      resolve(new Response('', {
        status: 408,
        statusText: 'Request timed out.'
      }));
    }, delay);
  });
}

self.addEventListener('fetch', function(event) {
  // Only fetch JavaScript files for now
  if (/\.js$/.test(event.request.url)) {
    event.respondWith(Promise.race([timeout(2000), fetch(event.request.url)]));
  } else {
    event.respondWith(fetch(event.request));
  }
});

The code above is pretty straightforward. I am adding an event listener to intercept any outgoing fetch requests. Next, I’m checking to see if the file is a JavaScript file – in your case this could be any file that is hosted on a 3rd party server. Then I am using ES6’s Promise.race() to determine whether the fetch request completes before a set timeout period of 2 seconds (2000 milliseconds). Promise.race() settles with the result of whichever promise finishes first: if the request took longer than 2 seconds, we return a 408 response; if we downloaded the file within 2 seconds, we simply return the successful resource. I’ve chosen 2 seconds as a timeout, but you could use any amount of time you deem appropriate.
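Outside of a Service Worker, the same race pattern can be sketched in plain, Node-runnable JavaScript (a minimal sketch; `fetchWithTimeout` is an illustrative name, and plain objects stand in for real Response objects):

```javascript
// Resolve with a stand-in 408 "response" after `delay` milliseconds.
function timeout(delay) {
  return new Promise(function (resolve) {
    setTimeout(function () {
      resolve({ status: 408, statusText: 'Request timed out.' });
    }, delay);
  });
}

// Race a request promise against the timeout: whichever settles first wins.
function fetchWithTimeout(requestPromise, ms) {
  return Promise.race([timeout(ms), requestPromise]);
}

// A fast "request" beats a 2000 ms timeout...
fetchWithTimeout(Promise.resolve({ status: 200 }), 2000).then(function (res) {
  console.log('fast:', res.status); // fast: 200
});

// ...while a slow one loses the race and yields the 408.
var slow = new Promise(function (resolve) {
  setTimeout(function () { resolve({ status: 200 }); }, 300);
});
fetchWithTimeout(slow, 50).then(function (res) {
  console.log('slow:', res.status); // slow: 408
});
```

In the Service Worker itself, `fetch(event.request.url)` plays the role of the request promise.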

That’s it! We can now easily test this in action. I find one of the best ways to do this is to use the built-in throttling functionality in Google Chrome to simulate a slower network connection. Fire up your Developer Tools in Google Chrome and head over to the Network tab. From there, check “Disable cache” and choose a throttling option from the drop-down. In order to test, I chose one of the presets: GPRS 50KB.

If you refresh the web page and view the network requests, you’ll be able to see the Service Worker kick in.

In this case it returns a 408 HTTP response, but if we remove the throttling and the file returns in under 2 seconds, we will see a successful 200 HTTP response.

It’s worth noting that Service Workers require HTTPS in order to work. They are equipped with some pretty powerful features that could be used for man-in-the-middle attacks, which is why HTTPS is required. If you’d like to test this example out while in development, Service Workers are designed to work over localhost. It’s also worth noting that the Service Worker needs to be installed before any of this functionality kicks in, which means it will only work for subsequent visits to your site and not first-time visits.

That said, using Service Workers to reduce SPOF can be a useful way to ensure that your site is available and isn’t at the mercy of 3rd party servers. As with any new browser technology, not every browser currently supports Service Workers. However, the best thing about them is that if your browser doesn’t support them, you will still have a normal website, as the functionality simply won’t kick in. As Bruce Lawson puts it: “It’s perfect progressive enhancement.”

If you’d like to play around with the code in this post, I’ve created a repo on GitHub. There is a demo version of this example that you can use to test over at deanhume.github.io/Service-Workers-Fetch-Timeout. Fire up your developer tools as above and you’ll be able to see the code in action.

It would be great to see this functionality rolled out across more websites as it can be an easy way to reduce SPOF. If you’d like to learn more about Service Workers, I recommend heading over to the repo on GitHub. The example in this article could even be extended further to include caching. Jake Archibald wrote a really helpful article called the Offline Cookbook, which is a great guide to caching using Service Workers. Finally, I recommend taking a look at the Service Worker Cookbook created by the team at Mozilla for more useful Service Worker tips and tricks!

It’s Never Been Easier to Analyze Third Party Content with Zoompf

Perf Planet - Mon, 11/30/2015 - 10:55

Today I’m happy to announce the release of some great new features to help you analyze (or ignore) the performance impact of the Third Party Content on your website using Zoompf Web Performance Optimizer.

Third Party Content is any content linked from your webpage over which you do not have direct control. For example, an advertising beacon or externally hosted JavaScript library. First Party Content, on the other hand, refers to content you either host yourself or have the ability to update and change. JavaScript hosted on your servers, images on a separate image server, etc. In general, you can optimize your First Party content but not your Third Party (other than choosing who you wish to partner with).

The new features are designed to help you more easily classify which of your content is First vs. Third party so you can focus your performance improvement efforts on that content over which you have direct control.

Classifying First and Third Party Domains

To begin, let’s dive straight into an example. Go ahead and select New Test from the top level nav, and then Create New Test. Enter your site URL and you’ll immediately see a new expanding section similar to this:

This is a new real-time URL tester that not only tests the validity of the URL you provide, but also sniffs out the domains of all linked resources off the base HTML page. Pick the domains you know are yours (First Party Content) and leave out those that are known Third Party. This will seed the new performance test with the list of domains it should treat as First Party.

Go ahead and click Start Test and wait for the snapshot to complete. When done, if a large percentage of Third Party Content was identified (say you missed a domain in your initial selection), you’ll see a warning like this:

This warning typically only displays if you didn’t select a large First Party Domain in your initial selection above. It could also occur, though, if a large percentage of Third Party Content is loaded by JavaScript execution or linked from secondary resources like your CSS files.

If the warning is in error, just dismiss it and you’ll never see it again for that Performance Test. If it’s accurate, though, then follow the link to update your First Party Whitelist and you’ll see a view similar to this:

This is a list of all Third Party domains detected during the performance snapshot. To see more detail, just click any of the host name links and you’ll see all the linked content, filtered by domain, for that snapshot. For example:

The domain filter option is new, and an extension of the rich content filtering introduced last month.

Hit the Clear button to reset and filter on different domains, file types, or specific file names.

Once you’ve confirmed which domains were erroneously tagged as Third Party, return to the Third Party Hosts view (click the Third Party tab, Hosts sub-tab), select the First Party hosts and then click the Set as First Party toolbar button.

This action will reclassify all linked resources for the selected hosts as first party content, recalculating all totals and rollups to reflect this change. In addition, your selections will be applied to all future performance snapshots as well, so you’ll only have to do this once.

Hit Yes, and the page will refresh with the new totals reflecting your changes.

Notice how the Third Party percentage dropped from 90% to 26%, while the rollup amounts on all the other tabs increased. Click into any tab to see the updated results from your selection.

Now you may have noticed the original warning message above mentioned that this action can be undone later. To (re)classify content as Third Party, simply visit the Hosts sub-tab of the Content tab, and check off those domains that should be Third Party using a similar process to above. This will update the snapshot once again, and also (re)apply to future tests.

In addition, if you visit a specific detail page for an analyzed item, you’ll see options to set as First Party or Third Party at the top of the specific detail page, for example.

This makes it easy to correct results as you are digging down into specific details.

Whitelisting and Blacklisting

Okay, we’re almost done here, but there’s one last important topic to cover. You may have noticed earlier I mentioned that all changes apply to the current snapshot and all future snapshots. This is important to recognize, as it means you should only rarely have to administer your Third Party content list. Once you establish your baseline settings, they’ll just auto-apply for each new test and snapshot.

If you wish to fine tune these settings, you can manage these on a new tab under Settings called Third Party Content.

This page manages your First Party Whitelist (domains to always classify as First Party) as well as Third Party Blacklist (domains to always classify as Third Party). These settings are automatically updated whenever you apply changes using the process in the previous section, but if you wish to fine tune your results further you can do so directly on this page.

For example, in the above screenshot you may have noticed there were a number of subdomains of wp.com listed. To simplify this configuration, as well as automatically handle any new subdomains, you could just replace all those entries with a single entry for wp.com like this:

You can also supply regular expressions in these settings if you want to get even fancier, but I’ll leave that as an exercise for the reader.
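For instance (a hypothetical sketch – the exact regular-expression dialect Zoompf accepts isn’t specified here, so this shows the idea in JavaScript syntax), a single pattern that covers wp.com and all of its subdomains might look like:

```javascript
// Match "wp.com" itself or any subdomain such as "s0.wp.com" or "stats.wp.com",
// but not lookalike domains ("notwp.com") or suffix tricks ("wp.com.evil.example").
var firstParty = /(^|\.)wp\.com$/;

console.log(firstParty.test('wp.com'));              // true
console.log(firstParty.test('s0.wp.com'));           // true
console.log(firstParty.test('notwp.com'));           // false
console.log(firstParty.test('wp.com.evil.example')); // false
```

Anchoring with `$` and requiring either the start of the string or a literal dot before `wp.com` is what keeps the lookalikes out.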

In Closing

Managing your Third Party/First Party whitelist is an important step in effectively managing the noise level of your performance test results. With some simple care and feeding, the quality of your results will improve greatly. We hope you find these tools helpful, and as always, we’re very open to suggestions for further improvements.

If you’re not currently using Zoompf Web Performance Optimizer, Contact Us to start your trial today!

The post It’s Never Been Easier to Analyze Third Party Content with Zoompf appeared first on Zoompf Web Performance.

Last-Minute Black Friday Business Rescue & Cyber Monday Readiness

Perf Planet - Mon, 11/30/2015 - 09:16

In order to be ready for Christmas season, online retailers typically bring their shops into shape right before Black Friday. Together with Cyber Monday this is the most important day in the retailer’s year. Stilnest.com (@Stilnest) is a publishing house for designer jewelry, running their online shop on Magento. While the guys at Stilnest did a […]

The post Last-Minute Black Friday Business Rescue & Cyber Monday Readiness appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

Ubuntu - Changing the bootsplash screen

Corey Goldberg - Sun, 11/29/2015 - 14:08
Plymouth is the default bootsplash app used in Ubuntu... It displays the Ubuntu logo while booting.

For as long as I can remember, every time I install the proprietary NVIDIA display driver, my bootsplash gets borked. It just displays a black screen, then a few flickers of the Ubuntu logo or NVIDIA logo, then the desktop is presented.

It seems that simply resetting the default plymouth theme fixes this issue:

    $ sudo update-alternatives --config default.plymouth

apply changes:

    $ sudo update-initramfs -u

Here is how to switch the bootsplash to [my favorite] the solar theme:

$ sudo apt-get install plymouth-theme-solar
$ sudo update-alternatives --config default.plymouth
$ sudo update-initramfs -u

then reboot!

more info:
https://wiki.ubuntu.com/Plymouth
Categories: Load & Perf Testing

Black Friday Night – Saving your eCommerce business – Real Time!

Perf Planet - Fri, 11/27/2015 - 17:53

It’s Black Friday in the US! For me it’s actually already early Saturday being located in Europe. The past hours I spent on troubleshooting eCommerce sites that went down during their Black Friday peak sales periods. My friends at hybris kept me in the loop with their effort on keeping their customer’s commerce sites up and running. […]

The post Black Friday Night – Saving your eCommerce business – Real Time! appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

Oracles from the Inside Out, Part 8: Successful Stumbling

DevelopSense - Michael Bolton - Thu, 11/26/2015 - 17:18
When we’re building a product, despite everyone’s good intentions, we’re never really clear about what we’re building until we try to build some of it, and then study what we’ve built. Even after that, we’re never sure, so to reduce risk, we must keep studying. For economy, let’s group the processes associated with that study—review, […]
Categories: Software Testing

Cool new stuff on the NGINX front

Perf Planet - Tue, 11/24/2015 - 14:38

Although NGINX conf 2015 took place back in September, not a day passes without someone asking me to summarize highlights from the event, future NGINX plans, what’s coming up, and related information requests, so I prepared this blog as an update on “what’s up with NGINX”. But first, I would like to once again thank the NGINX team for […]

The post Cool new stuff on the NGINX front appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

What Happened?

Alan Page - Mon, 11/23/2015 - 19:36

As I approach the half-century mark of existence, I’ve been reflecting on how I’ve ended up where I am…so excuse me while I ramble out loud.

Seriously, how did a music major end up as a high-level software engineer at a company like Microsoft? I have interviewed hundreds of people for Microsoft who, on paper, are technology rock stars, and I (yes, the music major) have had to say no-hire to most of them.

I won’t argue that a lot of it is luck – but sometimes being lucky is just saying yes at the right times and not being afraid of challenges. Yeah, but it’s mostly luck.

I think another part is my weird knack for learning quickly. When I was a musician (I like to consider that I’m still a musician, but I just don’t play that often anymore), I was always the one volunteering to pick up doubles (second instruments) as needed, or to fill whatever hole needed filling in order to get into the top groups. Sometimes I would fudge my background if it would help – knowing that I could learn fast enough to not make myself look stupid.

In grad school (yeah, I have a master’s in music too), I flat out told the percussion instructor – who had a packed studio – that I was a percussionist. To be fair, I played snare drum and melodic percussion in drum corps for several summers, but I didn’t have the years of experience of many of my peers. So, as a grad student, I was accepted into the studio, and I kicked ass. In my final performance of the year for the music staff, one of my undergrad professors blew my cover and asked about my clarinet and saxophone playing. I fessed up to my professor that I wasn’t a percussion major as an undergrad, and that I lied to get into his program. When he asked why, I told him that I thought if I told him the truth, he wouldn’t let me into the percussion program. He said, “You’re right”. And then I nailed a marimba piece I wrote and closed out another A on my master’s transcript.

I have recently discovered Kathy Kolbe’s research and assessments on the conative part of the brain (which works with the cognitive and affective parts of the brain). According to Kolbe, the cognitive brain drives our preferences (which tools like Myers-Briggs or Insights training measure), but conative assessments show how we prefer to actually do things. For grins, I took a Kolbe assessment, and sure enough, my “score” gives me some insights into how I’ve managed to be successful despite myself.

I’m not going to spout off a lot about it, because I take all of these assessments with a grain of salt – but so far, I give this one more credit than Myers-Briggs (which I think is ok), and Insights (which I find silly). I am curious if others have done this assessment before and what they think…

By the time my current product ships, I’ll be hovering around the 21 year mark at Microsoft. Then, like today, I’m sure I’ll still wonder how I got here. I can’t see myself stopping to think about this.

And then I’ll find something new to learn and see where the journey takes me…

(potentially) related posts:
  1. Twenty Years…and Change
  2. Twenty-Ten – my best year ever
  3. 2012 Recap
Categories: Software Testing

Holiday Shopping Preview: We’re Ready, Are you?

Perf Planet - Mon, 11/23/2015 - 08:00

The online holiday shopping season starts in earnest this week. Our benchmark e-commerce sites have been picked out, our tests have been set up, now we’ll see how well eRetailers have heeded our advice and the lessons of the past.

Who we’re watching:

We’ll be monitoring availability and performance for 50 sites. This includes top e-commerce pureplays like Amazon, Overstock, NewEgg, Groupon, Etsy, Wayfair and Jet along with the online storefronts of retailing giants like Target, Wal-Mart, Macy’s, JCPenney, Neiman Marcus, Best Buy, Staples, ToysRUs and more. We’ll monitor both desktop and mobile sites for these retailers. Mobile traffic was higher during the holiday shopping season last year though more than 2/3 of sales were still attributed to desktop shoppers, according to IBM’s Digital Analytics Benchmark.

What we’ll be monitoring:

We’ll monitor our benchmark sites for nine different metrics including:

  • Wait time: how long it takes for the browser to connect to the web server and receive the first byte of the response.
  • Response time: how long it takes from the request being issued to the last byte of data received from the base URL.
  • Render start: how long until content first displays on the user’s screen.
  • Doc complete: how long it takes for the webpage to become interactive, though it may not be fully loaded at this point.
  • Webpage response: the time from the request being issued from browser to server to receiving the last byte of the final element on the page. This includes DNS resolution, wait time, and server connect time.
  • Downloaded bytes: the total size of the downloaded content of the web page in bytes.
  • Items: how many content items are on the webpage, including images, HTML and CSS files, JavaScript files, etc.
  • Hosts: how many web hosts serve the content on the page.
  • Availability: the percentage of time the web server is available and can be accessed by the browser.
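As a rough sketch of how the first two metrics relate to raw timestamps, here is a minimal example using Navigation Timing-style field names (illustrative only; a real monitoring agent collects far more detail than this):

```javascript
// Compute wait and response times from Navigation Timing-style timestamps
// (all values in milliseconds, relative to the start of the navigation).
function computeMetrics(t) {
  return {
    // Wait time: from request start until the first byte of the response arrives.
    waitTime: t.responseStart - t.requestStart,
    // Response time: from request start until the last byte of the base URL.
    responseTime: t.responseEnd - t.requestStart
  };
}

var sample = { requestStart: 120, responseStart: 310, responseEnd: 450 };
console.log(computeMetrics(sample)); // { waitTime: 190, responseTime: 330 }
```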
When we’ll be monitoring:

We’ll be running our tests all this week as many sites have sales and deals the whole week long. But we’ll be focusing especially on Thanksgiving Night, Black Friday and Cyber Monday. Cyber Monday is typically the biggest online shopping day not only of the holiday season but the entire year. Last year, $2.04bn was spent online on Cyber Monday, according to Catchpoint customer ComScore. Black Friday was second at $1.5bn. However, Thanksgiving wasn’t far behind at just over $1bn and the Saturday and Sunday of the holiday weekend accounted for another $2.01bn in holiday spending. The ComScore numbers track online spending only from desktop computers.

What to expect:

Last year, we saw a marked increase in page size (downloaded bytes) for both mobile and desktop sites and a corresponding increase in download complete and webpage response times. Unless online retailers heed our advice to slim down their sites and get third-party tags under control we expect to see this trend continue this year. In which case, we hope e-tailers have a plan in place for when things go wrong.

The post Holiday Shopping Preview: We’re Ready, Are you? appeared first on Catchpoint's Blog.

Black Friday/Cyber Monday Live Blog: Retailers’ Mobile & Web Performance

Perf Planet - Mon, 11/23/2015 - 06:19

We are several days away from the start of the Holiday season, and everything kicks off with Black Friday and Cyber Monday. You may recall that last year the Dynatrace team provided a live holiday shopping blog following the web performance action in real time. We will be doing the same this year, with some new coverage that […]

The post Black Friday/Cyber Monday Live Blog: Retailers’ Mobile & Web Performance appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

Don't lose user and app state, use Page Visibility

Ilya Grigorik - Fri, 11/20/2015 - 01:00

Great applications do not lose a user's progress and app state. They automatically save the necessary data without interrupting the user and transparently restore themselves as and when necessary - e.g. after coming back from a background state or an unexpected shutdown.

Unfortunately, many web applications get this wrong because they fail to account for the mobile lifecycle: they're listening for the wrong events that may never fire, or ignore the problem entirely at the high cost of poor user experience. To be fair, the web platform also doesn't make this easy by exposing (too) many different events: visibilityState, pageshow, pagehide, beforeunload, unload. Which should we use, and when?

You cannot rely on pagehide, beforeunload, and unload events to fire on mobile platforms. This is not a bug in your favorite browser; this is due to how all mobile operating systems work. An active application can transition into a "background state" via several routes:

  • The user can click on a notification and switch to a different app.
  • The user can invoke the task switcher and move to a different app.
  • The user can hit the "home" button and go to the homescreen.
  • The OS can switch the app on the user's behalf - e.g. due to an incoming call.

Once the application has transitioned to background state, it may be killed without any further ceremony - e.g. the OS may terminate the process to reclaim resources, the user can swipe away the app in the task manager. As a result, you should assume that "clean shutdowns" that fire the pagehide, beforeunload, and unload events are the exception, not the rule.

To provide a reliable and consistent user experience, both on desktop and mobile, the application must use the Page Visibility API and execute its session save and restore logic whenever the visibilitychange event fires. This is the only event your application can count on.

// query current page visibility state: prerender, visible, hidden
var pageVisibility = document.visibilityState;

// subscribe to visibility change events
document.addEventListener('visibilitychange', function() {
  // fires when user switches tabs, apps, goes to homescreen, etc.
  if (document.visibilityState == 'hidden') { ... }

  // fires when app transitions from prerender, user returns to the app / tab.
  if (document.visibilityState == 'visible') { ... }
});

If you're counting on unload to save state, record and report analytics data, and execute other relevant logic, then you're missing a large fraction of mobile sessions where unload will never fire. Similarly, if you're counting on beforeunload event to prompt the user about unsaved data, then you're ignoring that "clean shutdowns" are an exception, not the rule.

Use the Page Visibility API and forget that the other events even exist. Treat every transition to visible as a new session: restore previous state, reset your analytics counters, and so on. Then, when the application transitions to hidden, end the session: save user and app state, beacon your analytics, and perform all other necessary work.
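A minimal sketch of that session lifecycle (hypothetical saveSession/restoreSession names, with storage stubbed as a plain object so the flow is easy to follow and runnable outside a browser):

```javascript
// Stand-in for persistent storage (localStorage, IndexedDB, a beacon, ...).
var store = {};

function saveSession(state) {
  // Persist everything needed to resume where the user left off.
  store.session = JSON.stringify(state);
}

function restoreSession() {
  // Return the saved state, or a fresh one for a brand-new session.
  return store.session ? JSON.parse(store.session) : { draft: '', scrollY: 0 };
}

// Wire the lifecycle to visibility transitions.
function onVisibilityChange(visibilityState, appState) {
  if (visibilityState === 'hidden') {
    saveSession(appState); // the page may be killed after this point
    return appState;
  }
  return restoreSession(); // treat every 'visible' as a new session
}

var resumed = onVisibilityChange('visible',
  onVisibilityChange('hidden', { draft: 'hello', scrollY: 120 }));
console.log(resumed); // { draft: 'hello', scrollY: 120 }
```

In a browser, onVisibilityChange would be driven by document.addEventListener('visibilitychange', ...) with document.visibilityState as the first argument.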

If necessary, with a bit of extra work you can aggregate these visibility-based sessions into larger user flows that account for app and tab switching - e.g. report each session to the server and have it aggregate multiple sessions together.

Practical implementation considerations

In the long term, all you need is the Page Visibility API. As of today, you will have to augment it with one other event — pagehide, to be specific — to account for the "when the page is being unloaded" case. For the curious, here's a full matrix of which events fire in each browser today (based on my manual testing):

  • visibilitychange works reliably for task-switching on mobile platforms.
  • beforeunload is of limited value as it only fires on desktop navigations.
  • unload does not fire on mobile and desktop Safari.

The good news is that Page Visibility reliably covers task-switching scenarios across all platforms and browser vendors. The bad news is that today Firefox is the only implementation that fires the visibilitychange event when the page is unloaded; there are open Chrome, WebKit, and Edge bugs to address this. Once those are resolved, visibilitychange is the only event you'll need to provide a great user experience.

Let’s Do This!

Alan Page - Thu, 11/19/2015 - 15:47

A lot of people want to see changes happen. Some of those want to make change happen. Whether it’s introducing a new tool to a software team, changing a process, or shifting a culture, many people have tried to make changes happen.

And many of those have failed.

I’ve known “Jeff” for nearly 20 years. He’s super-smart, passionate, and has great ideas. But Jeff gets frustrated about his inability to make changes happen. In nearly every team he’s ever been on, his teammates don’t listen to him. He asks them to do things, and they don’t. He gives them a deadline, and they don’t listen. Jeff is frustrated and doesn’t think he gets enough respect.

I’ve also known “Jane” for many years. Jane is driven, to say the least. Unlike Jeff, she doesn’t wait for answers from her peers; she just does (almost) everything herself and deals with any fallout as it happens (and as time allows). It doesn’t always go well, and sometimes she has to backtrack, but progress is progress. Jane enjoys creating chaos and has no problem letting others clean up whatever mess she makes. Jane is convinced that the people around her “just don’t know how to get things done.”

Over the years, I’ve had a chance to give advice to both Jeff and Jane – and I think it’s worked. Jeff has been able to get people to help him, and Jane leaves fewer bodies in the wake of progress.

Jeff – as you may be able to tell, can be a bit of a slow starter. Or sometimes, a non-starter. He once designed an elaborate spreadsheet and sent mail to a large team asking every person to fill in their relevant details. When nobody filled it out, Jeff couldn’t believe the disrespect. I talked with him about his “problem”, and asked why he needed the data. His answer made sense, and I could see the value. Next I asked if he knew any of the data he needed from the team. “Most of it, actually”, he started, “but I don’t want to guess”.

Thirty minutes later, we filled out the spreadsheet, guessing where necessary, and re-sent the information to the team. In the email, Jeff said, “Here’s the data we’re using, please let me know if you need any corrections.” By the next morning, Jeff had several corrections in his inbox and was able to continue his project with a complete set of data. In my experience, people may stall to do work from scratch, but will gladly fix “mistakes”. Sometimes, you just need to kick start folks a bit to lead them.

Jane needed different advice. Jane is never going to be someone who puts together an elaborate plan before starting. But, I was able to talk Jane into taking just a bit of time to define a goal, and then create a list, an outline, or a set of tasks (or a combination), and sharing it with a few folks before driving forward. The time impact was either minimal or huge (depending on whether you asked me, or Jane), but the impact on her ability to get projects done was massive no matter who you ask. These days, Jane not only gets projects done without leaving bodies in her wake, but she actually receives (and welcomes) help from others on her team.

There are a lot of other ways to screw up leadership opportunities, and countless bits of advice to share to avoid screw-ups. But – the next time you want to make change and it’s not working, take some time to think about whether the problem is really with the people around you…or if the “problem” is you.

(potentially) related posts:
  1. Making Time
  2. Give ‘em What They Need
  3. The Easy Part
Categories: Software Testing

A Look Back at Black Friday Weekend ’14

Perf Planet - Thu, 11/19/2015 - 15:37

Black Friday weekend is around the corner, and while most people are counting down the days until they can feast on their favorite Thanksgiving dishes, others are focusing on getting their eCommerce websites ready for a weekend of shopping madness.

Despite the importance we’ve placed on preparing and optimizing your site several weeks before Black Friday weekend, it’s also a good idea to look back at last year’s results to get a glimpse of what could transpire during the nation’s biggest eCommerce weekend of the year.

Mobile Traffic Spikes Affect Load Times

Each year it seems like Black Friday evolves into something bigger than it was the year prior; this is especially the case for mobile traffic.

According to IBM’s Digital Analytics Benchmark, more than half of last year’s traffic between the hours of 6 pm EDT on Thursday and 6 pm EDT on Black Friday came from mobile. What’s even more astounding, though, is that mobile sites performed nearly as well as desktop sites during this time, but were still almost a second slower in loading other page assets which could negatively affect customer experience.

Another side effect of rapid mobile traffic increases can be severe outages. A major electronics retailer’s mobile site went down for 90 minutes on Black Friday last year as a result of a mobile traffic spike.

A Surge in Requests Comes with Serious Consequences

Mobile traffic influxes weren’t the only culprits of performance problems last year; major retailers experienced problems and outages due to an increase in bytes and requests on their websites. Most of these cases consisted of primary content loading faster than average, while JavaScript or third parties caused serious lag in performance times.

One clothing apparel company in particular was among the slowest in mobile speed rankings for Black Friday Weekend 2014 due to a single third party element that was blocking other elements on the page from loading, which may have left their customers unhappy and navigating to a competitor’s site. This is a prime example of why limiting the number of objects on your page and properly configuring these requests is crucial.

Using flashy design elements to lure your customers to the featured discounts and deals makes it a challenge to maintain lighter pages; therefore, paying close attention to optimization and page construction is a key ingredient to your business’s success. Ensuring that non-essential elements are loaded after onload is an effective way of minimizing their impact on your overall performance.
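One common way to do this, sketched below with a hypothetical widget URL, is to inject the non-essential third-party script only after the window load event fires, so it can never block your primary content from rendering:

```javascript
// Inject a script tag into the given document. Taking the document as
// a parameter keeps the helper easy to test outside a real browser.
function loadDeferredScript(doc, src) {
  var s = doc.createElement('script');
  s.src = src;
  s.async = true; // don't block parsing even once injected
  doc.head.appendChild(s);
  return s;
}

// In the browser, wait for onload before pulling in the widget
// (the URL here is a made-up example, not a real endpoint).
if (typeof window !== 'undefined' && typeof document !== 'undefined') {
  window.addEventListener('load', function () {
    loadDeferredScript(document, 'https://example.com/widget.js');
  });
}
```

Because the script is added after onload, a slow or unresponsive third-party server delays only the widget itself, not your page's visible content.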

Have a Plan for When Failure Strikes

As we mentioned in one of our recent posts, some failures are just bound to happen. Having a contingency plan ready to be deployed will minimize the damages of the inevitable outage. Last year, Staples executed their contingency plan when they suffered from a series of partial outages and slowness on Cyber Monday. This consisted of an error screen that displayed a link to their weekly ads and a customer service phone number.

Black Friday Weekend 2014 was no stranger to performance problems. Issues like latency and outages hit some major retailers; some could have been prevented, while others were simply unavoidable. The biggest lesson to be learned and carried into next week is that failing compromises your customers’ satisfaction, which obviously puts your revenue at risk. Taking the last few days before Holiday Shopping Season 2015 kicks off to double-check your site and build a contingency plan can mean the difference between keeping your customers happy and losing them to your competitors.

Download eCommerce Web Performance Results from Thanksgiving Weekend, 2014 to access a complete study of last year’s eCommerce results.

The post A Look Back at Black Friday Weekend ’14 appeared first on Catchpoint's Blog.

EnterpriseJS Boston Summit Highlights

I was lucky to be in town at the same time when EnterpriseJS Boston Summit took place earlier this week. I am just starting to wrap my head around Node.js (being a guy with a background in .NET and Java) and am therefore not at all an expert in the field. Check out EnterpriseJS.io which is “A […]

The post EnterpriseJS Boston Summit Highlights appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing
