Feed aggregator

UK News Sites’ Performance During “Brexit” Vote Reveals User Experience Gaps

Perf Planet - Fri, 06/24/2016 - 12:57

British news sites performed well during last night’s “Brexit” vote. Load times remained relatively steady as people flocked to online news outlets to follow the country’s referendum on leaving the European Union. Steady doesn’t necessarily mean good, though. The slowest sites took roughly seven times longer to load than the top performers. For users impatiently looking for updates on the outcome, where they checked made a big difference.

Pages on the Daily Mail’s desktop site took more than 20 seconds to fully load as ballots were counted, while BBC and Sky News pages took less than 3 seconds. The fastest among the top five Alexa-ranked UK news sites, BBC and Sky News deviated little, if at all, from their performance over the previous six days. Meanwhile, load times at DailyMail.com slowed from roughly 17 seconds to more than 25 seconds at one point early Friday morning – more than enough for users to notice, even if they were already accustomed to slow performance.

The Guardian’s site performance, on the other hand, actually improved during the run-up to the announcement, loading pages in roughly six seconds. The Daily Telegraph added about a second to its page load time, climbing toward 8 seconds during the same peak period.

This chart shows the median web page load times (document complete) in milliseconds for each of these sites during the seven-day span through Friday afternoon, June 24th in the UK. Times shown are Eastern US, five hours behind UK time (BST).

It’s worth noting that this performance reflects the desktop experience without ad blocking. Especially in the case of the Daily Mail, ad content played the primary role in slowing load times. It’s possible that a growing number of regular visitors to these sites have responded to performance lags not by switching loyalties but by blocking ads. A recent Catchpoint study suggests that for most, it works: media sites generally delivered far better user experiences with ad blocking. For someone anxiously refreshing a news page for updates on a developing story like this one, blocking ads could change the user experience dramatically – but at the expense of the site’s revenue.

The post UK News Sites’ Performance During “Brexit” Vote Reveals User Experience Gaps appeared first on Catchpoint's Blog.


Brexit Crunch on Financial Services Sites

Perf Planet - Fri, 06/24/2016 - 08:39

With the Brexit vote decided, financial markets are reacting to Britain’s decision to leave the European Union. Below is a view showing the performance of various financial services websites, aggregated by industry and country. The most immediate impact has been on UK-based brokerage sites. The team at Dynatrace proactively monitors hundreds of financial services […]

The post Brexit Crunch on Financial Services Sites appeared first on about:performance.

Web Performance, Chicago-Style

Perf Planet - Fri, 06/24/2016 - 05:25

For a pizza connoisseur, comparing deep-dish Chicago-style pizzas to thin-crust New York-style can quickly turn into a heated debate. Just ask Jon Stewart.

On our blog, we tend to focus a lot on failures and performance issues because learning from our mistakes is usually the best way to prevent them from happening again. Today, however, we’re diverting our focus to examine a story of how a major retail site took the hard road to build a faster website for its end users.

W.W. Grainger, founded in 1927 in Chicago to provide businesses with supplies, launched its first website in 1996. Online commerce is a huge focus for this Fortune 500 company.

While reviewing a recent update of our benchmark index, I discovered this amazing performance improvement the team achieved and even made a point to congratulate them at Velocity 2016 in Santa Clara.

Grainger Render Start Time

As displayed in the chart above, their render start time improved by 43%.

I love the render start metric—it gives you a sense of when the user stopped staring at a blank screen. Of course, my experience with Grainger’s site is vastly different from my recent experience with Seiko.

This significant performance improvement was achieved by moving to inline CSS, which drove a faster render start time even as the entire site moved to SSL.

Congratulations, Team Grainger!

PS: Feels like the deep-dish pizza became a thin-crust one here. Either way, the consumer is the ultimate winner!

Mehdi

 

The post Web Performance, Chicago-Style appeared first on Catchpoint's Blog.

Automatic Problem Detection with Dynatrace

Perf Planet - Thu, 06/23/2016 - 15:20

Can you imagine automatic problem detection being a reality?! What would it take to make it possible, practical and functional? Over the years we at Dynatrace have seen a lot of PurePaths being captured in small to very large applications showing why new deployments simply fail to deliver the expected user experience, scalability or performance. Since I started my […]

The post Automatic Problem Detection with Dynatrace appeared first on about:performance.


Stop Overlooking the Real Reason People Block Ads: User Experience

Perf Planet - Thu, 06/23/2016 - 09:13

At Velocity 2016, the first keynote session was led by Bruce Lawson, CTO of Opera, entitled “Making Bad Ads Sad.” Rad!

Having worked at DoubleClick for so long, I hold this topic near and dear to my heart, so it goes without saying that the talk hit the spot!

It’s not just about the money. So much of the talk about ad blocking focuses on revenue and business models that it becomes easy to overlook the real force behind this disruption: people want a better user experience, especially on mobile devices and mobile data plans. The ad industry can solve this problem, but only if it stays focused on the real reason consumers use ad blockers.

Don’t listen to self-described experts who claim that people hate ads. They don’t. They hate waiting. They hate ads interfering with their user experience. They hate ads eating into their battery life and their data plans. These complaints are all about performance, not content. If consumers didn’t notice ads causing a big performance drop-off, they wouldn’t invest the time and effort to block them.

Here’s a surprise, though: sometimes it’s not ads that hurt performance, but ad blockers. That was one finding when we recently looked at how a handful of prominent sites were affected by ad blocking. In a few cases, pages actually loaded slower when ad blocking was turned on. Note that we used Pi-Hole, a DNS-based ad blocker, to test the difference – so our methodology could have produced different results than Adblock Plus or another popular browser extension might have generated. But the point stands: you cannot assume that ad blocking will always improve site performance.
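
To make the DNS-based distinction concrete, here is a rough sketch of the mechanism a resolver-level blocker like Pi-Hole uses (the blocklist entries and addresses below are invented for illustration; a browser extension, by contrast, filters requests inside the page itself):

```python
# Sketch of DNS-based ad blocking: hostnames on a blocklist resolve to a
# sinkhole address, so the ad request never reaches the ad server at all.
# Blocklist entries and the upstream answer here are hypothetical.
SINKHOLE = "0.0.0.0"
BLOCKLIST = {"ads.example.com", "tracker.example.net"}

def resolve(hostname, upstream_lookup):
    """Answer from the blocklist first; otherwise defer to the upstream resolver."""
    if hostname in BLOCKLIST:
        return SINKHOLE
    return upstream_lookup(hostname)
```

Because blocking happens at name resolution, the page never even opens a connection for the blocked asset – which is also why this approach can behave differently from an in-page extension.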

Even if ad blocking doesn’t always solve the problem consumers want it for, there are plenty of examples – especially in the news media – of sites that deliver worse user experiences because ads perform poorly and slow them down.

The industry needs leadership focused not on the economic problem, but on the customer experience and performance problems. The IAB is in a position to take the lead here, set and enforce standards, and make sure that ads do not interfere with user experience, whether they’re part of ecommerce sites, news sites, or any other sites. Consumers aren’t waiting. They’re taking action and blocking ads, and if the industry doesn’t wake up and fix this soon, consumers will have already solved it – and not in a way that the advertising world wants.

The solution is not another tag that detects whether you are blocking ads. That’s a hack. The solution is no more chains of 3xx redirects, no more malware, no more blocking of the content while JavaScript executes. It means more relevant ads and more integrated ads.

The following table lists the 20 mobile websites used for this study, with web page load times shown in milliseconds (ms). The sites that exhibited slower performance with ad blocking on are listed at the top. For the other sites, negative values in the Difference column indicate how much faster the load time was with ad blocking engaged – which is one of its intended effects.

 

Company               Ad blocking OFF   Ad blocking ON   Difference (with ad blocking on)
Southwest                       4,159            7,280        3,121
Bank of America                 3,841            5,267        1,426
Chase                           3,270            4,526        1,256
The Huffington Post             3,591            4,390          799
Citibank                        3,814            4,588          774
Wells Fargo                     3,387            3,794          407
Amazon                          1,637            1,683           46
Booking                         3,226            3,212          -14
Capital One                     3,852            3,768          -84
BestBuy                         4,376            3,985         -391
Walmart                         6,090            5,267         -823
Costco                          2,516            1,624         -892
Target                          9,319            8,218       -1,101
Expedia                         6,078            4,953       -1,125
Forbes                          5,802            4,228       -1,574
Fox News                        8,332            5,970       -2,362
Kayak                           7,692            5,135       -2,557
New York Times                  8,217            4,788       -3,429
Hotels                          8,863            4,309       -4,554
CNN                            14,764            7,604       -7,160

 

The post Stop Overlooking the Real Reason People Block Ads: User Experience appeared first on Catchpoint's Blog.

Fixing SQL Server Plan Cache Bloat with Parameterized Queries

Perf Planet - Wed, 06/22/2016 - 14:11

Developers often believe that database performance and scalability issues they encounter are issues with the database itself and, therefore, must be fixed by their DBA. Based on what our users tell us, the real root cause, in most cases, is inefficient data access patterns coming directly from their own application code or by database access frameworks […]

The post Fixing SQL Server Plan Cache Bloat with Parameterized Queries appeared first on about:performance.
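
The excerpt above is truncated, but the core idea behind the post's title is easy to sketch: every distinct ad-hoc SQL string can become its own plan-cache entry, while a parameterized query is a single entry reused for all values. A minimal illustration (table and column names are invented; the placeholder syntax is the one drivers such as pyodbc use against SQL Server):

```python
# Why ad-hoc SQL bloats the plan cache: each distinct query text is a separate
# cache entry; a parameterized query is one entry reused for every value.
def adhoc_query_texts(customer_ids):
    # One unique SQL string (and plan) per literal value.
    return {f"SELECT * FROM Orders WHERE CustomerId = {cid}" for cid in customer_ids}

def parameterized_query_texts(customer_ids):
    # One SQL string, no matter how many values are bound at execution time.
    return {"SELECT * FROM Orders WHERE CustomerId = ?" for _cid in customer_ids}
```

For 100 distinct customer IDs, the first set holds 100 query texts; the second holds just one.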


Meeting Minutes from Velocity 2016

Perf Planet - Wed, 06/22/2016 - 13:09

It’s Velocity time, and the people who care about Performance, Continuous Delivery, and DevOps are gathered in sunny Santa Clara, California. Thankful to be here, I want to share my notes with our readers who don’t have the chance to experience it live. Let’s dig right into it! Keynotes on Wednesday, June 22nd: There were several keynote […]

The post Meeting Minutes from Velocity 2016 appeared first on about:performance.


And Sometimes Ends With A

QA Hates You - Wed, 06/22/2016 - 05:57

Security Starts at the POS.

In this case, POS means Point of Sale.

However, not everyone is as familiar with the acronyms and argot as you are, so be careful when using them without explaining them first. This applies to your interfaces as well as your written work.

Categories: Software Testing

Webcast: Federating Data with Presto to Build an Enterprise Data Portal

O'Reilly Media - Tue, 06/21/2016 - 14:11
Join us as we dive deep into why so many of today's leading companies are using flexible data models as the foundation for their data strategies.

Webcast: Detecting Anomalies in IoT with Time Series Analysis

O'Reilly Media - Tue, 06/21/2016 - 14:11
Join us for a live webcast where you will learn how to overcome common challenges in finding anomalies in time series data.

Custom monitoring plugins now available (EAP)

Perf Planet - Tue, 06/21/2016 - 13:49

With custom plugins you can extend Ruxit with additional metrics that support your organization’s unique monitoring needs. Custom metrics are displayed in the Ruxit UI alongside the standard set of performance metrics. Custom monitoring plugins are now available to participants of our Early Access Program.

Custom plugins can be created for any process that exposes an interface, such as metrics served over HTTP (for example, databases, applications, and load balancers). You can even define alerting criteria so that problems are raised whenever certain thresholds are breached.

Custom monitoring plugins are a great choice for:

  • Proprietary applications
  • Technologies that Ruxit detects but doesn’t yet provide tech-specific insights into

Each custom plugin package consists of a Python script and a JSON file that includes all the metadata required to run the script (including libraries, if needed). Ruxit Agent manages and executes plugins, so installation of Ruxit Agent is required.

The Ruxit plugin framework is equipped with advanced troubleshooting capabilities to help you pinpoint any issues in your plugin setup—from syntax to semantic and communication errors.

Get started with the demo plugin

To make it easy to get started with plugin creation, the Ruxit SDK provides a demo plugin that monitors a simple Python-based demo web app. The plugin listens on localhost:8080 and responds to HTTP requests with a small JSON document that looks like this:

{"counter": 147, "random": 907}   

The Counter value increases with each hit on the page, while Random is a randomly-generated number.
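
The demo app itself isn't shown in this excerpt, so here is a minimal sketch of a web app with the same behavior (an illustration only, not the SDK's actual demo code):

```python
# Hypothetical stand-in for the Ruxit SDK's demo web app: serves a JSON
# document with a "counter" that increases on every hit and a fresh "random"
# value each time.
import json
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

COUNTER = 0

def build_stats():
    """Payload served on each request, e.g. {"counter": 147, "random": 907}."""
    global COUNTER
    COUNTER += 1  # increases with every page hit
    return {"counter": COUNTER, "random": random.randint(0, 999)}

class DemoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(build_stats()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve on the port the plugin polls:
# HTTPServer(("localhost", 8080), DemoHandler).serve_forever()
```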

Design… Develop… Deploy!

This post explains everything you need to know to get started building your first plugin. The high-level steps are illustrated below.

During the design phase you specify the metrics that you want to collect. For the demo Python app we’ll write a plugin that gathers two metrics: Counter and Random. If you want your plugin to raise alerts, you can define alerting criteria in the plugin.json file.

  1. Prepare the environment. Your DEV environment needs to be equipped with:
    • Python 3.5
    • Ruxit Agent
    • Ruxit SDK
  2. Download the Ruxit SDK. In Ruxit, go to Settings > Monitored Technologies > Custom extensions and click Download SDK.

    The Ruxit SDK is shipped as a zip archive containing the documentation and a Wheel package. You can install the SDK Wheel package using pip, for example:
    pip install ruxit_sdk-1.98.0.20160601.192338-py3-none-any.whl 
  3. Write the Python code and create the plugin.json file.
    The Python file defines one class, DemoPlugin, which in turn defines one method, query. When your plugin runs, this method is called once each minute to collect monitoring data and forward it to Ruxit Server. This particular query method does the following:
    pgi = self.find_single_process_group(pgi_name('ruxit_sdk.demo_app'))
    pgi_id = pgi.group_instance_id

    All Ruxit metrics are associated with an entity ID. Briefly, process snapshots are data structures that contain a mapping between running processes and their IDs, as assigned by Ruxit Agent. To send data, we need to search the snapshot; we use the find_single_process_group() method for this.

    stats_url = "http://localhost:8080"
    stats = json.loads(requests.get(stats_url).content.decode())

    Once we’ve acquired the ID, we need to gather the stats. This is easily achieved with the help of the requests package.

    self.results_builder.absolute(key='random', value=stats['random'], entity_id=pgi_id)
    self.results_builder.relative(key='counter', value=stats['counter'], entity_id=pgi_id)

    Finally, once we have both the data to report and the entities that the data are associated with, we can hand things over to the results_builder object. ResultsBuilder stores and processes the data that is sent to it: when you submit measurements as relative, ResultsBuilder automatically computes the difference from the previously reported measurement and reports only that difference.

    Now it’s time to take care of plugin.json.

    The plugin.json file begins with a few basic plugin properties: name, version, and type. The remaining properties (entity and processTypeNames) tell Ruxit Agent when it’s time to execute the plugin.

    "source": {
       "package": "demo_plugin",
       "className": "DemoPlugin",
       "install_requires": ["requests>=2.6.0"],
       "activation": "Singleton"
    },

    The first two lines list the package name and className that are to be imported and executed by Ruxit Agent. These properties correspond to the name of the Python file that contains the code and the name of the class defined in it.

    "metrics": [
      {
        "timeseries": {
          "key": "random",
          "unit": "Count"
        }
      },
      {
        "timeseries": {
          "key": "counter",
          "unit": "Count"
        }
      }
    ]
    }

    The metrics section describes the data that’s gathered by the plugin. This section mirrors both what the demo app serves and what the Python code collects.

  4. Build your plugin package. This step is easily accomplished with the help of the ruxit_build_plugin command, which is available in the Ruxit SDK. Execute the following command from the directory that contains your plugin files:
    ruxit_build_plugin

    You should see output similar to this:

    Plugin deployed successfully into /opt/ruxit/plugin_development/custom.python.demo_plugin
    Plugin archive available at /opt/ruxit/plugin_development/custom.python.demo_plugin.zip

    The plugin package is now available for upload to the server.

  5. Upload the plugin. Custom plugins can be uploaded to Ruxit Server in one of two ways: via the CLI or via the Ruxit UI (the choice is yours). Note that command line-based upload requires a token to authenticate the action. To generate the right token for your environment, go to Settings > Monitored technologies > Custom extensions. Then execute the following command via the CLI:

    ruxit_upload_plugin -t <generated token>

    You should see output similar to this:

    Starting net HTTPS connection: tenantname.live.ruxit.com.
    Plugin has been uploaded successfully.

    Or upload via the Ruxit UI. Go to Settings > Monitored technologies > Custom extensions.
    Once uploaded, your plugin will appear in the custom plugins list.

  6. Restart Ruxit Agent so that it can load and run the new plugin. Depending on your operating system, Ruxit Agent can be restarted in a few different ways. On Ubuntu, type:
    sudo service ruxitagent restart

    On Windows, go to Service Manager and restart the Ruxit Agent service, or type:

    net stop "Ruxit Agent" && net start "Ruxit Agent"

  7. Check the metrics in the Ruxit UI. Custom plugin metrics are displayed on Process metrics dashboards (accessible by clicking the Further details tab on related Process details pages).

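
Looking back at step 3: the relative/absolute distinction is worth internalizing, so here is an illustrative sketch of that behavior (a guess at the semantics described above, not the actual Ruxit SDK class):

```python
# Illustrative only -- not the real ResultsBuilder. Absolute values pass
# through unchanged; relative values are reported as the delta from the
# previous submission for the same key/entity pair.
class SketchResultsBuilder:
    def __init__(self):
        self._last = {}      # (key, entity_id) -> last raw value seen
        self.reported = []   # (key, entity_id, value) tuples "sent" upstream

    def absolute(self, key, value, entity_id):
        self.reported.append((key, entity_id, value))

    def relative(self, key, value, entity_id):
        prev = self._last.get((key, entity_id), value)  # first report -> delta 0
        self._last[(key, entity_id)] = value
        self.reported.append((key, entity_id, value - prev))
```

Submitting a counter of 147 and then 150 via relative() would report 0 (first sample) and then 3, matching how a monotonically increasing hit counter becomes a per-minute rate.
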
It’s deployment time!

Once you decide that your plugin is ready for distribution, you need to promote it from the Staging state to the Production state. Go to Settings > Monitored technologies > Custom extensions. Select the plugin and click the Upgrade now button.

Now deploy the plugin by copying the files generated by the ruxit_build_plugin tool to the plugin_deployment folder in the root of your Ruxit Agent installation, or use configuration management tools such as Puppet, Chef, or Ansible. Don’t forget to restart all instances of Ruxit Agent once you’re done.

While this post gives you a great head start on plugin development, for more advanced scenarios and references please refer to the Ruxit SDK documentation.

If you’re interested in participating in our Early Access Program, please send us an email.

The post Custom monitoring plugins now available (EAP) appeared first on #monitoringlife.

Tracking Changes to Your Website With Performance Monitoring

Perf Planet - Tue, 06/21/2016 - 11:51

More and more companies are going agile, developing software in rapid, incremental cycles. This results in small incremental releases, with each release building on previous functionality. Changes made in these incremental releases are continuously integrated with the existing code.

Testing is obviously very important in these agile environments to ensure that these frequent changes don’t result in errors. However, what we see is that in many cases the priority is on functional testing and not on ensuring the required level of performance.

As a result, every release can have a positive or negative impact on performance. These impacts may be difficult to spot, even if you are looking for them.

Most performance degradations are not that easy to identify. They might not be noticeable at all hours of the day, only when the application is used on a slow connection or from a remote location with high latency. Because of this, it may take days or weeks before you become aware that something is not quite right and much longer before you have identified exactly what release has introduced the problem.

We’ve developed a way to track the changes that are being made to your website without compromising performance. By including the release number within your HTML, our Insights feature will recognize that information and add it to the measurement data. With three years of available data history, you can easily track the impact of each deployment.
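
As an illustration of the idea, parsing a release number out of a page might look like this (the meta-tag name and the parsing approach below are invented for this sketch, not Catchpoint's actual convention):

```python
# Hypothetical illustration: pull a release number embedded in a page's HTML
# so it can be attached to measurement data. The "release" meta-tag name is
# made up for this example.
import re

def extract_release(html):
    match = re.search(r'<meta\s+name="release"\s+content="([^"]+)"', html)
    return match.group(1) if match else None

def tag_measurement(measurement, html):
    """Attach the release (if present) to a measurement record."""
    return {**measurement, "release": extract_release(html)}
```

With each measurement tagged this way, a performance regression can be lined up against the exact deployment that introduced it.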

The chart below demonstrates a performance degradation as well as increased Server and Load Balancer times. Because the server time increased, it’s clear that the decline in performance can be attributed to a change on the application side.

Using an on-premises solution also allows you to benefit from this level of visibility even before going live with new changes. You can install the device in your own network, where it can access your development and test environments. This provides valuable additional information to help you decide whether a change is ready for production. It will also reduce the time spent on nightly troubleshooting and increase customer satisfaction.

The post Tracking Changes to Your Website With Performance Monitoring appeared first on Catchpoint's Blog.

4 Azure secrets you’ll be glad to know about

Perf Planet - Tue, 06/21/2016 - 08:14

While AWS has ruled the cloud computing market since 2006, Microsoft is increasingly gaining traction with its Azure offering. Azure’s key differentiator is not the number of services it offers—it’s the extension concept that makes the difference.

Extensions make it easy to add value to your deployments—whether they feature Infrastructure-as-a-Service (i.e., virtual machines) or Platform-as-a-Service (i.e., website-only deployments).

Secret #1: Azure Site Extensions

Azure Site Extensions allow you to add functionality with a single mouse-click. For example, if you’re running PHP websites with MySQL, it’s almost certain that you’ll need to install phpMyAdmin. With Site Extensions, such an installation on Azure is easier than ever before.

Ease of phpMyAdmin installation is exceeded only by how great Azure itself is!

Extensions for Azure Web Apps are especially powerful as they build upon readily deployed artifacts. Deploy your website just as you’ve done in the past—extensions can take care of the rest!

Curious? Take a minute to browse through the available Azure site extension packages. You may be surprised by what you find.

Secret #2: Extensions for Azure VMs and VM Scale Sets

VMs aren’t the big game-changer they once were. While VMs have been widely adopted, scaling them remains a big challenge because certain components (for example, virtual networks and disks) must still be duplicated alongside the VMs themselves. Scale sets make things much easier by managing these components for you. Best of all, applied extensions are immediately available on all VMs within the scale set.

The Custom Script extension is the easiest way to automate Azure VM customization

Among my favorite VM Extensions is the Custom Script extension, which allows you to run any shell script on your Windows and Linux VMs (and Scale Sets). The Custom Script extension makes it simple to create Azure Resource Manager templates, which take your favorite base OS image and run your scripts on it.
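
As a sketch of what such a template resource can look like (expressed here as a Python dict for readability; the names, versions, and URL are illustrative, so check the current Azure documentation before relying on them):

```python
# A rough sketch of an Azure Resource Manager resource entry for the Custom
# Script extension on a Linux VM. All values below are illustrative.
custom_script_resource = {
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "name": "myVM/installMyApp",
    "apiVersion": "2016-03-30",
    "location": "[resourceGroup().location]",
    "properties": {
        "publisher": "Microsoft.Azure.Extensions",
        "type": "CustomScript",
        "typeHandlerVersion": "2.0",
        "settings": {
            # The script to download and the command that runs it.
            "fileUris": ["https://example.com/scripts/setup.sh"],
            "commandToExecute": "sh setup.sh"
        }
    }
}
```

Dropped into a template alongside a base OS image, a resource like this runs your customization script on every VM the template provisions.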

Azure resource templates are easy to export.

By the way, you can easily export your existing Azure resources as template files. This means resources that you created on the Azure portal can be easily re-built by using scripts and those template files.

Secret #3: You can run multiple web apps on a single host

Usually, PaaS deployments are perceived as single, independent entities.

If you assign multiple web apps to the same cloud service plan, they will all be hosted on the same virtual host. This means you can share resources between multiple web apps.

Host multiple web apps on a single virtual host. Even with PaaS deployments.

This is a great way of consolidating your PaaS resources among multiple deployed web apps.

Secret #4: Complete Azure performance management

A common misconception about cloud environments—be they IaaS or PaaS—is that they lack the visibility required for careful analysis and troubleshooting. This, however, is just a matter of using the right tool for the job.

Did you know that there’s an IIS in front of all Azure-based web apps?

The right tool for this job is Dynatrace Ruxit. Ruxit is a complete Azure performance management suite wrapped up in a single tool. It provides best-in-class visualization for service flows, call backtraces, and environment topologies (not to mention all the basics like host, network, and process metrics that make Ruxit the most effective solution available).

Ruxit shows the traffic on your IIS, before it hits your app servers.

Azure’s extension concept makes Ruxit installation as simple as a few mouse clicks—for both VMs and websites. All technologies, processes, and hosts in your application are auto-detected. You’ll have Ruxit fully integrated into your Azure IaaS or PaaS in under five minutes.

Deep insights

Regardless of whether you’re running a virtual machine on Azure or taking full advantage of a PaaS-deployment, Ruxit provides insights that you can’t afford to miss.

Understand the root causes of problems

Problem notifications only point out the existence of problems. What’s far more valuable is insight into the root causes of problems, not just their symptoms.

Backtrace to the controller level is fine, but backtrace to the user-action level is better!

Learning about a problem’s symptoms is often not very informative. To find the root cause of a problem you need a service-level backtrace that shows the sequence of service calls and user actions that preceded the problem.

The service backtrace above shows you the entire sequence of incoming requests, all the way back to the user action in the browser that initiated the sequence (not just the first point of contact on the server-side).

This is a revolutionary new level of insight into the services that power your application. Ruxit shows the entire request/response cycle, from the frontend to the database—all in a single integrated view.

What about Azure Application Insights?

Application Insights is an awesome tool that can vastly enhance your productivity. With its tight integration with Visual Studio, Application Insights is the number one choice of developers. Its strength lies in optimizing your development process and enabling easy troubleshooting of individual components.

Dynatrace Ruxit provides the logical extension of Application Insights. Once your application moves into production, it’s not so much about isolated components and services as it is about the interaction and communication between those components and services. This is exactly where Ruxit functionality shines brightest.

Ruxit and Azure Application Insights are not alternatives to one another—they are complementary tools that together cover all your performance monitoring needs, all the way from development to production.

Best-in-class Azure performance management

To successfully operate an Azure environment you need full insight into what’s going on behind the scenes—just as if that environment were running on your own physical infrastructure.

Dynatrace Ruxit provides all this, including monitoring of Azure Compute, Storage, Container Service, Virtual Networks, and Web Apps.

Secret #5 (of 4) a.k.a “One last thing”

Best of all, Azure isn’t limited to its own offerings. It handles public-cloud and hybrid-cloud scenarios with equal ease, and it’s the same with Dynatrace Ruxit. Go ahead: combine offerings from AWS, Azure, OpenStack, OpenShift, and whatever else comes to mind — Azure works great as a foundation for any infrastructure, and Dynatrace Ruxit has you covered.

The post 4 Azure secrets you’ll be glad to know about appeared first on #monitoringlife.

Another Branding Failure

QA Hates You - Tue, 06/21/2016 - 04:53

A couple weeks ago, I pointed out some flaws with inconsistent application of the trademark symbol. Today, we’re going to look at a failure of branding in a news story.

Can you spot the branding failure in this story?

After the refi boom, can Quicken keep rocketing higher?:

Quicken Loans Inc, once an obscure online mortgage player, seized on the refinancing boom to become the nation’s third largest mortgage lender, behind only Wells Fargo & Co and JPMorgan Chase & Co.

Now, with the refi market saturated, Quicken faces a pivotal challenge — convincing home buyers to trust that emotional transaction to a website instead of the banker next door.

Okay, can anyone not named Hilary spot the problem?

Quicken Loans and Quicken are two different things and have been owned by two different companies since 2002. For fourteen years.

Me, I know the difference because earlier this year I did some testing on a Quicken Loans promotion, and the developers had put simply “Quicken” into some of the legalesque opt-in and Terms of Service check boxes. So I researched it, and then made them use “Quicken Loans” in the labels instead.

After reading the story, I reached out to someone at Quicken Loans to see if they use “Quicken” internally informally, and she said $&#&^$! yes (I’m paraphrasing here to maintain her reputation). So maybe the journalist had some communication with internal people who used “Quicken” instead of the company name, or perhaps that’s what everybody but me does.

However, informal nomenclature aside, Quicken Loans != Quicken, and to refer to it as such could have consequences. If this story hit the wires and Intuit’s stock dropped a bunch, ay! Or something more sinister, which in this case means unintended and unforeseen consequences.

My point is to take a little time to research the approved use of trademarks, brand names, and company names before you start testing or writing about them. Don’t trust the developers (or journalists, apparently) to have done this for you.

Categories: Software Testing

QA Music: Where The Wild Things Are Running

QA Hates You - Mon, 06/20/2016 - 04:21

Against the Current, “Running With The Wild Things”:

I like the sound of them; I’m going to pick up their CD.

When the last CD is sold in this country, you know who’ll buy it. Me.

(Link via.)

Categories: Software Testing

Webcast: Practical color theory for people who code

O'Reilly Media - Fri, 06/17/2016 - 20:05
Natalya Shelburne breaks down color theory basics the developer way—by abstracting away her domain knowledge as an artist into variables and functions and sharing that information—to demystify design decisions, revealing them to be logical, predictable, and driven by principles that anyone can learn.
