Feed aggregator

Pokemon Go Performance Issues – Gotta Catch Them All

Perf Planet - Wed, 07/27/2016 - 14:20

Pokemon Go has stormed onto the scene this past week and the buzz is everywhere about it! However, being so successful in such a brief period of time can have a downside. Reports of players being unable to access services started appearing in relatively short order. Amazon’s CTO jumped into the fray offering their cloud […]

The post Pokemon Go Performance Issues – Gotta Catch Them All appeared first on about:performance.

Categories: Load & Perf Testing

As A Wise Man Once Said….

QA Hates You - Wed, 07/27/2016 - 08:28
Categories: Software Testing

5 Best PHP Frameworks for Agile Software Development

Perf Planet - Wed, 07/27/2016 - 03:26

Agile software development (ASD) is a technique that builds flexibility into requirements and emphasizes practicality in delivering the final product. In other words, it is a method that delivers through an incremental model with short, rapid cycles. Product releases ship quickly, with thorough testing to maintain the quality of the software. Agile development is often employed for critical applications.

There are multiple reasons to believe that agile development is necessary nowadays; the benefits associated with it are ample proof of its significance. Stakeholder engagement, transparency, rapid delivery, lower cost, and room for change are the major reasons agile development is comparatively better suited to current needs. But it should never be forgotten that to benefit from agile development to the fullest, hard work and sustained human effort are required; nothing in the world is available for free.

Here we present the top five PHP frameworks best suited to enterprise applications. Developers must be sagacious enough to select a framework that meets client needs and ensures maintainability, stability, high performance, and complete functionality. Let’s have a look at the details of the five best frameworks for agile development.


Laravel

If we categorize the best agile development frameworks, Laravel is among them. It is not only free and open source but also very popular and widely used by developers. It is a fine framework for applications based on enterprise development. Its highly elegant syntax lets developers handle their projects with great ease, irrespective of their nature and size. Here are the features integrated into the framework:

  • Intelligent design that offers great flexibility, letting developers build applications of any size, from small projects to giant ones
  • Smooth routing that manages all server routes efficiently
  • Composer support that lets developers manage third-party packages
  • Built-in testing ability
  • Well suited to creating web applications
  • Easy database access


Zend

This is another PHP framework well suited to enterprise application development. When it comes to features and prominent attributes, hardly any framework can match it. Developers already skilled in PHP benefit from it most, because it does not cater to novice or intermediate software developers. The important features are listed below:

  • Workflows that are designed and developed to a high standard
  • The ability to produce high-quality, enterprise-level agile applications
  • Cloud support on servers, which makes it the choice of many developers for deploying apps
  • An excellent streamlining and automation process that puts the framework a step ahead
  • A connected database wizard for specialized database connections
  • Instant online debugging


Phalcon

Phalcon is the best PHP framework for developers who want to speed up their work and build applications quickly. Developers can use this framework in combination with other frameworks as well, and it offers great support for all sorts of application development. The features that Phalcon offers are listed below:

  • An ORM that is easy to learn
  • Little overhead compared with other frameworks
  • A reliable, clean, and intuitive API with powerful design patterns
  • Easy creation of software through dependency injection, which makes it possible to test applications at any time


Yii

If you are looking for a PHP framework that is not only feature-rich but also focuses on the beauty of the code while letting you perform all the work in the background, Yii should be your first choice. It is a very robust framework with strong AJAX support, and developers can easily write code to suit the needs and conditions of their applications. The framework not only works with third-party applications but also helps you deal with errors. The features of the Yii framework are as follows:

  • Robust caching system for fast loading of web applications
  • When it comes to security, Yii is at its best.
  • Superb support in writing and handling tests
  • Detailed documentation to support developers whenever they get stuck during development
  • Built-in authentication support for developers and users
  • Additional support for translations, date and time handling, and number formatting
  • Last but not least, it is compatible with both relational and non-relational databases


Symfony

This is the final PHP framework on our list of the top frameworks for agile development. Symfony is released under the MIT license and is well suited to MVC applications. As for its technical objectives, it is primarily used for the fast creation and maintenance of web applications.

Developers who need to create robust enterprise-level applications mainly use Symfony. With the help of this framework, developers get full control over configuration in the easiest manner. For enterprise applications, the additional features and tools provided in Symfony are a great help to developers. Its creation and launch were inspired by previous frameworks. The best feature of Symfony is its components, which can be downloaded individually for use in various projects.

The Bottom Line

This write-up has covered the best PHP frameworks for agile development of enterprise applications. The first section explained the importance of agile development; we then presented the best frameworks, each with features, attributes, and characteristics that distinguish it from the others. From the whole discussion, we have concluded that the Zend and Yii frameworks are the most influential on the list, and they offer great ease for developers in the creation of applications.

The post 5 Best PHP Frameworks for Agile Software Development appeared first on artzstudio.

‘Dynatrace Ruxit’ becomes ‘Dynatrace’

Perf Planet - Tue, 07/26/2016 - 19:30

Now, nearly two years since the launch of Dynatrace Ruxit—thanks to the extraordinary success and value that Dynatrace Ruxit brings to our customers—it’s time for a product name change that reflects the deepest possible commitment of the world’s #1 performance management vendor. Going forward, Dynatrace Ruxit will be known simply as ‘Dynatrace’, same as our company name.

This is an exciting time for us to move the Dynatrace Ruxit technology and architecture to the forefront of our full-stack digital performance monitoring platform. This unification represents a giant step forward in bringing you the world’s most innovative and complete AI-driven monitoring solution. The Dynatrace platform provides everything your organization needs to modernize its operations, accelerate innovation, and optimize the customer experience of your digital business.

How will this affect my use of Dynatrace Ruxit?

This is only a product name change, so don’t worry—this won’t affect the functionality of Dynatrace Ruxit at all. This is a language change that only affects wording in the UI, Help, and some resource names. Your environment bookmarks (for example, myenvironmentid.live.ruxit.com) will continue to work until the end of the year. Within a couple of weeks you’ll be able to update your bookmarks to the new equivalent URLs (for example, myenvironmentid.live.dynatrace.com), but there’s nothing you need to do right now.

You’ll also eventually need to update all API endpoints that your organization relies on (for example, {myenvironmentid}.live.ruxit.com/api/v1/timeseries should now be accessed at {myenvironmentid}.live.dynatrace.com/api/v1/timeseries).

Note: Existing URLs will not be accessible after January 1, 2017.
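Because the change is a pure host swap, stored endpoint URLs can be migrated mechanically. Here is a minimal Python sketch (the helper is illustrative, not a Dynatrace-provided tool):

```python
def migrate_endpoint(url: str) -> str:
    """Rewrite a legacy Ruxit API endpoint to its Dynatrace equivalent.

    Only the host changes; the environment ID, path, and query string
    are preserved as-is. URLs already on the new host pass through untouched.
    """
    return url.replace(".live.ruxit.com", ".live.dynatrace.com")


old = "https://myenvironmentid.live.ruxit.com/api/v1/timeseries"
print(migrate_endpoint(old))
# https://myenvironmentid.live.dynatrace.com/api/v1/timeseries
```

Running a helper like this over any configuration files or scripts that store API endpoints should complete the migration before the old URLs stop working.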

Continuous product innovation

This product name change should be welcome news to you. Aligning the product name ‘Dynatrace’ with our company name demonstrates our long-term commitment to continuous, substantial investment into Dynatrace Ruxit product development. By doubling down on our commitment to Dynatrace Ruxit you’re assured of benefiting from ongoing innovation and industry firsts. Here are a few highlights from our product roadmap:

  • Custom dashboards and reporting
  • End-to-end transaction analysis (PurePath)
  • User behavior and application analytics

Stay tuned for updates on these great features and more from the Dynatrace team.

The post ‘Dynatrace Ruxit’ becomes ‘Dynatrace’ appeared first on #monitoringlife.

Introducing the Time frame selector

Perf Planet - Tue, 07/26/2016 - 19:28

A long-awaited feature is finally here! In combination with our new support for multiple dashboards, a globally available Time frame selector is now also available.

As this is the first release of the Time frame selector, support is currently limited to dashboards and other key pages. The Time frame selector will eventually be available application-wide.

You’ll find the Time frame selector in the upper-right corner of all pages that support time frame selection:

Use of the Time frame selector is straightforward and not much different than what we use for time-frame selection in charts. Just select the time frame you’re interested in analyzing. The difference is that the new Time frame selector affects data page-wide.

Previous and Next buttons enable you to move forward and backward in time.

Time frame selection is sticky—selected time frames are propagated to all pages you visit where time-frame selection is supported. For example, after changing the analysis time frame on your home dashboard, time-frame selection is preserved as you drill down from the Applications tile to individual application pages. The exception is when you click the Dashboard button in the header. The Dashboard button resets the time frame to your current dashboard’s default time frame.

Custom dashboards can be built for many different purposes, for example, infrastructure health or real user monitoring. This is why it makes sense to have different analysis time frames for different dashboards—you don’t have to reset the time frame to the default each time you switch dashboards.

Recently used time frames

In some instances it makes sense to switch back to a recently used time frame. The Time frame selector includes a list of the most recently used time frames (up to 10).
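The behavior described here, a bounded list of most recently used items, is a common pattern. A generic Python sketch of the idea (an illustration only, not Dynatrace's implementation):

```python
def remember(recent, item, limit=10):
    """Move `item` to the front of the recently-used list,
    removing any duplicate and trimming the list to `limit` entries."""
    recent = [item] + [t for t in recent if t != item]
    return recent[:limit]


history = []
for tf in ["Last 2 hours", "Last 7 days", "Last 2 hours", "Last 30 days"]:
    history = remember(history, tf)
print(history)  # most recent first
# ['Last 30 days', 'Last 2 hours', 'Last 7 days']
```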

In the near future, we’ll expand the capabilities of the Time frame selector, not only by increasing the number of supported pages, but also by enabling you to set custom time ranges.

Your feedback is very important to us. Tell us what you think about the new Time frame selector.

The post Introducing the Time frame selector appeared first on #monitoringlife.

User action detail page enhancements

Perf Planet - Tue, 07/26/2016 - 19:27

With the latest release of Dynatrace we’ve greatly enhanced our user action detail pages with their own look and feel that distinguishes them from application pages. User action detail pages are now more compact and provide quick access to all relevant data.

To access user action detail pages:
  1. Select Web applications from the Dynatrace menu.
  2. Select an application.
  3. Scroll down to the Top consumers section and click View full details.
  4. Select an action listed on the Top consumers page.

Once on a user action detail page, click areas of the summary infographic at the top of the page to navigate to detailed charts related to each summary metric, or type a string into the Filter user types field to view real-user or synthetic-monitoring specific data.


Contributors breakdown chart

When it comes to analyzing user actions, one of the main questions to address is: on which tier is most of the response time consumed? Was more time spent on the frontend (i.e., mainly the browser), the network, or the server? Now, as a complement to the existing Action duration contributors chart on each application page, we’ve added an always-visible Contributors breakdown chart. This chart gives you a quick overview of time spent on the frontend, the network, and the server. For more detail, click the View resource impact in waterfall button to see which resources impact the action duration and the Document interactive time milestone.


Key user actions

A typical web application has some user actions that are more important than others. Examples include login, checkout, and special landing pages. Other user actions may be very complex and have relatively long durations compared to other pages. Consider, for example, how complex an airfare search action on a travel website can be. Such business-critical user actions can be marked as “key” user actions (formerly known as “high priority” user actions). For such key user actions you may want to define specific Apdex thresholds. While Dynatrace anomaly detection automatically detects performance degradations and increases in errors across all your user actions, it’s still a good idea to give your key user actions special handling. By defining a regular user action as a key user action, you get long-term historical trends and can specify an appropriate Apdex threshold for the action. Key user actions can be pinned to your dashboard for quick access and accessed via the Key actions tab for comparison with other actions.


The post User action detail page enhancements appeared first on #monitoringlife.

Create and share custom dashboards!

Perf Planet - Tue, 07/26/2016 - 19:26

Dynatrace custom dashboards can now be cloned and shared. Time-frame selection and support for simultaneous use of multiple dashboards is now also available.

Different needs, different dashboards

Modern digital businesses are application-based. Businesses rely on applications to deliver services and to automate and optimize sales, marketing, customer relations, and more. The thing is, every application is different. They feature varying components, resources, connected services, and infrastructure deployments. Not to mention, every application development and operations team is different. This is why Dynatrace has long offered a fully customizable home dashboard.

Multi-dashboard support

Dynatrace dashboard management is simple. You’ll find a new Dashboards entry in the Dynatrace menu. This link takes you to your Dashboards page where you can easily create, modify, delete, and switch between dashboards.

To create your own dashboard, click Create dashboard, type a name for the dashboard, and click Create.

Here’s an overview of the major changes:

  • The new Time frame selector enables you to switch between different time frames. You can set a different default time frame for each dashboard.
  • Dashboards can be shared or cloned.
  • You can operate multiple dashboards in multiple browser tabs at the same time.
  • The Add tile and Add section buttons have been moved into the dashboard context menus.
Time frame selector

You’ll find dashboards to be even more useful now that you can adjust their analysis time frames. The new Time frame selector enables you to select a specific analysis time frame for each dashboard.

You may need varying default analysis time frames for your dashboarding purposes. For example, you can set a default time frame of Last 7 days for your Real user monitoring dashboard and Last 2 hours for your Infrastructure health monitoring dashboard. You’ll find each dashboard’s Default time frame button when in edit mode.

But that’s just the beginning! Read more about the new Time frame selector.

Share your dashboards

Ever wanted to share a custom dashboard with your team or management? Now you can. Just select Share dashboard from the context menu, copy the provided link, and share the link with team members who are logged into their own Dynatrace accounts within the same Dynatrace environment. Dashboards cannot be shared publicly or across environments.

Only dashboard owners can modify and delete dashboards, so you don’t have to worry about other team members changing your shared dashboards. Team members only have read-access, which enables them to clone dashboards.

When you receive a custom dashboard that you want to modify, clone the dashboard. As the owner of the cloned dashboard, you can now modify it any way you want.

Multiple dashboards, multiple tabs

With a few dashboards at your fingertips, you may feel the urge to have all your dashboards available at once in multiple browser tabs. Go for it! This is perfectly okay.

Home dashboard

You may have noticed that we’ve eliminated the “Home” entry in the breadcrumb navigation. The new Home dashboard button is all you need now to return to your home dashboard whenever you want. To illustrate the value of this feature, imagine that you’ve drilled down through a few pages in the course of investigating the root cause of a problem. To return to your home dashboard, just click the Home dashboard button. This leads you right back to the dashboard you opened in the current browser tab.

As always we’re eager to hear your feedback. Feel free to share your thoughts with us.

The post Create and share custom dashboards! appeared first on #monitoringlife.

Webcast: Designing Data Layers for Modern Web and Mobile Apps

O'Reilly Media - Tue, 07/26/2016 - 19:10
Learn how you can leverage both relational and NoSQL databases, as well as managed services, to improve your apps and provide compelling systems of engagement.

Webcast: Best practices for using predictive analytics to extract value from Hadoop

O'Reilly Media - Tue, 07/26/2016 - 19:10
Zoltan Prekopcsak outlines the best practices that make life easier, simplify the process, and implement results faster, helping you organize approaches and select the right approach for the task.

Redesigned application overview pages

Perf Planet - Tue, 07/26/2016 - 19:09

Dynatrace is proud to present completely redesigned application overview pages. The new application pages are more compact and present a lot more information—making it unnecessary to switch between views to find the information you need. Beyond interface and navigation updates, we’ve also added some awesome new features.


The new application overview pages combine the Performance analysis and User behavior analysis infographics and content sections into a single integrated design. You can still switch between performance analysis and user behavior analysis, but you no longer need to switch views to do so.


Performance analysis view

Tags and filtering

Assigned tags are displayed at the top of each page within the expandable Properties, tags, and JavaScript frameworks area. It’s easy to add new tags here—just click Add tag (for details about tagging, go to Settings > Tagging). There’s also a filter text field that enables you to filter results based on user type. The filter affects all metrics on the page.


Segmentation analysis 

The left-hand portion of the performance analysis infographic shows dimensional breakdowns of your application traffic based on browser type, user type, and geographic region. The top finding for each dimension is displayed by default.

Beyond a few interface improvements, most of the detail sections behind the infographic metrics haven’t changed. For example, you can still directly access historical table data related to 3rd-party and CDN content providers to see if there have been any changes over time.

Application availability

If you have one or more web checks set up to test the availability of your application, you can now see the availability of your application directly on the application overview page. The availability chart shows you an aggregate view of all of the selected application’s web checks. You can click the View full details button to view details related to outages and navigate directly to web check details (your time-frame settings will remain intact) to perform further analysis.


JavaScript errors

The Top JavaScript errors section is now always visible by default. Click View full details to view a timeline distribution of errors and their impact on user actions.

Top consumers and key user actions

At a glance, you can see the Top included domains that your application relies on in addition to the user actions that are the Top consumers of response time (based on action duration and action count). Click the View full details button to view the top 100 user actions based on time consumption. You’ll find various user-action specific metrics here, including Duration, Actions per minute, and JavaScript errors.

You can also select an alternative criterion by which to display the top 100 user actions (for example, action duration or time consumed).

As Time consumed is calculated by multiplying action duration by action count, your slowest user actions may not appear when displaying based on time consumption. The slowest user actions will appear in the list if you display the top 100 based on action duration (the slowest 100 user actions). If you’ve selected a long time frame, click anywhere in the chart to adjust the analysis time frame.
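The difference between the two rankings follows directly from the multiplication described above. A short Python sketch with hypothetical data makes it concrete:

```python
actions = [
    {"name": "/search",   "avg_duration_s": 4.0, "count": 10},
    {"name": "/homepage", "avg_duration_s": 0.8, "count": 5000},
]

def time_consumed(a):
    """Time consumed = average action duration x action count."""
    return a["avg_duration_s"] * a["count"]

# Ranked by time consumed, the fast-but-frequent action dominates...
by_consumption = sorted(actions, key=time_consumed, reverse=True)
# ...while ranked by duration, the slow search action comes first.
by_duration = sorted(actions, key=lambda a: a["avg_duration_s"], reverse=True)

print(by_consumption[0]["name"])  # /homepage (0.8 s x 5000 = 4000 s consumed)
print(by_duration[0]["name"])     # /search   (4.0 s x 10 = only 40 s consumed)
```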

Most web applications include some user actions that are particularly important to the success of your digital business (for example, signup actions, checkouts, or product searches). Such “key” user actions may take longer to execute than other user actions and therefore require special treatment (or they may be required to have shorter-than-average durations).

Say for example that you’ve set your global Apdex threshold to 3 seconds. While this threshold may be acceptable for the majority of user actions, it may not be acceptable for a signup user action that must have a shorter duration. Conversely, maybe you have a search action that is complex and requires a longer threshold of 5 seconds.
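Apdex itself is a standard formula: with threshold t, responses up to t count as satisfied, responses up to 4t as tolerating, and the score is (satisfied + tolerating/2) / total. A small Python sketch using the 3-second and 5-second thresholds from the example above (sample durations are made up):

```python
def apdex(durations, t):
    """Standard Apdex score: satisfied <= t, tolerating <= 4t, else frustrated."""
    satisfied = sum(1 for d in durations if d <= t)
    tolerating = sum(1 for d in durations if t < d <= 4 * t)
    return (satisfied + tolerating / 2) / len(durations)

samples = [1.2, 2.8, 3.5, 9.0, 14.0]  # hypothetical action durations in seconds
print(apdex(samples, t=3.0))  # 0.6 with the global 3 s threshold
print(apdex(samples, t=5.0))  # 0.8 with a relaxed 5 s threshold for a complex search
```

This is why a per-action threshold matters: the same traffic scores very differently depending on which threshold the action is held to.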

The new key user action feature enables you to customize the Apdex thresholds for such actions. It also enables you to monitor key actions with a dedicated dashboard tile and track historic trends.

To define an Apdex threshold for a key user action
  1. Select Web applications from the left-hand Dynatrace menu.
  2. Select an application.
  3. Scroll down to the Top consumers section.
  4. Click View full details.
  5. Select a user action from the Top actions list.
  6. On the user action page, click Mark as key user action.
    The user action now appears in the Key actions list.
  7. To define a threshold for this action, select it in the Key actions list.
  8. On the user action page, click Browse (…).
  9. Select Edit to access the Apdex threshold page for this action.


JavaScript errors

If you scroll down to the Top JavaScript errors section and click View full details you can click in the timeline chart and select an analysis time frame to see when specific JavaScript errors occurred. You might use this for example when you arrive at the office on Monday morning to see what JavaScript errors occurred over the weekend.

User behavior analysis

To analyze user behavior use cases, click the User behavior infographic at the top of the page. Here you’ll find a range of filtering options for user behavior data. We’ve added a new section to this infographic, Overall conversion.

Conversion goals

You can configure one or more conversion goals for specific user actions to understand how successfully you’re meeting your conversion milestones (for example, successful checkouts, newsletter signups, demo signups, etc). Goals can be defined for reaching specific user actions or destination URLs and also for session information (for example, session duration longer than 5 minutes or sessions with more than 10 user actions). For the latter example, a user would need to complete at least 10 user actions in a single session to reach this conversion goal.
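A hypothetical sketch of how session-based goals like these could be evaluated, assuming sessions are simple records (this models the idea, not Dynatrace's actual data model):

```python
def meets_goal(session, min_duration_s=None, min_actions=None, target_action=None):
    """Return True if the session satisfies every configured criterion."""
    if min_duration_s is not None and session["duration_s"] < min_duration_s:
        return False
    if min_actions is not None and len(session["actions"]) < min_actions:
        return False
    if target_action is not None and target_action not in session["actions"]:
        return False
    return True


session = {"duration_s": 420, "actions": ["home", "search", "checkout"]}
print(meets_goal(session, min_duration_s=300))        # True: longer than 5 minutes
print(meets_goal(session, min_actions=10))            # False: only 3 actions
print(meets_goal(session, target_action="checkout"))  # True: reached the target action
```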

At the bottom of the page you’ll find charts and data related to Top conversion goals, Top bounces, and Top entry and exit actions.


As explained above, most applications have certain workflows and user actions that are more important to your business success than others. Typical examples are subscription actions, checkouts, and newsletter registrations. You should carefully monitor these user actions, not only from a technical point of view—to ensure that everything is working as expected and that action durations are acceptable—but also to see how many customers are actually using the features. This can be achieved by following the key action and conversion goal approach.

Goals are defined per application. Goals can be based on destination URL, specific user action, session duration, or number of actions per session.

To define a conversion goal for an application
  1. Select Web applications from the left-hand Dynatrace menu.
  2. Select an application.
  3. On the application page, click Browse (…).
  4. Select Edit.
  5. Select Goals from the Settings menu.


From the Top conversion goals chart, click View full details to review overall metrics related to your conversion goals or to analyze progress toward specific goals.



To date, we’ve provided overview charts that show how bounce rates correlate with action duration and JavaScript errors, but it hasn’t been possible to see which user action has the most bounces, or to compare the duration of bounced actions with the same user actions in sessions where the actions didn’t bounce.

All of this and more is now possible with the latest release. The Top 3 bounces chart is included on the application overview page. Click View full details to view the full list and associated metrics for each bounce. Here you can compare the durations and JavaScript errors of bounced actions with the same action in other sessions in which the actions didn’t bounce.


Entry and exit actions

Ever wondered which of your application’s entry and exit actions are the slowest or most popular? Not necessarily the special landing pages that you’ve set up, but the pages that your users actually use to enter and exit your application?

At the bottom of the application overview page you’ll find the Top entry and exit actions section. Select a specific action here or click View full details. You’ll find historical trends for all entry and exit actions here.


The post Redesigned application overview pages appeared first on #monitoringlife.

Network overview page & dashboard tile

Perf Planet - Tue, 07/26/2016 - 19:07

According to our customers, Dynatrace infrastructure monitoring is best-of-breed. One of the reasons for this is our comprehensive network monitoring. We’ve enhanced our platform even further with two valuable and intuitive network-monitoring features, a Network status dashboard tile and a powerful Network overview page.

Network status tile

You’ll find the new Network status tile by default in the infrastructure section of all newly created dashboards.

The Network status tile shows you the activity and health of your network nodes (the hardware and software modules that transmit your network traffic). On the left side of the tile you’ll see the number of healthy and unhealthy hosts and processes (i.e., Talkers) from a network quality and connectivity perspective. Note that in the case of very large environments, only host information may be available.

Any network nodes that are experiencing a network problem (for example, packet drops, traffic overload, high retransmission rate, or connectivity issues) are indicated in red. On the right side of the tile you’ll see a visualization of the traffic volume that’s flowing through your monitored infrastructure.

Network overview page

The new Network overview page can be accessed either by clicking the Network status tile on your dashboard or by selecting Network from the Dynatrace menu on the left.

The Network overview page is organized around three fundamental aspects of network communication. These three network-communication perspectives—each quite different from the other—define goals for fast and reliable network communication, both in physical and cloud-based architectures:

  • Traffic – Quantity of data exchanged between network nodes.
  • Retransmissions – Quality of communication from the transmission control protocol point of view (i.e., retransmissions caused by dropped packets).
  • Connectivity – How well network nodes can communicate with one another.
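Of these three, the retransmission metric is the easiest to make concrete: the rate is simply retransmitted packets divided by packets sent. A small Python sketch (the counts and the alerting threshold are illustrative, not Dynatrace's):

```python
def retransmission_rate(packets_sent, packets_retransmitted):
    """Fraction of sent packets that had to be retransmitted (0.0 if idle)."""
    if packets_sent == 0:
        return 0.0
    return packets_retransmitted / packets_sent


rate = retransmission_rate(packets_sent=50_000, packets_retransmitted=1_250)
print(f"{rate:.1%}")  # 2.5%
if rate > 0.01:       # flag anything above an illustrative 1% threshold
    print("high retransmission rate")
```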

To focus on the data and metrics associated with these three network-communication perspectives, click the Traffic, Retransmissions, or Connectivity areas of the network overview infographic.

Beneath the infographic on the Network overview page you’ll see an interactive timeline chart that you can click into to adjust the analysis time frame.

Beneath the timeline you’ll find three tabs that offer detail about the metrics that were used to calculate the KPIs featured above in the infographic. These tabs include Hosts (i.e., talkers), Interfaces, and Processes. To understand how these metrics fit together, consider that all network data exchange is performed by software modules (Processes) that run inside physical or virtual computers (Hosts) and communicate with each other over physical or virtual network devices (Interfaces).

By selecting one of these tabs and clicking on a row of interest you can quickly perform analysis of the most important network metrics on any host, interface, or process in your monitored infrastructure. Note that in case of very large environments, only host information may be available.

In the example above we quickly isolated a problematic node that was experiencing connectivity problems by first clicking the Connectivity area of the infographic at the top of the page and then selecting the Apache on VM3a row entry in the Environment details section at the bottom of the page. Note the red bar indicating the connectivity problem.

The post Network overview page & dashboard tile appeared first on #monitoringlife.

Website Speed Optimization Guide for Google PageSpeed Rules

Perf Planet - Tue, 07/26/2016 - 14:06

Page speed (site speed) is the speed at which your site and its pages load. It can be described as “page load time” (the time it takes to fully display the content) or “time to first byte” (how long it takes for your browser to receive the first byte of information from the web server). How you define page speed usually matters less than the key question: how can we get pages to work better and faster?

The IT world gets faster with each passing day, and now it's all about speed and performance. Users on the internet don't have time to sit back and wait for a website to load. According to a Caltech survey, more than 90% of users close a website if it doesn't load properly within 7 seconds. Speed optimization is therefore crucial if you want your website to stay among the fastest in the market!

Site page speed depends on many factors. For a site that works lightning fast, you should consider all of them; neglecting even one will slow your site down. In this article, we have researched and collected these factors according to Google's PageSpeed rules. We'll discuss the problems, solutions, and tips for making your website work faster.
Let's look at each factor and discuss it in detail:

Get Your CSS Perform Better

CSS is very important when we discuss page speed and ways to improve it. This is how you can make your CSS better:

1) Eliminate render blocking CSS

Render means to display, and "render blocking" means something that delays the page from being displayed. Render-blocking CSS is one of the most common causes of page delay. In fact, every one of your CSS files delays the page: the bigger your CSS gets, or the more CSS files you have, the longer the page takes to load. Render-blocking CSS means a slower website, fewer visitors, and lower AdSense earnings.

      • Properly call CSS

While calling CSS from your files, you should take care of certain things. For example, you should not use @import to call your CSS files. Instead, add your external CSS to the main file, or use the <link> tag to call your CSS.

      • Use less CSS files overall

Another way to eliminate render-blocking CSS is to use less CSS overall. There are two ways to do this: first, combine your CSS files into one big file; second, use inline CSS in the HTML. This also reduces the size of your main CSS file.

These tips for removing render-blocking CSS are only useful when you have written the CSS yourself. If you have purchased a theme or are using a template, first confirm that it doesn't use any render-blocking CSS.

2) Optimize your CSS

When using CSS, make sure it is optimized. Optimized CSS for a website has the following properties:

  • The total size of all CSS files should be less than 75 KB.
  • There should be no @import CSS calls.
  • Inline CSS should be placed in <style> tags.
  • CSS should not be written inline inside HTML tags such as <div>.
  • All CSS files should be combined into one big file.
3) Minify CSS

In addition to following the instructions above, you should also minify your CSS. Minifying CSS means making the CSS file smaller so it loads faster. To minify it, write as much CSS code per line as possible. For example:

Here is an example of non-minified CSS:


body { background-color: #d0e4fe; } h1 { color: orange; text-align: center; }

And here is the same CSS, minified:

body{background-color:#d0e4fe;}h1{color:orange;text-align:center;}

You can minify your CSS with an automatic tool or by hand.
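To make the transformation concrete, here is a naive minifier sketch in JavaScript. It is an illustration only, not a production tool; real minifiers such as cssnano or csso handle comments inside strings and many other edge cases.

```javascript
// Naive CSS minifier: strips comments and collapses whitespace.
// Illustration only -- real minifiers handle far more edge cases.
function minifyCss(css) {
  return css
    .replace(/\/\*[\s\S]*?\*\//g, "")   // drop /* ... */ comments
    .replace(/\s+/g, " ")               // collapse runs of whitespace
    .replace(/\s*([{}:;,])\s*/g, "$1")  // remove spaces around punctuation
    .trim();
}

const input = "body { background-color: #d0e4fe; }\nh1 { color: orange; text-align: center; }";
console.log(minifyCss(input));
// body{background-color:#d0e4fe;}h1{color:orange;text-align:center;}
```

Running a file through a few regexes like this typically shaves 10-20% off hand-written CSS; dedicated tools do better.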

4) Always call CSS first


In your HTML page, always call CSS before JavaScript, images, or anything else; this makes your pages load faster. Browsers always download and parse the complete CSS file before rendering the view, so if your images or JavaScript files come first, the browser downloads them before reaching your CSS file, which can add a delay of up to 2 seconds in the worst cases.
Here are simple, straightforward rules of thumb about delay times in this case:

  • If CSS is called first, the browser gets it within about one second.
  • If CSS is called second, after JavaScript, page load time grows to about two seconds: one second to retrieve the JavaScript and one to retrieve the CSS.
  • If the CSS file is called after images and JavaScript, the browser gets it in about three seconds, and with more or heavier images it may take even longer.

This point is important to remember: in the very worst cases this delay can reach seven seconds, which will hurt your traffic drastically.

5) Call CSS from one domain only

As browsers and CSS have improved, using online CSS resources has become common. Although convenient, this slows your page load as well, because the browser has to fetch CSS from several domains, and files coming from different domains and subdomains increase response time. Try to call all CSS files from one domain only; you can download online CSS resources and host them yourself. This reduces your website's page load time so it opens lightning fast.

By following this simple CSS advice, you can increase your page speed substantially. If you are a developer, you can easily apply it yourself. If you are using a theme or template, you may be able to change most of these things, but you should still talk to the designer to get all the CSS implemented properly.

Fix Your JavaScript

The JavaScript factor is very important for page speed, because JavaScript/jQuery files are usually the largest files on any website. There is a lot you can do on the JavaScript side to make page rendering faster. Let's look at each technique.

1) Eliminate Render Blocking JavaScript

Render means to display, so if something is render-blocking, it is keeping the page from loading as quickly as it could. Google recommends removing any JavaScript that interferes with loading the above-the-fold content of your web pages. "Above the fold" means what a user sees on the screen before they scroll down.

Eliminating render-blocking JavaScript matters because developers often use the jQuery library, and jQuery files can be ten times the size of the original HTML file. Deciding when to load JavaScript is therefore important; otherwise your site gets slow and makes your users wait. Note that this also causes SEO problems, as Google ranks pages lower in search results when they make users wait longer.

To resolve this, load your JavaScript after the whole page has loaded. Place your JavaScript file links or scripts just before the closing </body> tag, so they are downloaded and parsed after the rest of the page, which makes your page faster.

2) Combine external JavaScript to make one file

JavaScript is usually written in several external files, and this causes page delays: to fetch and read all the files, the browser makes a separate request for each one.

This problem is easily resolved by combining all the JavaScript into a single file. Simply copy the JavaScript from all the files into one file and call that file instead. As long as the files are concatenated in their original order, the code will still work fine.

After doing that, check your web page speed; it should load faster. In our recent tests, combining JavaScript had a significant impact on page speed.

3) Use Inline JavaScript

JavaScript placed inside the HTML file using <script> tags is called inline JavaScript. Use inline JavaScript when your code is small and doesn't need to live in a separate JavaScript file.

Inline JavaScript has several benefits: it saves you the hassle of handling extra files and improves page speed, because the browser doesn't need to make extra requests to fetch JavaScript files.

4) Defer loading JavaScript

Following the tips above helps your pages load faster, but this technique can improve things further. The idea is to split your JavaScript into two files: the first contains the code required to display the page properly; the second contains code that is only needed after the page is displayed, for example click handlers. Call the first file as usual, so the browser loads it while rendering the page, and load the second file only after the page has been displayed.

The question, then, is how to call the second file after page load. Placing the call at the end of the page is not enough; use the following method instead.

Part #1 Copy the code below:

<script type="text/javascript">
function downloadJSAtOnload() {
  var element = document.createElement("script");
  element.src = "defer.js";
  document.body.appendChild(element);
}
if (window.addEventListener)
  window.addEventListener("load", downloadJSAtOnload, false);
else if (window.attachEvent)
  window.attachEvent("onload", downloadJSAtOnload);
else window.onload = downloadJSAtOnload;
</script>

Part #2 Paste the code into your HTML just before the </body> tag (near the bottom of your HTML file).

Part #3 Change "defer.js" to the name of your external JS file.

Part #4 Ensure the path to your file is correct. For example, if you just put "defer.js", then the file defer.js must be in the same folder as your HTML file.

Implemented this way, the change produces very good results, and your page speed will increase considerably.

As we've seen, JavaScript can be a major factor in page speed, so in addition to the above you should use as little JavaScript as possible. Avoid long, irrelevant scripts and redundant code altogether. If your code is small, use it inline rather than in external JavaScript files.

Improve Server Response Time

All the efforts above matter, but they are of little benefit if your server response time is bad. Server response time is the amount of time it takes for a web server to respond to a request from a browser. According to Google, server response time should be under 200 ms.

Server response time depends on several factors: website traffic, website resource usage, web hosting, and web server software. For example, as traffic on the website increases, server response time may degrade because the server has to serve resources to everyone. These problems can be addressed with the following techniques.

1) Deal with CSS/JavaScript

Follow all the directions above for improving your CSS and JavaScript.

2) Use Fewer Resources per Page

Try to use fewer CSS, JavaScript, and image resources on each page; the fewer, the better. This helps your server serve pages faster and to more people at a time.

3) Get a Perfect Web Hosting

Get good web hosting from the right service provider. This matters most: if your hosting isn't up to the mark, you'll run into trouble. The choice of hosting depends on the type of application you are running. Here are recommended options for common setups.

If you are on WordPress, get WordPress-specific hosting so you won't run into problems later. Recommended WordPress hosts include WPEngine and GoDaddy.

Shared hosting can be a good, economical option, but always get it from a trusted host. BlueHost is recommended for shared hosting.

For VPS Hosting/Dedicated servers, you may go for KnownHost or WiredTree.

4) Web Server Software and Configuration

Make sure your web server software and all configurations are up to date, and select the right web server software. Here are a few choices:

Apache: Apache is free, open-source software maintained by the Apache Foundation. It is the most widely used web server, with detailed documentation, resources, and plenty of good tutorials on the internet. The default install of Apache is not the best performer, but it has so many users, modules, and add-ons that it can be made to do just about anything.

NginX: NginX is a free web server that is best for websites receiving high traffic. It performs impressively even with its default install, uses very few resources, and handles more traffic than other available web servers. It has a very small response time, and PHP runs very fast with NginX.

LiteSpeed: This web server is available in free and premium versions. It is faster than Apache, with the added benefit of being fully compatible with it: LiteSpeed uses the same .htaccess files as Apache, so moving from Apache to LiteSpeed is easy and painless. PHP reportedly runs up to six times faster with LiteSpeed.

In a nutshell, all these factors can make a real difference in page speed. Server-related factors are very important, so don't ignore them; if you have the budget, investing in hosting and servers is money well spent.

Use G-Zip compressed Server Files

G-Zip is a file format and method of compressing files for faster network transfers. With G-Zip compression, your server serves smaller files, which improves response time.

G-Zip compression can be enabled easily in your web server configuration. Different web servers have different instructions, which are explained below for .htaccess, Apache, NginX, and LiteSpeed servers.


1) G-Zip Compression via .htaccess

For most people, compression can be enabled easily via the .htaccess file, which controls many important things for your site. All you need to do is add this code to your .htaccess file, save it, and restart the server. Then feel the difference!

<ifModule mod_gzip.c>
mod_gzip_on Yes
mod_gzip_dechunk Yes
mod_gzip_item_include file .(html?|txt|css|js|php|pl)$
mod_gzip_item_include handler ^cgi-script$
mod_gzip_item_include mime ^text/.*
mod_gzip_item_include mime ^application/x-javascript.*
mod_gzip_item_exclude mime ^image/.*
mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
</ifModule>

2) G-Zip Compression for Apache Servers

The .htaccess instructions above should work. If they don't, remove that code and add this instead:

AddOutputFilterByType DEFLATE text/plain
AddOutputFilterByType DEFLATE text/html
AddOutputFilterByType DEFLATE text/xml
AddOutputFilterByType DEFLATE text/css
AddOutputFilterByType DEFLATE application/xml
AddOutputFilterByType DEFLATE application/xhtml+xml
AddOutputFilterByType DEFLATE application/rss+xml
AddOutputFilterByType DEFLATE application/javascript
AddOutputFilterByType DEFLATE application/x-javascript

3) G-Zip Compression on NginX Servers

To enable G-Zip compression on NginX servers, add the following code to your config file:

gzip on;
gzip_comp_level 2;
gzip_http_version 1.0;
gzip_proxied any;
gzip_min_length 1100;
gzip_buffers 16 8k;
gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

# Disable for IE < 6 because there are some known problems
gzip_disable "MSIE [1-6].(?!.*SV1)";

# Add a vary header for downstream proxies to avoid sending cached gzipped files to IE6
gzip_vary on;

4) G-Zip compression on LiteSpeed Servers

Enabling G-Zip compression on LiteSpeed is very easy: in the configuration, under "Tuning", scroll down to "enable compression" and check whether it is On. If it is Off, click "edit" and turn it On. That's it.

HTML, CSS, and JavaScript files can shrink by fifty to seventy percent after compression, which can produce a significant change in your website's page speed. All developers and system administrators should implement this on their systems.

Leverage Browser Caching

The browser cache is a space where browsers store things temporarily for later use. Leveraging browser caching means instructing browsers to cache your web pages, so when a user opens your site a second time, the browser serves the page from its cache instead of going back to the server. This is why the first visit to a site takes longer than later ones. Browser caching can make a huge difference.
So now you must be wondering how to leverage browser caching. You need to change the response headers of your resource files to allow caching, which is a simple process: just add some code to the .htaccess file on your web host/server.

The code below tells browsers what to cache and how long to "remember" it. Add it to the top of your .htaccess file.

## EXPIRES CACHING ##
<IfModule mod_expires.c>
ExpiresActive On
ExpiresByType image/jpg "access 1 year"
ExpiresByType image/jpeg "access 1 year"
ExpiresByType image/gif "access 1 year"
ExpiresByType image/png "access 1 year"
ExpiresByType text/css "access 1 month"
ExpiresByType text/html "access 1 month"
ExpiresByType application/pdf "access 1 month"
ExpiresByType text/x-javascript "access 1 month"
ExpiresByType application/x-shockwave-flash "access 1 month"
ExpiresByType image/x-icon "access 1 year"
ExpiresDefault "access 1 month"
</IfModule>
## EXPIRES CACHING ##

After the time specified in the code, the browser automatically removes these resources from its cache. The values above are well optimized for most blogs and web pages, but you can change them if you want. For example, to store JPG images for only one month, replace "access 1 year" with "access 1 month".
Another method of caching is Cache-Control, which is more effective and easier for most people than the method above, and gives you more control over caching. An example use of Cache-Control in an .htaccess file:

# 1 Month for most static assets
<filesMatch ".(css|jpg|jpeg|png|gif|js|ico)$">
Header set Cache-Control "max-age=2592000, public"
</filesMatch>

This code sets the caching headers. Let's go through it line by line.

The first line is just a comment for our convenience and does nothing.

The second line specifies the file types for which caching is enabled. You can add and remove file types here; for example, to cache HTML files as well, just add html to this line.

The third line does the main job: it sets the Cache-Control header, specifying that caching will be public with a maximum age of 2592000 seconds (30 days). You can change this duration, and you can repeat the block to set different caching intervals for different file types. This duration is a well-optimized default, though.
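The arithmetic behind the max-age value can be made concrete with a small helper (a hypothetical function, shown only to illustrate how the seconds are derived):

```javascript
// Build a Cache-Control header value from a duration in days.
function cacheControlHeader(days, visibility) {
  const maxAge = days * 24 * 60 * 60; // days -> seconds
  return "max-age=" + maxAge + ", " + (visibility || "public");
}

console.log(cacheControlHeader(30));  // max-age=2592000, public   (30 days)
console.log(cacheControlHeader(365)); // max-age=31536000, public  (one year, e.g. for images)
```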

Caching has its benefits, but also a minor drawback: if you change a file, users won't see the change right away; they'll keep seeing the old cached file for some period of time.

Enable Keep Alive

Keep-alive is a method of reusing the same TCP connection for an HTTP conversation instead of opening a new connection for each request. Put simply, it is an agreement between the web server and the web browser that says "you can grab more than just one file at a time." Keep-alive is also known as a persistent connection.

Keep-alive is enabled in HTTP connections by default, but shared hosts may turn it off because they have to serve millions of pages from one machine. Generally, keep-alive is enabled with the "Connection: Keep-Alive" HTTP header; if keep-alive is disabled, your HTTP headers say "Connection: close". Change that to "Connection: keep-alive" to enable keep-alive for your web page.

Beyond that, enabling keep-alive depends on the server you are using and what you have access to. Here are the most common methods for different servers.

1) Enabling Keep Alive via .htaccess File

If you have access to the .htaccess file, add the following code to it; in most cases this will enable keep-alive for you.

<ifModule mod_headers.c>
Header set Connection keep-alive
</ifModule>

2) Enabling Keep Alive in Apache

If you are able to access your Apache config file, you can turn on keep-alive there. The applicable sections are shown below:

#
# KeepAlive: Whether or not to allow persistent connections (more than
# one request per connection). Set to "Off" to deactivate.
#
KeepAlive On
#
# MaxKeepAliveRequests: The maximum number of requests to allow
# during a persistent connection. Set to 0 to allow an unlimited amount.
# We recommend you leave this number high, for maximum performance.
#
MaxKeepAliveRequests 100
#
# KeepAliveTimeout: Number of seconds to wait for the next request from the
# same client on the same connection.
#
KeepAliveTimeout 100

3) Enabling keep-alive in NGINX

In NginX, keep-alive issues can be tackled in the HttpCoreModule. The directive to look out for is keepalive_disable; if you see it, make sure you know why keep-alive is being disabled before removing it.

4) Enabling Keep-Alive in LiteSpeed

Keep-alive is usually on by default on LiteSpeed servers, but your server may be using "smart keep-alive", a setting intended for high-volume websites. With this setting on, page speed tools will report that keep-alive is disabled.

"Smart keep-alive" requests the initial file (the HTML file) with Connection: close in the HTTP header, then requests all the other files (CSS, JS, images, etc.) with keep-alive enabled. This allows more users to connect at once when there are many concurrent requests.

LiteSpeed server users should disable smart keep-alive to get full keep-alive. However, if you run a high-volume website, you should leave it on.

Keep-alive is most useful when your website is resource-intensive, i.e., it uses a lot of CSS, JavaScript, and image files. Enabling keep-alive can produce a drastic improvement in your website's speed, and your users will enjoy fast web pages.

Minimize Redirects

A redirect is an instruction that automatically takes visitors from one location to another. Redirects are bad for page speed because each one adds a round trip before the content arrives, and they make the experience even worse for mobile users. You should therefore review all redirects on your site.

There are two types of redirects:

1) Server Side Redirects

These are fast, cacheable redirects. Common server-side redirects are 301 and 302, which use HTTP to inform the browser that a page or resource has moved: a 301 is a permanent redirect, while a 302 is a temporary one. Because the web server uses HTTP to direct the browser to the new location, the browser can handle these redirects much more quickly than client-side redirects and can cache the correct location of the file.
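As a quick reference, the common redirect status codes can be summarized in a small lookup (the 307/308 entries are additions beyond the two codes discussed above):

```javascript
// Common HTTP redirect status codes and their semantics.
function redirectKind(status) {
  const kinds = {
    301: "permanent", // browsers may cache the new location
    302: "temporary", // browsers should keep checking the old URL
    307: "temporary", // like 302, but the request method must not change
    308: "permanent", // like 301, but the request method must not change
  };
  return kinds[status] || "not a redirect";
}

console.log(redirectKind(301)); // permanent
console.log(redirectKind(302)); // temporary
```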

2) Client-Side Redirects

These are slow, non-cacheable redirects. Redirects that use the http-equiv="refresh" attribute or JavaScript introduce even longer waits and performance issues, and should not be used if at all possible.

Here is some advice from Google regarding these redirects:

“Never link to a page that you know has a redirect on it. This happens when you have manually created a redirect, but never changed the text link in your HTML to point to the new resource location.”

“Never require more than one redirect to get to any of your resources.”

And this is how you remove redirects:

  1. Find all redirects using an online tool.
  2. Understand why each redirect is there.
  3. Check how it affects, or is affected by, other redirects.
  4. Remove it if required/possible.
  5. Update your system if it affects, or is affected by, other redirects.
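Step 3, checking how redirects interact, can be sketched in JavaScript: given a map of known redirects, find redirect chains, i.e., URLs that need more than one hop to reach their final destination (the URLs below are hypothetical).

```javascript
// Given a map of known redirects (from -> to), find URLs that need
// more than one hop to reach their final destination (redirect chains).
function redirectChains(redirects) {
  const chains = {};
  for (const from of Object.keys(redirects)) {
    let hops = 0;
    let cur = from;
    const seen = new Set(); // guard against redirect loops
    while (redirects[cur] && !seen.has(cur)) {
      seen.add(cur);
      cur = redirects[cur];
      hops++;
    }
    if (hops > 1) chains[from] = { finalUrl: cur, hops };
  }
  return chains;
}

const redirects = {
  "/old-page": "/new-page",
  "/new-page": "/final-page", // chained: two hops from /old-page
};
console.log(redirectChains(redirects));
// { '/old-page': { finalUrl: '/final-page', hops: 2 } }
```

Every chain found this way is a candidate for pointing the original URL straight at the final destination.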

Although redirects are problematic and cause page delays, be careful when removing them. If you are confident you can remove redirects without creating broken links, go ahead; otherwise, ask an expert to do it for you.

Optimize Critical Rendering Path

The rendering path is the sequence of events that happens when a page is displayed in a web browser. For example: get HTML > get resources > parse > display webpage is a rendering path. Optimizing this sequence of events can significantly increase page speed.

When the browser requests a page, it first downloads the HTML file, parses it, and locates the other resources such as images, CSS, and JavaScript files. It then starts downloading those resources, and it shows nothing to the user until it has downloaded all the CSS. It is therefore very important to call resources in this order:

HTML > CSS > JavaScript (above the fold only; the rest can be loaded later) > Images > Audio > Video

This is the optimized order for calling your resources, and following it will speed up your web pages. Also make sure you follow all the JavaScript and CSS advice given above.


Page speed is very important for ranking high in Google and giving your users a fast, pleasant experience. It depends on many factors, and even one of them can reduce your page speed significantly. Here are the key points for better page speed:

  1. Eliminate render-blocking CSS, minify it, and always call CSS first.
  2. Eliminate render-blocking JavaScript, combine all JavaScript into one file, and prefer inline JavaScript. Don't forget to call JavaScript in the right place.
  3. Get the right web hosting plan and reduce server response time using the tips above.
  4. Enable G-Zip compression on all your files. This reduces the size of your resource files and increases speed.
  5. Enable keep-alive. This has a significant impact on page speed.
  6. Minimize redirects, and ask your developer beforehand to keep as few redirects as possible.
  7. Optimize your rendering path so your pages load much faster.

Following all these tips will make your pages much faster, and you'll be able to get more traffic to your website. Your feedback is always welcome; let me know what you think about this guide in the comments section. I'll be glad to hear from you.

The post Website Speed Optimization Guide for Google PageSpeed Rules appeared first on artzstudio.

Don’t just let Node.js take the blame

Perf Planet - Tue, 07/26/2016 - 10:37

No matter how well-built your applications are, countless issues can cause performance problems, putting the platforms they are running on under scrutiny. If you’ve moved to Node.js to power your applications, you may be at risk of these issues calling your choice into question. How do you identify vulnerabilities and mitigate risk to take the […]

The post Don’t just let Node.js take the blame appeared first on about:performance.


Mobile HTTP request performance & error rates

Perf Planet - Tue, 07/26/2016 - 02:30

The user experience of your iOS and Android apps depends largely on the performance of your app’s web requests. Slowly reacting app pages are often caused by low-performing web requests that call third-party providers—such as Facebook or Twitter—or your own backend services.

High error rates are often the cause of missing data on app pages. Even high crash rates can be caused by your app not gracefully handling missing data. All of this is reason enough to keep a close eye on your app’s web request performance and error rates as well as on payload sizes in general.

Dynatrace mobile app monitoring captures client- and server-side performance metrics for each individual HTTP request triggered by your mobile apps. Error rates are also included.

To view mobile app HTTP web request performance and error rate
  1. Select Mobile applications from the Dynatrace menu on the left side of the UI.
  2. Select the app you want to analyze.
    The mobile application infographic shows the number of Web requests triggered by your mobile apps worldwide within the selected period of time and the Error rate.
  3. Click the HTTP performance section of the infographic to view further details.

The HTTP performance view shows details on the top HTTP requests by domain name, error rates, and request sizes (aka "payload"). Because performance and error rates vary a lot between HTTP domains, Dynatrace mobile app monitoring lets you expand each individual HTTP domain to view the specific URLs that your app called, along with related metrics.

Click View full details within the Top HTTP requests section to view even more information. Here you’ll find individual performance and error rate charts for each XHR action of the selected HTTP domain, as shown below.

The post Mobile HTTP request performance & error rates appeared first on #monitoringlife.

QA Music: They’ve Come To Snuff The Testing

QA Hates You - Mon, 07/25/2016 - 04:11

You know it ain’t gonna die.

Alice in Chains, “The Rooster”:

Categories: Software Testing

Southwest Airlines Suffers ‘Catastrophic Technological Failure’

Perf Planet - Fri, 07/22/2016 - 03:00

Today Southwest Airlines COO Michael Van De Ven went on twitter to explain the “catastrophic technological failure” that took down Southwest’s systems for most of the day yesterday, and how its customers were impacted. With no network, Southwest was unable to let passengers check in to flights, to move crews around, or to find baggage. Van De Ven said that the company has redundancies in place, but they did not kick in as expected, and about 800 of its servers were affected.

The minute I saw his tweet I had to look at the data. Here’s what I found (above). You can see right before 3:00pm EDT something that looks like a spike and an error. Then the site’s response time drops. This is weird. Shouldn’t there be something looking more like an error? At the surface it looks like their page got better!

The story becomes clearer if you follow Southwest’s tweets. At 1:54pm EDT the company tweeted that it was investigating systems issues. This appears to be in response to a spike in wait times at 1:20pm EDT. See below:

As you can see, the issue with wait times escalated quickly by 3:00pm EDT, before the drop in web page response time. The drop appears to have happened because Southwest switched to its distress plan and served only its error page. The company identified the issue and moved quickly to respond and manage customer expectations.

This is a great example of how a company should respond to major interruptions of service: Detect an issue early to minimize the impact; use social media to keep your users informed; and as soon as it escalates, notify users further so they are not left in the dark.

Every company will eventually have to manage a situation like this one. Southwest may have had a difficult day yesterday, but its response to this failure is a model for others and probably minimized the issue’s negative impact on customer loyalty and the business.

The post Southwest Airlines Suffers ‘Catastrophic Technological Failure’ appeared first on Catchpoint's Blog.

Webcast: Scalable Data Science with R

O'Reilly Media - Thu, 07/21/2016 - 18:51
Join Teradata's Roger Fried, Senior Data Scientist and Brian Kreeger, Senior Data Scientist and local R User Group organizer, with over 30 collective years of experience designing and implementing big data analytic solutions in healthcare, finance and retail. O'Reilly Media, Inc. 2016-07-21T15:15:57-08:10

User Experience Monitoring on Hybrid Applications

Perf Planet - Thu, 07/21/2016 - 15:54

What are hybrid applications? Google says: “Hybrid applications are web applications (or web pages) in the native browser, such as UIWebView in iOS and WebView in Android (not Safari or Chrome). Hybrid apps are developed using HTML, CSS and Javascript, and then wrapped in a native application using platforms like Cordova.” I don’t think I could […]

The post User Experience Monitoring on Hybrid Applications appeared first on about:performance.