Feed aggregator

Webcast: Is Reporting Dead?

O'Reilly Media - Thu, 10/20/2016 - 14:40
In this webcast, you will learn how reporting is evolving as a result of modern APIs, data sources, delivery methods, and architectures, and how to use them to your advantage.

Introducing RabbitMQ monitoring (beta)

Perf Planet - Thu, 10/20/2016 - 11:34

We’re excited to announce the beta release of Dynatrace RabbitMQ monitoring! RabbitMQ server monitoring provides a high-level overview of all RabbitMQ components within your cluster.

With RabbitMQ message-related metrics, you’ll immediately know when something is wrong. And when problems occur, it’s easy to see which nodes are affected. It’s then simple to drill down into the metrics of individual nodes to find the root cause of problems and potential bottlenecks.

To view RabbitMQ monitoring insights

  1. Click Technologies in the menu.
  2. Click the RabbitMQ tile.
    Note: Monitoring of multiple RabbitMQ clusters isn’t supported in this beta release. All nodes are presented under a single process group.
  3. To view cluster metrics, expand the Details section of the RabbitMQ process group.
  4. Click the Process group details button. 
  5. On the Process group details page, select the Technology-specific metrics tab to view relevant cluster charts and metrics. RabbitMQ cluster overview pages (i.e., “process group” overview pages) provide an overview of RabbitMQ cluster health. From here it’s easy to identify problematic nodes. Just select a relevant time interval for the timeline, select a node metric from the metric drop list, and compare the values of all nodes in the sortable table.
  6. Further down the page, you’ll find a number of other cluster-specific charts.
RabbitMQ cluster charts

Queued messages
RabbitMQ’s queues are most efficient when they’re empty, so the lower the Queued messages count, the better.

Message rates
The Message rates chart is the best indicator of RabbitMQ performance.

Nodes health
Presents the number of nodes in a given state. Please be aware that this chart isn’t available for every RabbitMQ version.

Queues health
The Queues health chart shows more than just queue health. RabbitMQ can handle a high volume of queues, but each queue requires additional resources, so watch these queue numbers carefully. If the queues begin to pile up, you may have a queue leak. If you can’t find the leakage, consider adding a queue-ttl policy.
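If you do add such a policy, a minimal sketch with rabbitmqctl might look like this (the policy name queue-expiry and the 30-minute value are illustrative assumptions, not recommendations from this post):

# Sketch: auto-delete any queue that goes unused for 30 minutes (1800000 ms)
# "queue-expiry" is an arbitrary policy name; ".*" matches all queues
rabbitmqctl set_policy queue-expiry ".*" '{"expires":1800000}' --apply-to queues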

Cluster summary
The Cluster summary chart provides an overview of all RabbitMQ cluster elements.

For more RabbitMQ performance tips, have a look at this article about avoiding high CPU and memory usage.

RabbitMQ cluster monitoring metrics

Messages ready
The number of messages that are ready to be delivered. This is the sum of messages in the messages_ready status.

Messages unacknowledged
The number of messages delivered to clients, but not yet acknowledged. This is the sum of messages in the messages_unacknowledged status.

Acknowledge
The rate at which messages are acknowledged by the client/consumer.

Deliver and Get
The rate per second of the sum of messages: (1) delivered in acknowledgment mode to consumers, (2) delivered in no-acknowledgment mode to consumers, (3) delivered in acknowledgment mode in response to basic.get, (4) delivered in no-acknowledgment mode in response to basic.get.

Publish
The rate at which messages are incoming to the RabbitMQ cluster.

Unhealthy nodes
The number of unhealthy nodes. Please be aware that not every RabbitMQ version provides this metric.

Healthy nodes
The number of healthy nodes. Please be aware that not every RabbitMQ version provides this metric.

Queues health chart
The number of queues in a given state.

Channels
The number of channels (virtual connections). If the number of channels is high, you may have a memory leak in your client code.

Connections
The number of TCP connections to the message broker. Frequently opened and closed connections can result in high CPU usage. Connections should be long-lived. Channels can be opened and closed more frequently.

Consumers
The number of consumers.

Exchanges
The number of exchanges.

RabbitMQ node monitoring

To access valuable RabbitMQ node metrics
  1. Select Hosts from the menu.
  2. On the Hosts page, select your RabbitMQ host.
  3. In the Processes section of the Hosts page, select the RabbitMQ process.
  4. Expand the Properties pane and select the RabbitMQ process group link.
  5. Select a node from the Process list on the Process group details page (see below).
  6. Click the RabbitMQ metrics tab.

    Valuable RabbitMQ node metrics are displayed on each RabbitMQ process page on the RabbitMQ metrics tab. The Messages chart indicates how many messages are queued (the fewer the better). The next two charts present the number of RabbitMQ elements that work on the current node. On the process/node page, all metrics are per node. The following metrics are available: Messages ready, Messages unacknowledged, number of Consumers, Queues, Channels, and Connections (See above for descriptions of these metrics).
  7. To return to the cluster level, expand the Properties section of the RabbitMQ Processes page and select the cluster.
Additional RabbitMQ node monitoring metrics

More RabbitMQ monitoring metrics are available from individual Process pages. Select the Further details tab for more monitoring insights.

On the Further details tab you’ll find five additional charts.

Memory usage
The percentage of available RabbitMQ memory. 100% means that the RabbitMQ memory limit vm_memory_high_watermark has been reached. (By default, vm_memory_high_watermark is set to 40% of installed RAM.) Once the RabbitMQ server has used up all available memory, all new connections are blocked. Note that this doesn’t prevent the RabbitMQ server from using more than its limit; this is merely the point at which publishers are throttled.
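As a sketch, the watermark can be tuned in the classic Erlang-term config format used by RabbitMQ 3.x (the 0.4 ratio shown is simply the default mentioned above):

%% /etc/rabbitmq/rabbitmq.config — memory watermark as a fraction of installed RAM
[
  {rabbit, [
    {vm_memory_high_watermark, 0.4}
  ]}
].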

Available disk space
The percentage of available RabbitMQ disk space. Indicates how much available disk space remains before the disk_free_limit is reached. Once all available disk space is used up, RabbitMQ blocks producers and prevents memory-based messages from being paged to disk. This reduces, but doesn’t eliminate, the likelihood of a crash due to the exhaustion of disk space.
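The threshold itself is configurable; a minimal sketch (the "2GB" value is an illustrative assumption for a busy broker, not guidance from this post):

%% /etc/rabbitmq/rabbitmq.config — block publishers when free disk falls below 2 GB
[
  {rabbit, [
    {disk_free_limit, "2GB"}
  ]}
].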

File descriptors usage
The percentage of available file descriptors. RabbitMQ installations running production workloads may require system limits and kernel-parameter tuning to handle a realistic number of concurrent connections and queues. RabbitMQ recommends allowing for at least 65,536 file descriptors when using RabbitMQ in production environments. 4,096 file descriptors is sufficient for most development workloads. RabbitMQ documentation suggests that you set your file descriptor limit to 1.5 times the maximum number of connections you expect.
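On Linux, for example, the limit can be raised in /etc/security/limits.conf (a sketch that assumes the broker runs as the rabbitmq user; systemd-based installations use LimitNOFILE instead):

# /etc/security/limits.conf — allow 65,536 open files for the rabbitmq user
rabbitmq soft nofile 65536
rabbitmq hard nofile 65536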

Erlang processes usage
The percentage of available Erlang processes. The maximum number of processes can be changed via the RABBITMQ_SERVER_ERL_ARGS environment variable.
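A hedged sketch (the +P value is illustrative, and note that overriding RABBITMQ_SERVER_ERL_ARGS replaces the default server arguments rather than appending to them):

# /etc/rabbitmq/rabbitmq-env.conf — raise the Erlang process limit via +P
RABBITMQ_SERVER_ERL_ARGS="+P 1048576"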

Sockets usage
The percentage of available Erlang sockets. The required number of sockets is correlated with the required number of file descriptors. For more details, see the “Controlling System Limits on Linux” section at www.rabbitmq.com.

For more information about RabbitMQ statistics, see www.hg.rabbitmq.com.

Prerequisites

  • The rabbitmq_management plugin must be installed and enabled on all nodes you want to monitor.
  • A RabbitMQ management plugin user with monitoring privileges and access to all virtual hosts that you want to monitor.
  • Linux OS or Windows
  • RabbitMQ version 3.4.0 +
  • A single RabbitMQ cluster
  • Statistics available on the localhost interface via HTTP (see the sketch following this list)
  • Dynatrace OneAgent version 100+. OneAgent must be installed on a node that has a statistics database.
  • It’s recommended that you install OneAgent on all RabbitMQ nodes.
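Two of these prerequisites can be verified from a shell; a minimal sketch (monitoring_user and its password are placeholders for the management user you create):

# Enable the management plugin on every node you want to monitor
rabbitmq-plugins enable rabbitmq_management

# Confirm that statistics are available on the localhost interface via HTTP
curl -u monitoring_user:password http://localhost:15672/api/overview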
Enable RabbitMQ monitoring globally

With RabbitMQ monitoring enabled globally, Dynatrace automatically collects RabbitMQ metrics whenever a new host running RabbitMQ is detected in your environment.

All RabbitMQ instances must have the same username and password.

  1. Go to Settings > Monitoring > Monitored technologies.
  2. Set the RabbitMQ switch to On.
  3. Click the ^ button to expand the details of the RabbitMQ integration.
  4. Define a User.
  5. Define a Password and Port (the default port is 15672).
  6. Click Save.
Enable RabbitMQ monitoring for individual hosts

Dynatrace provides the option of enabling RabbitMQ monitoring for specific hosts rather than globally.

  1. If global RabbitMQ monitoring is currently enabled, disable it by going to Settings > Monitoring > Monitored technologies and setting the RabbitMQ switch to Off.
  2. Select Hosts in the navigation menu.
  3. Select the host you want to configure.
  4. Click Edit.
  5. Set the RabbitMQ switch to On.
Have feedback?

Your feedback about Dynatrace RabbitMQ monitoring is most welcome! Let us know what you think of the new RabbitMQ plugin by adding a comment below. Or post your questions and feedback to Dynatrace Answers.

The post Introducing RabbitMQ monitoring (beta) appeared first on #monitoringlife.

Introducing Elasticsearch cluster monitoring (beta)

Perf Planet - Thu, 10/20/2016 - 11:26

We’re excited to announce the beta release of Dynatrace Elasticsearch monitoring. Elasticsearch server monitoring provides a high-level overview of all Elasticsearch components within each monitored cluster in your environment.

Elasticsearch health metrics tell you everything you need to know about the health of your monitored clusters. So, when a problem occurs, it’s easy to see which nodes are affected. And it’s easy to drill down into the metrics of individual nodes to find the root cause of problems and potential bottlenecks.

To view Elasticsearch monitoring insights
  1. Click Technologies in the navigation menu.
  2. Click the Elasticsearch tile.
    Individual Elasticsearch clusters are represented as process groups. All detected Elasticsearch Process groups are listed at the bottom of the page.
  3. To view cluster metrics, expand the Details section of the process group you want to analyze.
  4. Click the Process group details button.
  5. On the Process group details page, select the Technology-specific metrics tab to view relevant cluster charts and metrics. Elasticsearch cluster overview pages (i.e., “process group” overview pages) provide an overview of individual cluster health. From here it’s easy to identify problematic nodes. Just select a node metric from the metric drop list, select a relevant time frame on the timeline, and compare the values of all nodes in a sortable table.
  6. Further down the page, you’ll find a number of cluster-specific charts.
Elasticsearch cluster monitoring metrics


Cluster status

Green
All primary and replica shards are allocated. Your cluster is 100% operational.

Yellow
All primary shards are allocated, but at least one replica is missing. No data is missing, so search results will still be complete. However, availability is compromised to a degree. If more shards disappear, you may lose data. Think of yellow as a warning that should prompt investigation.

Red
At least one primary shard (and all of its replicas) is missing. This means that you are missing data: searches will return partial results, and indexing into a missing shard will return an exception.
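You can also check these statuses directly against the cluster; a minimal sketch, assuming Elasticsearch listens on the default localhost:9200:

# The "status" field in the response reports green, yellow, or red
curl -s 'http://localhost:9200/_cluster/health?pretty'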

Active shards
The aggregate total of all shards across all indices, including replica shards.

Active primary shards
Indicates the number of primary shards in your cluster. This is an aggregate total across all indices.

Replica shards
The aggregate total of replica shards. If the node holding a primary shard dies, a replica will be promoted to the role of primary shard.

Relocating shards
Shows the number of shards that are currently moving from one node to another. This number is often zero, but can increase when Elasticsearch decides that a cluster isn’t properly balanced, a new node is added, or a node is taken down.

Initializing shards
The count of freshly created shards. When you first create an index, all shards briefly reside in the initializing state. This is typically a transient event; shards shouldn’t linger in the initializing state too long. You may also see initializing shards when a node is first restarted. As shards are loaded from disk, they begin in the initializing state.

Unassigned shards
The shards that exist in the cluster state, but can’t be found in the cluster itself. A common source of unassigned shards is unassigned replicas. For example, an index with five shards and one replica will have five unassigned replicas in a single-node cluster. Unassigned shards are also present when a cluster is in red status (because primary shards are missing).
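To see exactly which shards are unassigned, the _cat API can help; a sketch assuming the default endpoint:

# List all shards and keep only the ones without an assigned node
curl -s 'http://localhost:9200/_cat/shards?v' | grep UNASSIGNED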

Delayed unassigned shards
The number of shards whose allocation has been delayed.

Number of nodes
The total number of nodes in the cluster.

Number of data nodes
Number of data nodes. Data nodes hold data and perform data-related operations, such as CRUD, search, and aggregations.

Index count
Number of indexes.

Deleted documents
Number of deleted documents. Deleted documents are items that were deleted, but continue to occupy space.

Document count
Number of documents.

Query cache evictions
Number of query cache evictions. The query cache implements an LRU eviction policy that evicts the oldest data when the cache becomes full to make way for new data.

Query cache size
The size of the query cache.

Query cache count
Number of items in the query cache.

Field data evictions
Number of field data cache evictions. A high number of evictions means that your cache/hit ratio is low. This results in excessive disk reading and has a negative impact on performance.

Field data size
The size of the field data cache. A consistently large field data cache suggests that field data hasn’t been implemented correctly and should be reviewed.
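A common mitigation, shown here only as a sketch (the 40% cap is an illustrative value, not a recommendation from this post), is to bound the field data cache in elasticsearch.yml:

# elasticsearch.yml — cap the field data cache so it evicts instead of growing
indices.fielddata.cache.size: 40%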

Segment count
Number of segments. In Elasticsearch, each shard is broken up internally into “segments.” The number of segments should not grow endlessly, because segments are “merged” into bigger segments over time. This metric indicates a problem if the count keeps increasing.
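Per-shard segment counts can be inspected with the _cat API; a minimal sketch assuming the default endpoint:

# List segments per shard to spot counts that keep climbing
curl -s 'http://localhost:9200/_cat/segments?v'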

Elasticsearch node monitoring

To access valuable Elasticsearch node metrics
  1. Select a node from the Process list on the Process group details page.
  2. On the Elasticsearch Process page, select the Elasticsearch metrics tab.
    The Indexing chart shows the effectiveness of all indexing operations.
    The Search time chart serves as an indicator of how efficient your search operations are. More operations in a shorter time interval indicates improved performance.
  3. To jump back to the cluster level, expand the Properties section of the Elasticsearch Processes page and select the cluster.
Node metrics

Indexing total
Shows the number of docs that have been indexed. This number doesn’t decrease when docs are deleted. This number is incremented each time an internal index operation occurs, including updates.

Indexing time
Amount of time spent on indexing.

Number of queries
Number of query operations.

Number of fetches
Number of fetch operations.

Number of scrolls
Number of scroll operations.

Total search time
Aggregation of time spent on query, fetch, and scroll operations.

Additional Elasticsearch node monitoring metrics

Many more Elasticsearch monitoring metrics are available from individual Process pages. Select the Further details tab for additional monitoring insights.

On the Further details tab you’ll find six additional sections.

Circuit breakers
Circuit breakers are used to prevent operations from causing OutOfMemoryError errors. Each breaker specifies a limit for how much memory it can use. If the estimated query size is larger than the limit, the circuit breaker is tripped, the query is aborted, and an exception is returned. This happens before data is loaded, which means that an OutOfMemoryError won’t be triggered.
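Breaker limits are adjustable cluster settings; a hedged sketch of setting the fielddata breaker (the 60% value mirrors the Elasticsearch 2.x default and is shown purely for illustration):

# Raise or confirm the fielddata breaker limit via the cluster settings API
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '
{
  "persistent": {
    "indices.breaker.fielddata.limit": "60%"
  }
}'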

Indices
Shows additional in-depth information about Elasticsearch indices. Of particular interest is the Translog chart, which shows whether Elasticsearch is keeping up with incoming data by flushing it to the indices on disk.

Java managed memory
Java heap usage and garbage collection.

Merges
Can show the root cause of problems when a system is under too much load and merging can’t keep up.

Search
Shows additional in-depth information about Elasticsearch search operations, with detailed performance metrics for queries, fetches, and scrolls.

Thread pools
Shows details about how much load the system is currently processing. Enables you to see if you can increase the rate of queries or the amount of writes. Also enables you to see if there’s a bottleneck in one of the thread pools.

A full description of all Elasticsearch stats is available at www.elastic.co.
Most metrics are taken directly from Elasticsearch stats and presented as is, with no additional computation.


Prerequisites

• Elasticsearch 2.3+
• Linux OS or Windows
• Dynatrace OneAgent 101+
• OneAgent must be installed on all Elasticsearch nodes

Enable Elasticsearch monitoring globally

With Elasticsearch monitoring enabled globally, Dynatrace automatically collects Elasticsearch metrics whenever a new host running Elasticsearch is detected in your environment. All Elasticsearch instances must have the same password (or no password).

  1. Go to Settings > Monitoring > Monitored technologies.
  2. Set the Elasticsearch switch to On.
  3. Define a user (or leave the field blank if authentication has not been set up on the Elasticsearch server).
  4. Define a password and URL (the default URL is http://localhost:9200).
Enable Elasticsearch monitoring for individual hosts

Dynatrace provides the option of enabling Elasticsearch monitoring for specific hosts rather than globally.

  1. If global Elasticsearch monitoring is currently enabled, disable it by going to Settings > Monitoring > Monitored technologies and setting the Elasticsearch switch to Off.
  2. Select Hosts in the navigation menu.
  3. Select the host you want to configure.
  4. Click Edit.
  5. Set the Elasticsearch switch to On.

Please note that the beta version of Dynatrace Elasticsearch monitoring doesn’t support events on the cluster level. We’ll introduce this functionality in a future release.

Your feedback about Dynatrace Elasticsearch monitoring is most welcome! Let us know what you think of the new Elasticsearch plugin by adding a comment below. Or post your questions and feedback to Dynatrace Answers.

The post Introducing Elasticsearch cluster monitoring (beta) appeared first on #monitoringlife.

New “Low datastore space” problem type for VMware monitoring

Perf Planet - Thu, 10/20/2016 - 11:16

We’ve enhanced anomaly detection for VMware environments with the introduction of a new problem type called Low datastore space. This problem type is triggered whenever a datastore runs out of capacity.

When datastore space gets low, virtual machines can become corrupted, which can lead to major outages. Below is an example of a Low datastore space problem card. In this example, seven instances of this problem were detected. Seven ESXi hosts are connected to the problematic datastore.

To analyze this problem in detail, click any of the ESXi host’s Low datastore space instances.

This ESXi host page provides more detail about the low datastore problem. Click the Disks portion of the infographic and then select the Datastores tab.

By default, low datastore space problems are generated whenever free space falls below 2%.

To adjust low-datastore-space problem detection settings
  1. Go to Settings > Anomaly detection > Infrastructure > VMware.
  2. Set the Detect low datastore space option to Based on custom settings.
  3. Define the minimum threshold for free datastore space below which a Low datastore space problem should be generated.

Note that this feature requires Dynatrace Security Gateway v1.105 (or higher).

The post New “Low datastore space” problem type for VMware monitoring appeared first on #monitoringlife.

How to write effective SEO content fast?

Perf Planet - Thu, 10/20/2016 - 01:55

To understand what marketers mean by SEO content, it’s helpful to break down the phrase into its component parts:

  • “SEO” refers to search engine optimization, or the process of optimizing a website so that people can easily find it via search engines like Google.
  • By “content,” we mean any information that lives on the web and can be consumed on the web.

So, putting these two concepts together: SEO content is any content created with the goal of attracting search engine traffic. Here are some of the things you will need to do in order to SEO your web content.

Write a Magnetic Headline First:

A catchy headline persuades your target audience to click on it and read your article. This is an effective way to stand out in otherwise crowded SERPs. In fact, an article with insightful content may lose out on many prospective readers without an appealing headline. Coming up with a creative headline is hard; doing it quickly is harder still. The good news is there are tools available to make life easy for you. HubSpot’s Blog Topic Generator is one of them.

Keyword Research:

If you want to generate traffic through search, it’s best to do keyword research before writing. This way, you can focus on keywords for which a certain amount of search volume already exists. In other words, write about the topics that people are already searching for information about.

Use Google’s Related Search Feature:

When you enter the base keywords of your title, e.g., “write content fast,” into Google’s search box and hit the spacebar, you will see Google’s related searches. Those are long-tail search phrases that many of your target readers use to find your content. Therefore, by optimizing your content with those keywords, you give your content its best shot at competing with others on SERPs.

Use Data on Google’s Search Console:

Go to Google’s Search Console and click Search Analytics under Search Traffic. You should see the search queries your website is already ranking for. Try to incorporate those keywords into your articles to rank better and quicker on SERPs.

Keyword Optimization:

After thorough keyword research, you should know where and how to use keywords in your content for maximum searchability. (SEOMoz offers a great guide to on-page optimization.)


Content Organization:

The content on your site should be organized in a logical way. This is not only good for SEO, it also helps visitors on your site find other related content easily. (The longer they stay on your site, the better.)

Content Promotion:

Increase visibility to new content you create by sharing it on social networks and building links to your content (both internally and from external sites).

The post How to write effective SEO content fast? appeared first on Artz Studio.

What should you learn to become an organic SEO expert?

Perf Planet - Wed, 10/19/2016 - 06:31

Search engine optimization is a methodology of strategies, techniques, and tactics used to increase the number of visitors to a website. This count is increased by obtaining a high-ranking placement on the search engine results page (SERP) of search engines including Google, Bing, Yahoo, and others. Here are the basic things you should learn to become an organic SEO expert.


  • First of all, you have to educate yourself on the basics of Google’s algorithm, both current and past. Understanding the past shows how Google’s algorithm has adapted over time with core updates like Panda, Penguin, RankBrain, and Hummingbird, to name a few.
  • Google offers an excellent introductory read on the topic of SEO. The Search Engine Optimization Starter Guide will provide you with tremendous insight into what site SEO is.
  • Nothing beats practice. Do a lot of testing on sites you don’t care about. Test particular things you think can improve a site. Test the theories that other people or experts share on the internet.
  • Ask yourself these two questions:
  • Does the site you want to optimize deserve its rank? If not, figure out what to fix so that it does.
  • Do you know how to perform an SEO audit of an existing site, whether a year old or 10 years old, covering technical SEO, off-page SEO, and what needs to be improved on the page before getting started?
On-Page SEO

On-page SEO is the practice of optimizing individual web pages to rank higher and earn more relevant traffic in search engines. It refers to both the content and HTML source code of a page, which can be optimized. You should care about the following things to improve on-page SEO.

  • Does the site load fast and have the best possible site speed (loads in under 3 seconds)?
  • The site should:
    • have HTTPS, as it’s now a ranking factor.
    • be mobile friendly and responsive.
    • have a fantastic design and frictionless conversion points and make it easy for users to find what they want, share it, or act on it.
  • H1 tags, site/page titles, structure of internal link flows from pages, posts, and content.
  • Provide only content that answers questions and is clear, easy to read, and useful.
Off-Page SEO

Off-page SEO refers to techniques (outside the site) that can be used to improve the position of a website in the search engine results pages (SERPs). You should care about the following things to improve off-page SEO.

  • Reverse engineer the current first page(s) of relevant search queries and check out who Google ranks and why (design, backlinks, speed, conversion points, content, etc.).
  • Use tools like SEMrush and Ahrefs for competitor research, backlink research, and popular content ideas with the most shares.
  • For local SEO, know how to gain citations with tools like Bright Local, White Spark, and Moz Local, along with niche citations based on the industry/relevancy of the site you are optimizing, and keep a list of the most impactful citations overall.
  • How to do SEO outreach, do relevant guest posting, and create content, usually 1–2k words in length, that provides value you can’t find anywhere else.
  • How to promote that content via social, ads, other organic SEO means.
  • Google has recently said the top 3 ranking factors are links, content, and RankBrain. So do you understand the best ways to attract or gain links and keep them relevant, authoritative, or social?
  • How social media and buzz affect SEO.

Learning these factors will help you become an SEO pro in no time. I hope you will like this article. Keep visiting Artz Studio for more useful articles.

The post What should you learn to become an organic SEO expert? appeared first on Artz Studio.

Webcast: What Keeps You Up at Night: Understanding Security, Scale and the Monetization of the IoT

O'Reilly Media - Tue, 10/18/2016 - 14:29
In this webcast we'll address three of the top considerations enterprises need to take into account to avoid IoT insomnia: Security, Scalability, and Monetization.


Monitoring ASP.NET Core applications

Perf Planet - Tue, 10/18/2016 - 10:27

One of the hot topics currently in the .NET community is CoreCLR and ASP.NET Core. As most probably you already know: Microsoft decided to create a cross platform, high performance version of ASP.NET, which they even open-sourced. What is ASP.NET Core? There are many great posts about the basics of ASP.NET Core, so in this […]

The post Monitoring ASP.NET Core applications appeared first on about:performance.

What’s Cannibalization in SEO and why you should avoid it?

Perf Planet - Tue, 10/18/2016 - 07:15

When you look up ‘cannibalize’ in the Oxford dictionary, it refers to reducing the sales of a product by introducing another, similar product. The literal meaning clarifies a lot about what this word stands for. In SEO, cannibalization describes ranking multiple pages on a site for the same keyword. Whether done by mistake or intentionally, it carries a number of harms for the site, and especially for the product being sold.

Cannibalization occurs when webmasters or bloggers optimize multiple pages for the same keyword. Why does it happen? There are many reasons: repeatedly using the same keywords on different pages might be due to a mistake or a lack of proper SEO technique, and many do it intentionally, hoping for a better ranking for the site. When a certain page of a site doesn’t rank, bloggers try to rank other pages for the same keyword too, which confuses the search engine about which page to show on top. The illustration below clears up the confusion around cannibalization.

Here we can see that one keyword, “bicycle,” is used four times, which is really confusing for the search engine. Only one page will be shown when the query is searched, and all the rest of the pages will go in vain. Effort, time, capital, and investment all go to waste when webmasters commit cannibalization.

Problems Created with Cannibalization in SEO:

The scenario is that Google is not sagacious enough to determine the relevancy of pages. When you have optimized all pages for the same term or keyword, the search engine will surface one page for a given search term, and you have no chance at all of ranking the other pages on top. Here are some of the problems that come with cannibalization in SEO.

Conversion Rate

Cannibalization definitely impacts conversion rate. If one page optimized for a certain keyword is already performing well, why work on other pages for that keyword? Doing so reduces conversion rate directly. Working on other pages with different keywords will work better.

Content Quality

It’s obvious that different pages with the same keyword will repeat the same subject. In this way the quality of the content deteriorates, which affects the site’s ranking as well. Low-quality content leads to fewer referrals, and ultimately a smaller audience and lower conversions.

Handling Cannibalization in SEO

SEO Specialist

Employing or outsourcing SEO specialists with wide exposure to such issues in SEO will be effective. The experienced professionals normally tend to avoid such problems and their efforts bring about positive results.

Use 301 Redirects

Using a 301 redirect is a good way, too, if you are suffering from cannibalization problems. Identify all the pages with this problem and send the link juice to one page. This way replication is reduced and Google will rank your pages accordingly.

I hope you’ll find this article useful. Keep visiting ArtzStudio for more informative stuff.

The post What’s Cannibalization in SEO and why you should avoid it? appeared first on Artz Studio.

Monitoring Docker container environments

Perf Planet - Tue, 10/18/2016 - 06:53

If you’re responsible for running and monitoring Docker environments, you likely have two main concerns: (1) are the applications and services deployed on your clusters performing well, and (2) are your clusters healthy and properly sized for running your apps with the required workload?

Monitoring dockerized applications, not just containers

The metrics you get from your orchestration tool (e.g., Kubernetes, Openshift, or Mesos/Marathon) or from Google’s cAdvisor can help you get a basic understanding of your cluster and container performance. However, you can’t determine whether the applications and services deployed within your containers are working properly by analyzing only process-, container-, and host-level metrics. The information you need to understand the health of your containerized applications and services must be collected and instrumented from the applications themselves (for example, from the byte-code executed within the JVM).

In microservices environments, the complexity of deployments typically increases because orchestration tools dynamically start and stop containers on cluster nodes based on the health checks and thresholds that have been defined for each microservices container. A container can virtually “hop” from one host to another host, and the number of containers per app or microservice can change dynamically over time. This makes it difficult to draw conclusions about an application’s or service’s health. One solution for tackling this problem is analyzing and combining the monitoring data from the infrastructure level (which includes information about containers, processes, and hosts), and the application level, from within an application or service’s execution path and data.

Dynatrace addresses this problem with a full-stack monitoring approach in which all data from the infrastructure and application/service levels are integrated, enabling you to quickly get to the actual root cause of detected application and service problems.

OneAgent for container and application monitoring

To monitor Docker environments with Dynatrace, you only need to run Dynatrace OneAgent on your Docker hosts—either as a separate container or by installing Dynatrace OneAgent directly on the hosts. You don’t need to embed OneAgent into any of your application images or have it inherited from a special base image.

To run Dynatrace OneAgent as a Docker container, issue the following docker run command on all your Docker hosts. Be sure to replace the REPLACE_WITH placeholders in this command with the respective information explained above:

docker run -d --privileged=true --pid=host --net=host --ipc=host -v /:/mnt/root dynatrace/oneagent TENANT=REPLACE_WITH_ENVIRONMENT_ID TENANT_TOKEN=REPLACE_WITH_TOKEN SERVER=REPLACE_WITH_CONNECTION_ENDPOINT

Note that the connection endpoint in REPLACE_WITH_CONNECTION_ENDPOINT is https://REPLACE_WITH_ENVIRONMENT_ID.live.dynatrace.com. If you’re using Dynatrace Managed, the connection endpoint is https://<YourManagedServerURL>/e/REPLACE_WITH_ENVIRONMENT_ID.

Monitoring Mesos/Marathon

To use Dynatrace to monitor apps running in Mesos clusters, it’s recommended that you (1) deploy Dynatrace OneAgent on all Mesos agent nodes (formerly known as “Mesos slaves”) by means of a Marathon app deployment, and (2) install Dynatrace OneAgent on all Mesos master nodes, as explained below.

If you’re using DC/OS to manage your Mesos clusters, you can take advantage of the Dynatrace package that we’ve made available within the DC/OS Universe. The DC/OS Universe package automatically deploys Dynatrace to all your Mesos agent nodes.

If you’re not using DC/OS, you can run Dynatrace OneAgent as a Marathon app. See the following example:

cat <<- EOF > dynatrace-oneagent.json
{
  "id": "dynatrace-oneagent",
  "cpus": 0.1,
  "mem": 256,
  "instances": REPLACE_WITH_NUMBER_OF_NODES,
  "constraints": [["hostname", "UNIQUE"], ["hostname", "GROUP_BY"]],
  "container": {
    "type": "DOCKER",
    "volumes": [
      {
        "containerPath": "/mnt/root",
        "hostPath": "/",
        "mode": "RW"
      }
    ],
    "docker": {
      "image": "dynatrace/oneagent",
      "forcePullImage": true,
      "network": "HOST",
      "privileged": true,
      "parameters": [
        { "key": "pid", "value": "host" },
        { "key": "ipc", "value": "host" }
      ]
    }
  },
  "args": [
    "TENANT=REPLACE_WITH_ENVIRONMENT_ID",
    "TENANT_TOKEN=REPLACE_WITH_TOKEN",
    "SERVER=REPLACE_WITH_CONNECTION_ENDPOINT"
  ]
}
EOF

Be sure to replace all REPLACE_WITH placeholders in the dynatrace-oneagent.json file with the respective information explained above. The connection endpoint in REPLACE_WITH_CONNECTION_ENDPOINT is https://REPLACE_WITH_ENVIRONMENT_ID.live.dynatrace.com. If you’re using Dynatrace Managed, the connection endpoint is https://<YourManagedServerURL>/e/REPLACE_WITH_ENVIRONMENT_ID.

Now, send an HTTP POST request to the Mesos master “leader” to deploy the Marathon app with Dynatrace OneAgent.

curl -X POST -H "Content-Type: application/json" http://your-mesos-master:8080/v2/apps -d@dynatrace-oneagent.json

Deploy Dynatrace OneAgent on Mesos master nodes

Marathon doesn’t allow you to deploy apps to master nodes (except nodes that are tagged as both master and agent). This is why you need to manually install Dynatrace OneAgent on all Mesos master nodes that haven’t additionally been configured as Mesos agents. Please use the default Linux installer for this.

Monitoring Kubernetes

To use Dynatrace to monitor apps that run within Kubernetes clusters, it’s recommended that you (1) deploy Dynatrace OneAgent on all Kubernetes nodes (previously known as “minions”) by means of a DaemonSet, and (2) install Dynatrace OneAgent on the Kubernetes master nodes. See the example below:

cat <<- EOF > dynatrace-oneagent.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: dynatrace-oneagent
spec:
  template:
    metadata:
      labels:
        run: dynatrace-oneagent
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      volumes:
      - name: host-root
        hostPath:
          path: /
      containers:
      - name: dynatrace-oneagent
        image: dynatrace/oneagent
        args:
        - "TENANT=REPLACE_WITH_ENVIRONMENT_ID"
        - "TENANT_TOKEN=REPLACE_WITH_TOKEN"
        - "SERVER=REPLACE_WITH_CONNECTION_ENDPOINT"
        volumeMounts:
        - mountPath: /mnt/root
          name: host-root
        securityContext:
          privileged: true
EOF

Be sure to replace all REPLACE_WITH placeholders in the dynatrace-oneagent.yaml file with the respective information explained above. The connection endpoint in REPLACE_WITH_CONNECTION_ENDPOINT is https://REPLACE_WITH_ENVIRONMENT_ID.live.dynatrace.com. If you’re using Dynatrace Managed, the connection endpoint is https://<YourManagedServerURL>/e/REPLACE_WITH_ENVIRONMENT_ID.

Now, run OneAgent by creating a DaemonSet using the Kubernetes CLI.

kubectl create -f dynatrace-oneagent.yaml

The command above runs Dynatrace OneAgent on all Kubernetes workers. You may also want to run Dynatrace OneAgent on the Kubernetes masters. Please use the default Linux installer for this.

Ready to get started?

See for yourself what full-stack monitoring means for your dockerized applications in Kubernetes and Mesos/Marathon environments. Simply run the OneAgent Docker image as illustrated above and sit back.

The post Monitoring Docker container environments appeared first on #monitoringlife.

What are the must-have plugins for a new WordPress site?

Perf Planet - Tue, 10/18/2016 - 02:45

If you are building a new WordPress website, you should be aware of the most common and most effective plugins for your site. You may already have used multiple plugins to complete your website, but you need to make sure that you are also using the following must-have plugins. These plugins work on all WordPress websites and are necessary for making a successful WordPress website. Here, we will discuss each of them briefly:

1. Jetpack:

It is one of the coolest plugins on WordPress. It gives you visitor stats, security services, image speed-ups, and help getting more traffic, all for free. It helps speed up your site and attract more users to it.

2. WordPress SEO by Yoast:

If you are building a new website, you obviously want users to access your site easily and as many new users as possible to reach it, so you have to take care of your website’s Search Engine Optimization. This plugin tops the list of SEO plugins and is very easy to use.

3. Backup WordPress:

This WordPress plugin takes backups of your entire site, including the database and all other necessary files, from time to time.

4. Akismet

If you have a blogging website, you need to install this plugin before launching the site. Akismet checks your comments against the Akismet web service to see whether they look like spam. It has more than one million downloads.

5. W3 Total Cache

This plugin improves Search Engine Optimization (SEO) and web performance optimization (WPO) through caching. Hence, it makes the website much faster than usual.

6. Google Analytics By Monster Insights

The Google Analytics for WordPress plugin by MonsterInsights tracks your website and links it with Google Analytics, enabling it to report information about your traffic, viewers, etc.

7. Yet Another Related Posts Plugin

Yet Another Related Posts Plugin (YARPP) is an intelligent plugin. It shows users suggestions for related posts or pages. This plugin helps boost the number of views on the site.


You should make sure that you have these top-rated, must-have plugins on your WordPress website. Naturally, your site is not limited to these plugins; you will need more plugins according to your website’s requirements and nature. For instance, business websites will have payment plugins, subscription plugins, and graph-representation plugins. But the plugins above should be present on all kinds of websites to make your website look better, attract more viewers, and be more secure.

The post What are the must-have plugins for a new WordPress site? appeared first on Artz Studio.



What are the most useful off-page SEO techniques?

Perf Planet - Mon, 10/17/2016 - 12:55

SEO is the way to maximize traffic to any web page. It is a technique to enhance the ranking of a webpage. There are two types of page optimization techniques.

  1.  On-Page SEO Techniques
  2.  Off-Page SEO Techniques

Both play a vital role in improving the rankings of your web pages and posts/articles. In this article, the focus is on off-page optimization techniques and ranking factors.

Off-Page SEO:

“Off-Page SEO” refers to all of the activities that you and others do away from your site to raise the ranking of a page in search engine results.

Here are some of the best ways to boost your “Off-Page SEO”.

1.Social Networking Sites

Also known as online reputation management, this is the first and foremost step with which to initiate your process. You need to sign up to the most popular social networking sites, such as Facebook, LinkedIn, Twitter, Google+, etc., and create a profile of your own. By making pages for your site on these social networking sites, you get regular visits and shares, which increases your post reach and ratings. This is considered one of the best off-page SEO techniques.


2.Blogging

Blogging is one of the best ways to promote your website online! Write a blog for your website and include lots of unique content. Be precise in what you’re trying to convey to users in your blog entries, and promote your blog in blog directories and blog search engines. You can also promote your blog/website by posting comments on other service-related blogs that allow links in the comments section that are crawlable by search engines.

3.Forum Marketing

Create a forum/online discussion board of your own and start discussions or share topics with your friends. Find online forums related to your site’s niche and get involved in those communities. Reply to threads, answer people’s questions, offer advice, etc. This all helps build your reputation as an expert within that niche. Try to use “do-follow” forums so that you can include a link to your site within your signature, which helps search engines crawl your site.

4.Search Engine Submission

Submit your website to the most popular search engines like Google, Yahoo, MSN, Altavista, Alexa, Alltheweb, Lycos, Excite, etc., to get listed for free.

5.Photo Sharing

If you have used any of your website’s product photos or images on your site, you can share them on many of the major photo sharing websites like Flickr, Picasa, Photo Bucket, etc. Other people will be able to see and comment on them, hopefully following a link to your site.

6.Video Promotions

Similar to photo sharing, you can publish/share your product videos, expert opinions, and reviews of your product and make them public in YouTube, Dailymotion, etc.

7.Answer Questions:

Participate in Q&A sites (Yahoo Answers, Cha-Cha, Answer Bag, etc.) by asking and answering relevant questions and placing a link to your website in the source section where appropriate. If you don’t spam, this is another great way to increase your link popularity.

I hope you’ll find this article useful. Keep visiting Artz Studio for more useful stuff.

The post What are the most useful off-page SEO techniques? appeared first on Artz Studio.

What are the best ways to boost SEO of YouTube videos?

Perf Planet - Sun, 10/16/2016 - 14:17

Most YouTube video uploaders lazily upload their videos and then just hope that one of them will go viral. This technique, of course, mostly doesn’t work. There is a huge game going on for the ranking of videos: the SEO game. If you take some time to improve the SEO of your videos, you’ll get significantly more traffic than your competitors. Here are some of the best ways to boost the SEO of your YouTube videos.

1.Know Your Video Keywords:

Like anything in SEO, the YouTube SEO process starts with keyword research. Your goal is to find keywords that have YouTube results on the first page of Google. These are called “Video Keywords.” In general, Google tends to use video results for these types of keywords:

  • How-to keywords (“how to shave a cat”)
  • Reviews (“Bluehost review”)
  • Tutorials (“Setting up WordPress”)
  • Anything fitness or sports related (“Cardio kickboxing”)
  • Funny videos (“Cute animals”)

Why is this important?

Well, let’s say you optimize your video around a keyword that doesn’t have any video results in Google. In that case, you’ll ONLY get traffic from people searching on YouTube. But if you optimize for Video Keywords, you’ll also get targeted traffic to your video directly from Google’s first page.

 2.Write Super-Long Video Descriptions:

Remember that YouTube and Google cannot watch or listen to the videos you upload (yet). That means they rely heavily on the text describing the video to understand what the video is about and why it is more important than your competitor’s. The more YouTube knows about your video, the more confidently it can rank it for your target keyword. More importantly, YouTube uses keywords in the description to rank you for super-long-tail keywords.

3.Name the video file with key-word:

You have created and edited your video and it is ready to upload to YouTube, but if you rendered your video file as mov001.avi or random_name.mp4, make sure you rename it to your_keyword.mp4. Naming your video file after your focus keyword tells the search engine what might be inside your video. Search engines cannot look inside video content, so the file name is what shows the search algorithm what your video is about.

Having the video file name match your video title helps search engines index the video easily, which earns it a higher ranking. So put your targeted keyword into the video file name.

4.Get More Views From Online Communities:

Online communities like Quora and LinkedIn groups are fantastic places to funnel traffic from. The thing is, most communities don’t take too kindly to someone dropping links to their content all over the place. But they’re usually open to people sharing helpful YouTube videos, like yours! Just find a question in the community that your video could help answer. Then provide some value and suggest that people watch your video if they want more information.

I hope you’ll find this article useful. Keep visiting ArtzStudio for more informative stuff.

The post What are the best ways to boost SEO of YouTube videos? appeared first on Artz Studio.

What are WordPress Hooks: Actions and Filters

Perf Planet - Sun, 10/16/2016 - 14:08

Hooks in WordPress allow developers to easily tie their own code in with the WordPress core code base, themes, and plugins. In this article, we’ll discover what hooks are, go over the different types of hooks, and look at a few examples of hooks in action.

What are WordPress Hooks?

WordPress hooks are, essentially, triggers of sorts that allow users to, with short snippets of code, modify areas of a WordPress theme or plugin, or add their own code to various parts of WordPress without modifying the original files.

An example of this could be along the lines of either “when WordPress chooses which template file to load, run our custom code” or “when you generate the content for each post, add social bookmarking links to the end of the content”.

Types of Hooks

Hooks can be divided into “Action” and “Filter” hooks, the former allowing for insertion of custom code at various points (not unlike events in JavaScript) and the latter allowing for the manipulation of various bits of content (for example, the content of a page or blog post). Let us take a closer look at each of these, shall we?

Action Hooks

Action hooks are designated points in the WordPress core, theme, and plugin code where it is possible for outside resources (outside of the scope of where the hook is… either in the core, theme, or plugin) to insert additional code.

To hook on to an action, create a function in your theme’s functions.php file (or in your plugin’s code) and hook it on using the add_action() function. To find out the number and name of arguments for an action, simply search the code base for the matching do_action() call. For example, if you are hooking into ‘save_post’, you would find it in post.php:

do_action('save_post', $post_ID, $post, $update);

Your add_action call would look like:

add_action('save_post', 'wpdocs_my_save_post', 10, 3 );

And your function would be:

function wpdocs_my_save_post( $post_ID, $post, $update ) {
	// do stuff here
}

Filter Hooks

Filter hooks are used to manipulate output. Filter hooks can also be used for truncating text, changing formatting of content, or just about any other programming manipulation requirement (for example, adding to or overriding an array of values).

Custom code is added as a filter using the add_filter() function. The following code adds a sign-off to the end of each blog post, only when viewing the full blog post screen:

<?php
add_filter( 'the_content', 'wpcandy_filterhook_signoff' );

function wpcandy_filterhook_signoff( $content ) {
	if ( is_single() ) {
		$content .= '<div class="sign-off">Th-th-th That\'s all, folks!</div>' . " ";
	} // End IF Statement
	return $content;
} // End wpcandy_filterhook_signoff()
?>

Summary

Action hooks and filter hooks are essential and helpful for customizing your code, themes, and plugins, and for making your website look better. They also help keep your code DRY: you write a piece of code once and it applies to all similar tasks. To learn more about actions and filters, my favourite resource is, without a doubt, the WordPress Codex. While there are many tutorials available online regarding filter and action hook applications, the best understanding comes, as they say, straight from the horse’s mouth. The Codex provides useful examples, as well as up-to-date and informative explanations, which aid in gathering an overall understanding of the Plugin API (the API that handles action and filter hooks).

The post What are WordPress Hooks: Actions and Filters appeared first on Artz Studio.

Webcast: How Facebook built a self service data infrastructure

O'Reilly Media - Fri, 10/14/2016 - 14:05
Please join Ashish Thusoo, co-founder and CEO of Qubole (a big data as a service company) and former head of the data team at Facebook that pioneered the self-service data infrastructure model, as he shares just how he did this at Facebook.

Got IT complexity? Monitor the User Experience

Perf Planet - Fri, 10/14/2016 - 12:24

It’s no secret that IT operations has changed a lot over the last decade or so. New technologies like cloud, virtualization, containers, and microservices have made applications and infrastructure more distributed, more efficient, and more flexible at the expense of predictability and manageability.

This is why technologies like Catchpoint’s have come to the forefront in recent years: with technology back ends increasingly out of IT’s hands or otherwise more challenging to monitor and manage, monitoring the end user experience of applications and network services becomes paramount. But that’s our marketing narrative. We wanted to actually back up this narrative with real data from the field, so we turned to research firm Enterprise Management Associates (EMA).

EMA interviewed 100 IT managers in the US and UK who make purchasing decisions for application performance monitoring technologies to see what their pain points were and how they were dealing with them. They published these results in a new white paper titled “Taming IT Complexity with User Experience Monitoring.” The results were striking but not really surprising to us:

  • Complexity was the No. 1 application support challenge
  • This complexity is becoming increasingly expensive to support and is taxing on support resources
  • More than 70% of respondents lack visibility into application performance
  • 80% of respondents ranked user experience monitoring as critical or very important to business and IT outcomes
  • Calls from users or helpdesk support tickets are the No. 1 way IT finds out about performance issues

The survey also showed that not just any user experience monitoring will do. To be effective, UEM needs to map to where users actually are, include testing and monitoring of distributed Internet infrastructure and services like DNS, CDNs and APIs, and offer reporting of service level agreement adherence.

We’ll be hosting a webinar with EMA research director Julie Craig, the report’s author, to go over the survey results in depth later this month. In the meantime, you can download and read the report at:

The post Got IT complexity? Monitor the User Experience appeared first on Catchpoint's Blog.

Code-level visibility now available for Node.js

Perf Planet - Fri, 10/14/2016 - 10:58

Dynatrace has long provided code-level visibility for Java, .NET, and PHP. Code-level visibility is now also available for Node.js services! You can use Dynatrace to access Node.js code-level insights in several different ways.

Global CPU profiler

The environment-wide CPU profiler view shows the CPU activity of all monitored processes down to the code level, enabling you to profile the CPU consumption of each individual process, process group, and service.

  1. Click CPU profiler (code-level) in the navigation menu.
  2. In the Process group list, click the Show code-level button of the process group you want to analyze.

The Hotspots page provides a profile view of all the processes within the analyzed process group.

To view a single process, click a process group name in the list. You’ll then see a list of the processes that are part of the group, along with their individual contributions.

Currently, you can profile the CPU consumption of all monitored Node.js services within a process group. For other process types (for example, Java and .NET), Dynatrace additionally enables you to profile the CPU consumption of background threads. Background thread profiling will be extended to Node.js monitoring in an upcoming release.

Analyze CPU consumption of a single service or request type

For those situations where you aren’t interested in the CPU consumption of a process group or process, but rather the CPU consumption of a specific service or even a specific type of request, simply select the service you’re interested in. You’ll find the Show code-level button for the selected service beneath the CPU consumption chart (this applies to both entire services and specific requests).

  1. Select Transactions & services from the navigation menu.
  2. Select the service you want to analyze.
  3. Once on the Service page, click the Details button.
  4. Select the CPU consumption tab.
  5. Click the Show code level button.

Accessing code-level CPU-consumption analysis works the same for entire services (as shown above) as it does for individual web requests (as shown below).

The resulting hotspot tree represents all requests within the selected time frame.

Understand how your code impacts response time

Understanding how your code affects the response time of your service requests enables you to make code-level improvements that improve customer experience.

To view response-time hotspots of a service at the code level
  1. Select Transactions & services from the navigation menu.
  2. Select the service you want to analyze.
  3. Once on the Service page, click the Details button.
    Alternatively, you can select a specific request for analysis.
  4. Click the View response time hotspots button.

    Response time analysis shows you where a service’s requests consume the most time. If response time hotspots are detected within the analyzed service or request, you’ll see a Code execution entry in the Top findings portion of the infographic. In this example, JavaScript Code execution has been identified as the biggest contributor to response time.
  5. Click Code execution to view the analysis.

    Now you can analyze the response time of the code that’s executed by the analyzed request.

The post Code-level visibility now available for Node.js appeared first on #monitoringlife.

Webcast: Integrating Apache Spark and NiFi for Data Lakes

O'Reilly Media - Thu, 10/13/2016 - 17:03
This webcast will introduce Kylo, a soon-to-be-open-source data lake orchestration framework based on Apache Spark and NiFi.