Most Application Performance Management (APM) tools have become very good at detecting failing or very slow transactions, and at giving you system monitoring and code-level metrics to identify the root cause within minutes. While Dynatrace AppMon & UEM has been doing this for years, we took it to the next level by automatically detecting common architectural, performance […]
The post Automated Optimization with Dynatrace AppMon & UEM appeared first on about:performance.
While Docker container monitoring has been in beta for a while now, beginning with OneAgent v1.99, Docker container monitoring is now generally available to all customers.
With Docker container support, you can:
- View a summary of the most relevant Docker metrics, such as the number of running containers, the top 3 containers based on memory consumption, and the most frequently used images.
- Gain a deep understanding of container resource usage.
- Track image versions.
- Explore performance of Dockerized services.
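As a rough illustration of how such a summary can be derived, here is a minimal Python sketch that computes the same three metrics from raw container stats. The container data and field names are made up for the example; Dynatrace collects and aggregates these automatically.

```python
from collections import Counter

# Hypothetical raw container stats (illustrative data, not the Dynatrace API).
containers = [
    {"name": "web-1",  "image": "nginx:1.25",  "mem_mb": 512,  "running": True},
    {"name": "web-2",  "image": "nginx:1.25",  "mem_mb": 498,  "running": True},
    {"name": "worker", "image": "python:3.12", "mem_mb": 1024, "running": True},
    {"name": "cache",  "image": "redis:7",     "mem_mb": 256,  "running": False},
]

# Number of running containers.
running = [c for c in containers if c["running"]]

# Top 3 containers based on memory consumption.
top3_by_memory = sorted(running, key=lambda c: c["mem_mb"], reverse=True)[:3]

# Most frequently used images.
image_usage = Counter(c["image"] for c in containers)

print(len(running))
print([c["name"] for c in top3_by_memory])
print(image_usage.most_common(1))
```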
For more details on Docker container monitoring, please refer to the earlier blog post, Beta availability of Docker container monitoring.

Docker container monitoring setup
New customers: Docker container monitoring is automatically enabled for new environments. No setup required.

To activate Docker container monitoring globally
- Make sure that Dynatrace OneAgent v1.99 or higher is installed on your Docker hosts.
- Go to Settings > Monitored Technologies.
- Enable the Docker containers switch.
- If you haven’t done so already, also enable the Extensions manager switch and the Docker switch further down the list.
To activate Docker container monitoring for individual hosts

- Make sure that Dynatrace OneAgent v1.99 or higher is installed on your Docker hosts.
- Open the left-hand navigation menu and select Hosts.
- Select a Docker-enabled host.
- Click Edit.
- Enable the Docker containers switch.
The post Docker container monitoring now generally available! appeared first on #monitoringlife.
On October 28th and 29th, GTAC 2014, the eighth GTAC (Google Test Automation Conference), was held at the beautiful Google Kirkland office. The conference was completely packed with presenters and attendees from all over the world (Argentina, Australia, Canada, China, many European countries, India, Israel, Korea, New Zealand, Puerto Rico, Russia, Taiwan, and many US states), bringing with them a huge diversity of experiences.
Speakers from numerous companies and universities (Adobe, American Express, Comcast, Dropbox, Facebook, FINRA, Google, HP, Medidata Solutions, Mozilla, Netflix, Orange, and University of Waterloo) spoke on a variety of interesting and cutting edge test automation topics.
All of the slides and video recordings are now available on the GTAC site. Photos will be available soon as well.
This was our most popular GTAC to date, with over 1,500 applicants and almost 200 of those for speaking. About 250 people filled our venue to capacity, and the live stream had a peak of about 400 concurrent viewers with 4,700 playbacks during the event. And, there was plenty of interesting Twitter and Google+ activity during the event.
Our goal in hosting GTAC is to make the conference highly relevant and useful for, not only attendees, but the larger test engineering community as a whole. Our post-conference survey shows that we are close to achieving that goal:
If you have any suggestions on how we can improve, please comment on this post.
Thank you to all the speakers, attendees, and online viewers who made this a special event once again. To receive announcements about the next GTAC, subscribe to the Google Testing Blog.
We are pleased to announce that the ninth GTAC (Google Test Automation Conference) will be held in Cambridge (Greatah Boston, USA) on November 10th and 11th (Toozdee and Wenzdee), 2015. So, tell everyone to save the date for this wicked good event.
GTAC is an annual conference hosted by Google, bringing together engineers from industry and academia to discuss advances in test automation and the test engineering computer science field. It’s a great opportunity to present, learn, and challenge modern testing technologies and strategies.
You can browse presentation abstracts, slides, and videos from previous years on the GTAC site.
Stay tuned to this blog and the GTAC website for application information and opportunities to present at GTAC. Subscribing to this blog is the best way to get notified. We're looking forward to seeing you there!
The GTAC (Google Test Automation Conference) 2015 application process is now open for presentation proposals and attendance. GTAC will be held at the Google Cambridge office (near Boston, Massachusetts, USA) on November 10th - 11th, 2015.
GTAC will be streamed live on YouTube again this year, so even if you can’t attend in person, you’ll be able to watch the conference remotely. We will post the live stream information as we get closer to the event, and recordings will be posted afterward.
Presentations are targeted at students, academics, and experienced engineers working on test automation. Full presentations are 30 minutes and lightning talks are 10 minutes. Speakers should be prepared for a question-and-answer session following their presentation.
For presentation proposals and/or attendance, complete this form. We will be selecting about 25 talks and 200 attendees for the event. The selection process is not first come, first served (no need to rush your application), and we select a diverse group of engineers from various locations, company sizes, and technical backgrounds (academic, industry expert, junior engineer, etc.).
The due date for both presentation and attendance applications is August 10th, 2015.
There are no registration fees, but speakers and attendees must arrange and pay for their own travel and accommodations.
You can find more details at developers.google.com/gtac.
The deadline to apply for GTAC 2015 is this Monday, August 10th, 2015. There is a great deal of interest to both attend and speak, and we’ve received many outstanding proposals. However, it’s not too late to submit your proposal for consideration. If you would like to speak or attend, be sure to complete the form by Monday.
We will be making regular updates to the GTAC site (developers.google.com/gtac/2015/) over the next several weeks, and you can find conference details there.
For those that have already signed up to attend or speak, we will contact you directly by mid-September.
We have completed the selection and confirmation of all speakers and attendees for GTAC 2015. You can find the detailed agenda at: developers.google.com/gtac/2015/schedule.
Thank you to all who submitted proposals!
There is a lot of interest in GTAC once again this year with about 1400 applicants and about 200 of those for speaking. Unfortunately, our venue only seats 250. We will livestream the event as usual, so fret not if you were not selected to attend. Information about the livestream and other details will be posted on the GTAC site soon and announced here.
The ninth GTAC (Google Test Automation Conference) commences on Tuesday, November 10th, at the Google Cambridge office. You can find the latest details on the conference site, including schedule, speaker profiles, and travel tips.
If you have not been invited to attend in person, you can watch the event live. And if you miss the livestream, we will post slides and videos later.
We have an outstanding speaker lineup this year, and we look forward to seeing you all there or online!
If you are a webmaster or a blog writer you might already have a vague idea about SEO, but for your business to flourish, you must understand it fully.

What is SEO (search engine optimization) and how does it work?
We know that the more popular a site is, the more successful it is. SEO (search engine optimization) helps make your site popular.
Think of it as a set of rules to be obeyed by everyone in the cyber world who wants to make their business stand out. These rules help owners improve their search engine rankings and, hence, expand their websites. The purpose of the rules is to show search engines that your site belongs at the top of the list. The rules themselves are a collection of techniques used to improve your site's present position.

How do search engines determine relevance and popularity?
Search engines don't just flash a list of things they think we might be looking for. Present-day search engines are much smarter: they bring out the most popular and active sites. They assume those sites contain valuable information, and in most cases, they are right.
The more popular the page, the more valuable the information.
The popularity of a page is not decided manually but rather through the magic of maths! An algorithm brings out the most relevant sites and then chooses the best ones on the basis of popularity. These are the sites that come out on top and hold the position everyone else is competing for.

What are keyword search tools?
These are the tools that help you know your audience.
In order to gain popularity in any field, you must know your subjects; you must know where their heads are at. Keyword search tools help you do just that. They show you what people are typing into Google and how you can use that knowledge to your advantage.
For instance, if you are a content writer and know what people are interested in these days, you can easily attract them by writing about that topic.

How keyword tools work
We all know it starts with a few words and a search box.
The right keywords can help you a thousandfold, so if you are serious about SEO you should definitely invest in a keyword search tool. The alternative is to sort out words manually, one by one, but this is much less reliable and will only slow you down.
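To make the idea concrete, here is a toy Python sketch of what a keyword tool does under the hood: filter a query log by a seed word and rank the matches by search volume. The query log and volumes are invented for the example; real tools draw on enormous search datasets.

```python
# Made-up query log mapping search queries to monthly search volume.
query_log = {
    "seo tools": 9000,
    "seo tips for beginners": 5400,
    "best seo tools": 4100,
    "content marketing": 3000,
    "seo audit checklist": 1200,
}

def suggest(seed, log, top_n=3):
    """Return the top_n queries containing the seed word, by volume."""
    matches = {q: v for q, v in log.items() if seed in q}
    return sorted(matches, key=matches.get, reverse=True)[:top_n]

print(suggest("seo", query_log))
```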
Five best keyword search tools
As I already mentioned, keyword search tools are the best sidekick to SEO. It is time to dust off the real question: which search tool to use? There are many to choose from; I, however, narrowed it down to the five best.
1) Google Keyword Planner:
This is one of the most famous keyword research tools out there. Its popularity is largely due to its connection with Google AdWords. (Google AdWords itself also works for promoting businesses by making advertisements.) Originally Google Keyword Planner was created for Google AdWords, but with time that changed.

Research keywords
Your target keywords are the ones that everyone is typing into their search engines. You already know by now that adding those keywords to your website and campaigns will help you attract more followers. This is the main goal.

How to get started?
1) First you have to log in to ‘Google Keyword planner’ with your Google account.
2) After joining, an interface will open which allows you to enter your main keyword and to customize the resultant keywords.

Pros
- Keyword Planner also lets its users know what kinds of words were popular in the past and which are trending. By estimating an accurate forecast you can get ahead of others in arranging your bids and business proposition in the general public's favor while still staying profitable.
- The new face of Keyword Planner allows you to track the popularity of content by location. It manages to do so by using knowledge of individual zip codes. This is why Keyword Planner is highly recommended for local and service-based businesses.
- A lot of Google's own apps are switching to paid mode, and the majority of keyword search tools require some kind of payment. In a world like that, Keyword Planner remains free of charge.
Cons

- It shows no trend data. This prevents you from telling which trends are going up and which are not; Keyword Planner instead gives an average of the last twelve months. (You can obtain the information if you look into it, screenshot the last twelve months, and analyze them.)
- Another con is that you cannot tell which device your target audience is using anymore. This information was helpful in optimizing content for commonly used devices in the past, but recently it has been removed.
Keyword Planner helps you get great bids and profitable budget deals to benefit your campaign.

2) Long Tail Pro:
If you are looking for an alternative to Google Keyword Planner, then Long Tail Pro is just what you are looking for. Unlike Google Keyword Planner, Long Tail Pro is paid software and is a lot more reliable, since Google Keyword Planner surfaces the same keywords everyone is already using. Long Tail Pro also helps discover words that would go unnoticed by Google Keyword Planner.

How to get started?
- Download a copy of Long Tail Pro from the internet.
- Allow it to access your Google AdWords account.
- Now you can start your research project by generating a new keyword with any name.
- Finally, you will be asked to enter your seed keywords.
(You can also customize your search options by including research punctuality and the geographical position to target.)

Pros
- The module generates over 800 keywords for each seed word.
- It also provides the keywords with their search volume, PPC and competitiveness.
- You can search more than a single word at a time.
- It automatically checks ranking on Google, Bing and Yahoo.
- You can also check the viability of a niche.
Cons

- It is not the fastest keyword research tool out there; SEMrush surpasses it in speed.
- It is not helpful in finding your rivals' main keywords.
- It costs thirty-seven dollars a month.
Long tail pro is a very powerful keyword research tool that also allows its user to look up competitiveness.
3) SEMrush:

If you are in a hurry to boost your search traffic and take over and crush your competitors, then SEMrush is waiting for you. This is a top-level keyword search tool used by professionals. It has 26 different databases (in different parts of the world) which are updated regularly. The databases contain over 100 million keywords and cover over 70 million websites.
SEMrush retrieves information from keyword searches through an algorithm called LiveUpdate. The 100 million words from the 26 databases are selected and brought forth on the basis of search volume.
Google’s top twenty research pages are stored as screenshots. The purpose of this is to gather information for every keyword. This information is then interpreted by algorithms.
The obtained result is brought forward to its users.

Pros
- It locates any website's traffic accurately.
- It can analyze the back links of any website.
- It can also create a site audit.
- It can help spy on 5 competitors at once (domain to domain comparison).
- Provides Monthly search reports.
- Helpful in Tracking of keywords.
Cons

- It has only one downside, which is that it costs $70 a month.
You can get real in-depth information by using SEMrush.

4) Soovle:
‘The search tool for search engines’.
It gathers information from 15 different search engines and provides us with the best keywords out there. You can customize your search by choosing which extra search engines you want involved (Amazon, Wikipedia, Ask, eBay, Yell, YouTube etc) apart from Google, Bing and Yahoo.
You are also able to change the look of your Soovle by adding logos.

Working
A search box, a Soovle link, a number of search engines, and a few icons: that's the interface. Simply type in your keyword and wait for the search engine to suggest words for that query. You can also switch among the search engines by pressing the right arrow on the little icons under the search box. It is simple to work with.

Pros
- If you have no idea what you are specifically looking for, then worry not: Soovle's got your back.
- In keyword research, it is a great source of ideas (for gift ideas, Amazon and buy.com will help you a lot).
- Soovle allows you to save your searches for later use.
Cons

- In Soovle, deleting saved searches is a headache. You mostly end up deleting them all.
- If you type fast, then Soovle may not keep up and may give bizarre and unrelated results.
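Conceptually, a Soovle-style aggregator just merges and deduplicates suggestion lists from several engines. A minimal Python sketch, with made-up suggestion lists standing in for live autocomplete results:

```python
# Made-up autocomplete suggestions from several engines.
suggestions = {
    "google":  ["seo tools", "seo tips", "seo audit"],
    "youtube": ["seo tips", "seo tutorial"],
    "amazon":  ["seo books", "seo tools"],
}

# Merge all engines' suggestions, keeping first-seen order and dropping duplicates.
seen, merged = set(), []
for engine, words in suggestions.items():
    for w in words:
        if w not in seen:
            seen.add(w)
            merged.append(w)

print(merged)
```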
5) Keyword Spy:

The name says it all: it provides you with knowledge about what keywords and ads your competitors are using. You can use this knowledge to defend your campaign's position or to attack your opponents'.
Keyword Spy is a web-based intelligence tool, and like any research tool it cannot promise 100% success, but maneuvering and using it tactfully can play a major role in your victories ahead.

Pros
- It’s everything you need to spread your business.
- It is helpful in competitive targeting and going through keyword information in different niches. This can help you easily point out the key places to target.
- It creates targeted campaigns.
- It shows rank based on geographical location.
Cons

- The tracking features do not benefit smaller businesses.
- It costs $89.95 a month for tracking and $139 a month for professional accounts (which include both).
SEO is extremely important for the success of any business. Even a beginner stepping into the field must be well aware of these things, or else they'll be at a great disadvantage.
The above five keyword search tools are not the only ones out there. They are, however, in my opinion, the most popular and the easiest to work with. I mentioned different types so that everyone can benefit from the article according to their needs.
If you are looking for something cheap or even free, then Google Keyword Planner is your thing. It will not, however, be unique, since many users out there already work with it. Long Tail Pro is amazing for finding long-tail keywords; it is better than Google Keyword Planner in that it is a little vaster and provides over 800 keywords. SEMrush is my favorite: with its enormous database it is amongst the most professional search tools, and it can help you boost your business in no time! Then we have Soovle, which is extremely versatile since it connects up to 15 search engines for a single search. However, it is a poor choice for the impatient because it is slow. The last one, Keyword Spy, is great for keeping an eye on your competitors and using what you learn to your advantage.
So from the above, you can easily understand that the different keyword research tools are designed for different types of people and different kinds of businesses. For instance, if you are running a small business you can easily work with Long Tail Pro, while if you are working with a huge organization you might need Keyword Spy.
The post 5 Best Keyword Research Tools Pros and Cons for SEO & PPC appeared first on artzstudio.
One of the key driving factors behind the various web/mobile performance initiatives is the fact that end users' tolerance for latency has nose-dived. Several published studies have demonstrated that poor performance routinely impacts the bottom line, viz., the number of users, the number of transactions, etc. Example studies include this, this, and this. There are several sources of performance bottlenecks, including but not limited to:
- Large numbers of redirects
- Increasing use of images and/or videos without supporting optimizations such as compression and form factor aware content delivery
- Performance tax of using SSL (as discussed here and here)
- Increasing use of third-party services which, in most cases, become the longest pole from a performance standpoint
Over five years back we had written a blog on the anatomy of HTTP. In the Internet era, five years is a long time. There has been a sea change across the board, and hence we thought it was time to revisit the subject for the benefit of the community as a whole. The figure below shows the key components of an HTTP request.
Besides the independent components, key composite metrics are also annotated in the figure and are defined below.
Time to First Byte (TTFB): The time it took from the request being issued to receiving the first byte of data from the primary URL for the test(s). This is calculated as DNS + Connect + Send + Wait. For tests where the primary URL has a redirect chain, TTFB is calculated as the sum of the TTFB for each domain in the redirect chain.
Response: The time it took from the request being issued to the primary host server responding with the last byte of the primary URL of the test(s). For tests where the primary URL has a redirect chain, Response is calculated as the sum of the response time for each domain in the redirect chain.
Server Response: The time it took from when DNS was resolved to the server responding with the last byte of the primary URL of the test(s). This shows the server’s response exclusive of DNS times. For tests where the primary URL has a redirect chain, Server Response is calculated as the response time for each domain in the redirect chain minus the DNS time.
Anomalies in any metric can be viewed and detected via Catchpoint’s portal in a very straightforward fashion. An example illustrating an anomaly in TWait is shown below:
Other composite metrics of interest include:
Render Start: The time it took the browser to start rendering the page.
Document Complete: Indicates that the browser has finished rendering the page. In Chrome it is equivalent to the browser onload event. In IE it occurs just before onload is fired and is triggered when the document readyState changes to “complete.”
Webpage Response: The time it took from the request being issued to receiving the last byte of the final element on the page.
- For Web tests, the agent will wait for up to two seconds after Document Complete for no network activity to end the test
- Webpage Response is impacted by script verbs for Transaction tests
- For Object monitor tests, this value is equivalent to Response
Wire Time: The time the agent took loading network requests. Equal to Webpage Response minus Client Time.
Content Load: The time it took to load the entire content of the webpage after the connection was established with the server for the primary URL of the test(s). This is the time elapsed between the end of Send and the loading of the final element, or object, on the page. Content Load does not include the DNS, Connect, Send, and SSL time on the primary URL (or any redirects of the primary URL). For Object monitor tests, this value is equivalent to Load.
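A quick worked example helps tie the composite metrics together. The Python sketch below computes TTFB, Response, Server Response, and Wire Time from hypothetical component timings for a request with no redirect chain; all numbers are illustrative, not measurements.

```python
# Hypothetical component timings in milliseconds (no redirect chain).
dns, connect, send, wait = 30, 45, 5, 120
body_download = 200                 # time from first to last byte (made up)
webpage_response, client_time = 900, 150

ttfb = dns + connect + send + wait           # Time to First Byte
response = ttfb + body_download              # request issued -> last byte of primary URL
server_response = response - dns             # excludes DNS resolution time
wire_time = webpage_response - client_time   # time spent loading network requests

print(ttfb, response, server_response, wire_time)
```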
In the context of mobile performance, a key metric which is very often the target of optimization is Above-The-Fold (ATF) time. The importance of ATF stems from the fact that while the user is interpreting the first page of content, the rest of the page can be delivered progressively in the background. A threshold of one second is often used as the target for ATF. Practically, after subtracting the network latency, the performance budget is about 400 milliseconds for the following: server must render the response, client-side application code must execute, and the browser must layout and render the content. For recommendations on how to optimize mobile websites, the reader is referred to this and this.
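The one-second ATF budget described above decomposes as a simple subtraction; the network-overhead figure below is an illustrative example, not a measurement:

```python
# Illustrative Above-The-Fold (ATF) budget arithmetic.
atf_target_ms = 1000          # one-second ATF target
network_overhead_ms = 600     # e.g. DNS + TCP + TLS + request/response latency (made up)
remaining_budget_ms = atf_target_ms - network_overhead_ms

# Roughly 400 ms remain for server render, client-side code, and layout/paint.
print(remaining_budget_ms)
```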
A key difference between the figure above and the corresponding figure in the previous blog is the presence of the TSSL component. Unencrypted communication – via HTTP and other protocols – is susceptible to interception, manipulation, and impersonation, and can potentially reveal users' credentials, history, identity, and other sensitive information. In the post-Snowden era, privacy has been in the limelight. Leading companies such as Google, Microsoft, Apple, and Yahoo have embraced HTTPS for most of their services and have also encrypted the traffic between their data centers. Delivering data over HTTPS has the following benefits:
- HTTPS protects the integrity of the website by preventing intruders from tampering with exchanged data, e.g., rewriting content, injecting unwanted and malicious content, and so on.
- HTTPS protects the privacy and security of the user by preventing intruders from listening in on the exchanged data. Each unencrypted request can potentially reveal sensitive information about the user, and when such data is aggregated across many sessions, can be used to de-anonymize identities and reveal other sensitive information. All browsing activity, as far as the user is concerned, should be considered private and sensitive.
- HTTPS enables powerful features on the web, such as accessing users' geolocation, taking pictures, recording video, and enabling offline app experiences. These features require explicit user opt-in, which in turn requires HTTPS.
When the SSL protocol was standardized by the IETF, it was renamed to Transport Layer Security (TLS). TLS was designed to operate on top of a reliable transport protocol such as TCP. The TLS protocol is designed to provide the following three essential services to all applications running above it:
- Encryption: A mechanism to obfuscate what is sent from one host to another
- Authentication: A mechanism to verify the validity of provided identification material
- Integrity: A mechanism to detect message tampering and forgery
Technically, one is not required to use all three in every situation. For instance, one may decide to accept a certificate without validating its authenticity; having said that, one should be well aware of the security risks and implications of doing so. In practice, a secure web application will leverage all three services.
In order to establish a cryptographically secure data channel, both the sender and receiver of a connection must agree on which ciphersuites will be used and the keys used to encrypt the data. The TLS protocol specifies a well-defined handshake sequence (illustrated below) to perform this exchange.
TLS Handshake. Note that the figure assumes the same (optimistic) 28 millisecond one-way “light in fiber” delay between New York and London. (source: click here)
As part of the TLS handshake, the protocol allows both the sender and the receiver to authenticate their identities. When used in the browser, this authentication mechanism allows the client to verify that the server is who it claims to be (e.g., a payment website) and not someone simply pretending to be the destination by spoofing its name or IP address. Likewise, the server can also optionally verify the identity of the client — e.g., a company proxy server can authenticate all employees, each of whom could have their own unique certificate signed by the company. Finally, the TLS protocol also provides its own message framing mechanism and signs each message with a message authentication code (MAC). The MAC algorithm is a one-way cryptographic hash function (effectively a checksum), the keys to which are negotiated by both the sender and the receiver. Whenever a TLS record is sent, a MAC value is generated and appended for that message, and the receiver is then able to compute and verify the sent MAC value to ensure message integrity and authenticity.
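The MAC mechanism described above can be illustrated with Python's standard library. Note that this is just the underlying idea, not the actual TLS record protocol, and the key here is a made-up placeholder; in TLS, keys are derived during the handshake.

```python
import hashlib
import hmac

key = b"negotiated-session-key"   # placeholder; TLS derives this in the handshake
record = b"hello over TLS"

# Sender computes a MAC over the record and appends it.
mac = hmac.new(key, record, hashlib.sha256).hexdigest()

# Receiver recomputes the MAC over the received record and compares.
received_ok = hmac.compare_digest(
    mac, hmac.new(key, record, hashlib.sha256).hexdigest())

# A tampered record produces a different MAC, so verification fails.
tampered_ok = hmac.compare_digest(
    mac, hmac.new(key, b"hell0 over TLS", hashlib.sha256).hexdigest())

print(received_ok, tampered_ok)
```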
From the figure above we note that TLS connections require two roundtrips for a “full handshake” and thus have an adverse impact on performance. The plot illustrates the comparative Avg TSSL (the data was obtained via Catchpoint) for the major airlines:
However, in practice, optimized deployments can do much better and deliver a consistent 1-RTT TLS handshake. In particular:
- False Start – a TLS protocol extension – can be used to allow the client and server to start transmitting encrypted application data when the handshake is only partially complete, i.e., once the ChangeCipherSpec and Finished messages are sent, but without waiting for the other side to do the same. This optimization reduces the handshake overhead for new TLS connections to one roundtrip.
- If the client has previously communicated with the server, then an “abbreviated handshake” can be used, which requires one roundtrip and also allows the client and server to reduce the CPU overhead by reusing the previously negotiated parameters for the secure session.
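A back-of-the-envelope comparison makes the savings concrete, using the same 28 millisecond one-way New York-to-London delay assumed in the handshake figure (the TCP handshake adds one more roundtrip before TLS begins):

```python
# Illustrative handshake latency arithmetic (28 ms one-way fiber delay).
one_way_ms = 28
rtt_ms = 2 * one_way_ms

full_handshake_ms = 2 * rtt_ms    # full TLS handshake: two roundtrips
optimized_ms = 1 * rtt_ms         # abbreviated handshake or False Start: one roundtrip

print(rtt_ms, full_handshake_ms, optimized_ms)
```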
The combination of both of the above optimizations allows us to deliver a consistent 1-RTT TLS handshake for new and returning visitors and facilitates computational savings for sessions that can be resumed based on previously negotiated session parameters. Other ways to minimize the performance impact of HTTPS include:
- Use of HTTP Strict Transport Security (HSTS), which restricts web browsers to accessing web servers solely over HTTPS. This mitigates the performance impact by eliminating unnecessary HTTP-to-HTTPS redirects; the responsibility is shifted to the client, which automatically rewrites all links to HTTPS.
- Early Termination helps minimize the latency due to the TLS handshake.
- Use of compression algorithms such as HPACK, Brotli.
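As a sketch of the HSTS mechanism mentioned above, the policy is delivered as a single response header. A minimal, hypothetical WSGI app that emits it might look like this (the one-year max-age is a common choice, not a mandated one):

```python
def app(environ, start_response):
    """Minimal WSGI app that sends an HSTS policy with every response."""
    headers = [
        ("Content-Type", "text/plain"),
        # Tell browsers to use HTTPS only, for one year, including subdomains.
        ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
    ]
    start_response("200 OK", headers)
    return [b"secure hello"]
```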
For further discussion on SSL/TLS, the reader is referred to the paper titled, “Anatomy and Performance of SSL Processing” by Zhao et al. and the paper titled, “Analysis and Comparison of Several Algorithms in SSL/TLS Handshake Protocol” by Qing and Yaping.
By: Arun Kejariwal and Mehdi Daoudi
Introduction

Software development is difficult. Projects often evolve over several years, under changing requirements and shifting market conditions, impacting developer tools and infrastructure. Technical debt, slow build systems, poor debuggability, and increasing numbers of dependencies can weigh down a project. The developers get weary, and cobwebs accumulate in dusty corners of the code base.
Fighting these issues can be taxing and feel like a quixotic undertaking, but don’t worry — the Google Testing Blog is riding to the rescue! This is the first article of a series on “hackability” that identifies some of the issues that hinder software projects and outlines what Google SETIs usually do about them.
According to Wiktionary, hackable is defined as:
hackable (comparative more hackable, superlative most hackable)
- (computing) That can be hacked or broken into; insecure, vulnerable.
- That lends itself to hacking (technical tinkering and modification); moddable.
Obviously, we're not going to talk about making your product more vulnerable (by, say, rolling your own crypto or something equally unwise); instead, we will focus on the second definition, which essentially means "something that is easy to work on." This has become the main focus for SETIs at Google as the role has evolved over the years.
In Practice

In a hackable project, it's easy to try things and hard to break things. Hackability means fast feedback cycles that offer useful information to the developer.
This is hackability:
- Developing is easy
- Fast build
- Good, fast tests
- Clean code
- Easy running + debugging
- One-click rollbacks
This is not hackability:

- Broken HEAD (tip-of-tree)
- Slow presubmit (i.e. checks running before submit)
- Builds take hours
- Incremental build/link > 30s
- Can’t attach debugger
- Logs full of uninteresting information
Pillar 1: Code Health

"I found Rome a city of bricks, and left it a city of marble."
Keeping the code in good shape is critical for hackability. It’s a lot harder to tinker and modify something if you don’t understand what it does (or if it’s full of hidden traps, for that matter).
Tests

Unit and small integration tests are probably the best things you can do for hackability. They're a support you can lean on while making your changes, and they contain lots of good information on what the code does. It isn't hackability to boot a slow UI and click buttons on every iteration to verify your change worked; it is hackability to run a sub-second set of unit tests! In contrast, end-to-end (E2E) tests generally help hackability much less (and can even be a hindrance if they, or the product, are in sufficiently bad shape).
Figure 1: the Testing Pyramid.
I’ve always been interested in how you actually make unit tests happen in a team. It’s about education. Writing a product such that it has good unit tests is actually a hard problem. It requires knowledge of dependency injection, testing/mocking frameworks, language idioms and refactoring. The difficulty varies by language as well. Writing unit tests in Go or Java is quite easy and natural, whereas in C++ it can be very difficult (and it isn’t exactly ingrained in C++ culture to write unit tests).
It’s important to educate your developers about unit tests. Sometimes, it is appropriate to lead by example and help review unit tests as well. You can have a large impact on a project by establishing a pattern of unit testing early. If tons of code gets written without unit tests, it will be much harder to add unit tests later.
What if you already have tons of poorly tested legacy code? The answer is refactoring and adding tests as you go. It’s hard work, but each line you add a test for is one more line that is easier to hack on.
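To make the education point concrete, here is a minimal sketch of how dependency injection enables fast unit tests. All names here (`Clock`, `SessionChecker`) are invented for illustration, not from the article: the idea is simply that the real dependency is passed in through the constructor, so a test can substitute a fake and run in under a second.

```python
import time
import unittest
from unittest import mock


class Clock:
    """Real dependency: returns wall-clock time."""

    def now(self) -> float:
        return time.time()


class SessionChecker:
    """Takes its clock via the constructor (dependency injection),
    so tests can substitute a fake instead of relying on real time."""

    def __init__(self, clock: Clock, timeout_s: float = 30.0):
        self._clock = clock
        self._timeout_s = timeout_s

    def is_expired(self, started_at: float) -> bool:
        return self._clock.now() - started_at > self._timeout_s


class SessionCheckerTest(unittest.TestCase):
    """Sub-second tests: no sleeping, no real clock involved."""

    def test_expired_session(self):
        fake_clock = mock.Mock()
        fake_clock.now.return_value = 100.0
        checker = SessionChecker(fake_clock, timeout_s=30.0)
        self.assertTrue(checker.is_expired(started_at=50.0))

    def test_fresh_session(self):
        fake_clock = mock.Mock()
        fake_clock.now.return_value = 100.0
        checker = SessionChecker(fake_clock, timeout_s=30.0)
        self.assertFalse(checker.is_expired(started_at=90.0))
```

Without the injected clock, the only way to test expiry would be to actually wait out the timeout - exactly the slow feedback loop hackability tries to avoid.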
Readable Code and Code Review
At Google, “readability” is a special committer status that is granted per language (C++, Go, Java and so on). It means that a person not only knows the language and its culture and idioms well, but also can write clean, well tested and well structured code. Readability literally means that you’re a guardian of Google’s code base and should push back on hacky and ugly code. The use of a style guide enforces consistency, and code review (where at least one person with readability must approve) ensures the code upholds high quality. Engineers must take care to not depend too much on “review buddies” here but really make sure to pull in the person that can give the best feedback.
Requiring code reviews naturally results in small changes, as reviewers often get grumpy if you dump huge changelists in their lap (at least if reviewers are somewhat fast to respond, which they should be). This is a good thing, since small changes are less risky and are easy to roll back. Furthermore, code review is good for knowledge sharing. You can also do pair programming if your team prefers that (a pair-programmed change is considered reviewed and can be submitted when both engineers are happy). There are multiple open-source review tools out there, such as Gerrit.
Nice, clean code is great for hackability, since you don’t need to spend time to unwind that nasty pointer hack in your head before making your changes. How do you make all this happen in practice? Put together workshops on, say, the SOLID principles, unit testing, or concurrency to encourage developers to learn. Spread knowledge through code review, pair programming and mentoring (such as with the Readability concept). You can’t just mandate higher code quality; it takes a lot of work, effort and consistency.
Presubmit Testing and Lint
Consistently formatted source code aids hackability. You can scan code faster if its formatting is consistent. Automated tooling also aids hackability. It really doesn’t make sense to waste any time on formatting source code by hand. You should be using tools like gofmt, clang-format, etc. If the patch isn’t formatted properly, you should see something like this (example from Chrome):
$ git cl upload
Error: the media/audio directory requires formatting. Please run
git cl format media/audio.
Source formatting isn’t the only thing to check. In fact, you should check pretty much anything you have as a rule in your project. Should other modules not depend on the internals of your modules? Enforce it with a check. Are there already inappropriate dependencies in your project? Whitelist the existing ones for now, but at least block new bad dependencies from forming. Should our app work on Android 16 phones and newer? Add linting, so we don’t use level 17+ APIs without gating at runtime. Should your project’s VHDL code always place-and-route cleanly on a particular brand of FPGA? Invoke the layout tool in your presubmit and stop the submit if the layout process fails.
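As a sketch of such a dependency check, the following presubmit script enforces a hypothetical layering rule (nothing under ui/ may import from db/internal/), with existing offenders whitelisted so only new violations block a submit. The paths, module names and rule are all invented for this example:

```python
"""Sketch of a presubmit check that blocks *new* bad dependencies.

Hypothetical rule: files under ui/ must not import db.internal.
Known offenders are whitelisted until they can be cleaned up."""
import re

# Existing violations: tolerated for now, but no new ones may appear.
WHITELIST = {"ui/legacy_report.py"}

BAD_IMPORT = re.compile(r"^\s*(from|import)\s+db\.internal\b")


def check_files(paths):
    """Return a list of violation messages for the changed files."""
    violations = []
    for path in paths:
        if not path.startswith("ui/") or path in WHITELIST:
            continue  # rule only applies to non-whitelisted ui/ files
        with open(path) as f:
            for lineno, line in enumerate(f, start=1):
                if BAD_IMPORT.match(line):
                    violations.append(
                        f"{path}:{lineno}: ui/ must not depend on db/internal/"
                    )
    return violations
```

A presubmit hook would run `check_files` over the changed files and fail the submit if the returned list is non-empty.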
Presubmit is the most valuable real estate for aiding hackability. You have limited space in your presubmit, but you can get tremendous value out of it if you put the right things there. You should stop all obvious errors here.
It aids hackability to have all this tooling, so you don’t have to waste time going back to fix things you’ve broken for other developers. Remember you need to maintain the presubmit well; it’s not hackability to have a slow, overbearing or buggy presubmit. Having a good presubmit can make it tremendously more pleasant to work on a project. We’re going to talk more in later articles on how to build infrastructure for submit queues and presubmit.
Single Branch And Reducing Risk
Having a single branch for everything, and putting risky new changes behind feature flags, aids hackability since branches and forks often amass tremendous risk when it’s time to merge them. Single branches smooth out the risk. Furthermore, running all your tests on many branches is expensive. However, a single branch can have negative effects on hackability if Team A depends on a library from Team B and gets broken by Team B a lot. Having some kind of stabilization on Team B’s software might be a good idea there. This article covers such situations, and how to integrate often with your dependencies to reduce the risk that one of them will break you.
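A minimal sketch of the feature-flag idea, with hypothetical names: the risky implementation ships on the single branch but stays dark until the flag is flipped, and rolling back is just flipping it off again rather than reverting a branch merge.

```python
# Minimal feature-flag sketch for trunk-based development.
# The flag name and the ranking functions are invented for illustration.

FLAGS = {
    # Flip to True to ramp up the new code; flip back to roll back.
    "use_new_ranker": False,
}


def rank_results_old(results):
    """Stable, proven implementation."""
    return sorted(results)


def rank_results_new(results):
    """Risky new implementation; lives behind the flag until proven."""
    return sorted(results, reverse=True)


def rank_results(results):
    """Call site checks the flag, so both paths coexist on one branch."""
    if FLAGS["use_new_ranker"]:
        return rank_results_new(results)
    return rank_results_old(results)
```

Both code paths are built and tested continuously on the same branch, so there is never a painful merge, and a bad rollout is undone by a config change instead of a code change.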
Loose Coupling and Testability
Tightly coupled code is terrible for hackability. To take the most ridiculous example I know: I once heard of a computer game where a developer changed a ballistics algorithm and broke the game’s chat. That’s hilarious, but hardly intuitive for the poor developer that made the change. A hallmark of loosely coupled code is that it’s upfront about its dependencies and behavior and is easy to modify and move around.
Loose coupling, coherence and so on is really about design and architecture and is notoriously hard to measure. It really takes experience. One of the best ways to convey such experience is through code review, which we’ve already mentioned. Education on the SOLID principles, rules of thumb such as tell-don’t-ask, discussions about anti-patterns and code smells are all good here. Again, it’s hard to build tooling for this. You could write a presubmit check that forbids methods longer than 20 lines or cyclomatic complexity over 30, but that’s probably shooting yourself in the foot. Developers would consider that overbearing rather than a helpful assist.
SETIs at Google are expected to give input on a product’s testability. A few well-placed test hooks in your product can enable tremendously powerful testing, such as serving mock content for apps (this enables you to meaningfully test app UI without contacting your real servers, for instance). Testability can also have an influence on architecture. For instance, it’s a testability problem if your servers are built like a huge monolith that is slow to build and start, or if it can’t boot on localhost without calling external services. We’ll cover this in the next article.
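A sketch of such a test hook, with illustrative names only: the app pulls content through a small interface, so a test can install a mock source and exercise the rendering logic without contacting any real server.

```python
import urllib.request


class ContentSource:
    """Test hook: the app fetches content only through this interface."""

    def fetch(self, url: str) -> str:
        raise NotImplementedError


class HttpContentSource(ContentSource):
    """Production implementation: talks to real servers."""

    def fetch(self, url: str) -> str:
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode()


class MockContentSource(ContentSource):
    """Test implementation: serves canned responses from a dict."""

    def __init__(self, canned: dict):
        self._canned = canned

    def fetch(self, url: str) -> str:
        return self._canned.get(url, "")


class App:
    def __init__(self, source: ContentSource):
        self._source = source

    def render_headline(self) -> str:
        # UI logic under test; it neither knows nor cares where
        # the content came from.
        return self._source.fetch("https://example.com/headline").upper()
```

In production the app is constructed with `HttpContentSource`; in tests, with `MockContentSource`. That one seam is what makes meaningful UI testing possible without live backends.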
Aggressively Reduce Technical Debt
It’s quite easy to add a lot of code and dependencies and call it a day when the software works. New projects can do this without many problems, but as the project becomes older it becomes a “legacy” project, weighed down by dependencies and excess code. Don’t end up there. It’s bad for hackability to have a slew of bug fixes stacked on top of unwise and obsolete decisions, and understanding and untangling the software becomes more difficult.
What constitutes technical debt varies by project and is something you need to learn from experience. It simply means the software isn’t in optimal form. Some types of technical debt are easy to classify, such as dead code and barely-used dependencies. Some types are harder to identify, such as when the architecture of the project has grown unfit to the task from changing requirements. We can’t use tooling to help with the latter, but we can with the former.
I already mentioned that dependency enforcement can go a long way toward keeping people honest. It helps make sure people are making the appropriate trade-offs instead of just slapping on a new dependency, and it requires them to explain to a fellow engineer when they want to override a dependency rule. This can prevent unhealthy dependencies like circular dependencies, abstract modules depending on concrete modules, or modules depending on the internals of other modules.
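As a sketch of what such enforcement can catch, here is a small depth-first search over a module dependency graph that reports one circular dependency if any exists. The graph and module names are made up for illustration:

```python
def find_cycle(graph):
    """Return one dependency cycle as a list of modules, or None.

    `graph` maps each module to the list of modules it depends on,
    e.g. {"ui": ["core"], "core": ["db"], "db": []}.
    """
    visiting, done = set(), set()

    def dfs(node, path):
        visiting.add(node)
        path.append(node)
        for dep in graph.get(node, []):
            if dep in visiting:
                # Back edge found: slice out the cycle and close it.
                return path[path.index(dep):] + [dep]
            if dep not in done:
                cycle = dfs(dep, path)
                if cycle:
                    return cycle
        visiting.discard(node)
        done.add(node)
        path.pop()
        return None

    for node in graph:
        if node not in done:
            cycle = dfs(node, [])
            if cycle:
                return cycle
    return None
```

Running a check like this in presubmit means a circular dependency is rejected with an explicit cycle in the error message, instead of being discovered months later when someone tries to untangle the modules.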
There are various tools available for visualizing dependency graphs as well. You can use these to get a grip on your current situation and start cleaning up dependencies. If you have a huge dependency you only use a small part of, maybe you can replace it with something simpler. If an old part of your app has inappropriate dependencies and other problems, maybe it’s time to rewrite that part.
The next article will be on Pillar 2: Debuggability.
As we gear up for today’s live Ask Me Anything, we’re poring through the questions submitted about the distinctions, overlaps, and best practices to build high-performing organizations with devops and SRE teams. The ones below represent only a handful of what IT professionals have asked.
Questions like these will be answered on the spot by a panel of industry experts including Google SRE Liz Fong-Jones, Chef CTO Adam Jacob, Honeycomb Co-founder and Engineer Charity Majors, and Catchpoint Performance Engineer Andrew Smirnov. (There’s still time to register for the live session – just click here. You can also get the recording through this link.)
Boundaries and role definitions inspired a number of questions as people look for guidance on specific responsibilities:
- Since SREs are engineers, are they expected to directly modify product code? What about when the SRE team is “embedded” with the product development team?
- Are build and release engineering part of DevOps?
- Where does programming (delivery team’s work) end, and where does SRE begin?
Other questions focus on IT and development environments, and how they affect team dynamics and processes:
- How can you overcome cultural bias in which Ops is viewed as second class compared to development or business departments?
- How do you introduce DevOps culture in high-context, hierarchical, and authoritarian cultures, such as those found in Korea and Japan?
- How is software development process different when moving from product (like appliance) to a cloud service?
- How does the delivery pipeline and software versioning have to be changed in order to migrate smoothly? What about building both appliance and cloud service from single code base?
The general range of questions was remarkable, from broad, philosophical issues to specific, administrative topics:
- How can we prevent “you build it, you own it” from becoming “jack of all trades, master of none?”
- How do you manage Windows administrative privileges and standards in a DevOps environment?
We’ll detail the experts’ responses to questions that sparked interest and debate in a follow-up article next week. Until then, we can’t wait to hear the discussion that “devops vs. SREs” touches off.
There are many reasons why Dynatrace is the APM market leader. Our unified digital performance monitoring platform helps more than 8,000 organizations optimize customer experience, modernize operations, and accelerate innovation across cloud and on-premises environments. We’ve even revolutionized the way that businesses manage the operational complexity of today’s highly dynamic environments with the introduction of the world’s first AI-assisted monitoring solution.
It’s true that Gartner has recognized Dynatrace as a Magic Quadrant leader for 6 years in a row, but we’re most proud of the ways that we’ve helped our customers overcome their digital-performance challenges and grow their businesses. Nothing shows off the value of Dynatrace better than customer testimonials.
Here are just some of the reasons that our customers love what we do.Dynatrace works out of the box
“Coming from New Relic we could be easily convinced by the ease of setup and an amazing dashboard.”
— BARBRI (USA)
“Dynatrace is the best tool for monitoring our fully Dockerized application stack. Out-of-the-box it offered deep insights into our hosts, Docker containers, and the services that they provide.”
— Axel Springer Ideas (Germany)
“When we saw how easy it would be to integrate Dynatrace into our AWS environment, we decided to give it a try.”
— Lemniscus (Germany)
“Dynatrace’s ease of deployment and its auto-detection capabilities that reduce installation effort to near zero really impressed us. We got meaningful alerts right away.”
— Ofertia (Spain)
“We’re intrigued by its capability to work almost out of the box as well as being able to monitor system aspects as well as application performance and user experience.”
— Sofico (Belgium)
“We chose Dynatrace over New Relic and other companies because it allowed us to identify and get in front of performance issues more quickly and efficiently. With Dynatrace, we can be more agile, proactive, and address issues before they ever reach our customers.”
— ZoomInfo (USA)
“Instead of waiting for a customer to call in and report a problem, we can now see where the problems and slow applications occur and fix them before they affect the customers.”
— Stratsys (Sweden)
“Since changing over from New Relic, we have been able to take a more proactive, less reactive approach to our decision making, which takes the guesswork out of fixing issues.”
— Norwall PowerSystems (USA)
“Dynatrace gives us the ability to catch any problems before they happen as we continue to improve our site.”
— Lucky Vitamin (USA)
“We started with Nagios and similar tools. The problems with both solutions were the same: Adding new hosts and adjusting monitoring as well as defining proper thresholds for alerts was always an issue. Wrong thresholds in too many cases led to alert spamming. As if this was not bad enough it also resulted in us missing the really important problems.”
— CHIP Digital (Germany)
“After deploying a new release, we spent over 30 hours trying to find a performance problem. Once we found the issue, it only took us 30 minutes to fix it. With Dynatrace, finding a similar problem would only take us minutes to identify, saving us precious time and resources.”
— ZoomInfo (USA)
“When we first implemented Dynatrace, we saw immediate and impressive benefits. No other APM product gives you the ability to address issues as quickly as Dynatrace does.”
—McGraw-Hill Education (USA)
“Now, none of us have to waste time going through logs to remediate issues. We just look at the dashboard, drill down and get the answer.”
— Woodmen of the World (USA)
“A lot of other monitoring tools are great, but they are too noisy. We don’t have time for noise. Dynatrace tells us the problem and points to the root cause fast.”
— III Digital Rock Studios (USA)
“It is obviously designed to help identify problems rather than overwhelm us with metrics.”
— Usersnap (Austria)
“We were using NewRelic, but we never got the full picture of our production environment.”
— BARBRI (USA)
“Our software stack looks easy on the whiteboard, but the reality is much more complex. Smartscape helps us to get real-time visibility into the health of our systems and show where we need to optimize.”
— mediaDESK (Austria)
“For monitoring I often went with AppDynamics and for pure local machine code-level insight I used tools like JProfiler. Often I had to combine them with other tools to get the full picture. Now I only need to check the Dynatrace mobile app.”
— Lemniscus (Germany)
“Consolidation and migration is a key part of our daily work. Understanding the dependencies of our application is a prerequisite for us. Dynatrace is the only solution that can provide this.”
— CHIP Digital (Germany)
“The dashboard kicks ass and what they call ‘Smartscape’ provides an excellent overview of system components. The shortcuts to response time analysis for the slowest percentile definitely comes in handy.”
— Sofico (Belgium)
“We want to have the right tool. Instead of using separate tools for web monitoring, application performance management and server monitoring we now have a single all-in-one solution, Dynatrace.”
— Channel IQ (USA)
“We needed visibility of the front-end too, but New Relic Browser was too expensive. We decided to go with Dynatrace because it provides better full-stack monitoring and proactive alarms at a more affordable price.”
— Bluesoft (Brazil)
“Having reassessed the market, we quickly realized that Dynatrace was the only provider able to offer the level of visibility that we needed.”
— Bond International Software (UK)
“If something isn’t the way it is supposed to be, our DevOps Engineers have the full picture within one tool. We get to the root cause much quicker and only involve team members that are actually needed for fixing.”
— Planon (USA)
“We wanted a tool that could monitor our infrastructure, application, and real user experience—and provide deep dives, too. Not only is Dynatrace easy to use with its simple user interface, it was painless to install and begin monitoring our environment.”
— Document Processing Solutions (DPS) (USA)
“I really like the support I’ve received from Dynatrace on several occasions—responsive, honest, accurate.”
— Corporate Value Metrics (USA)
“The guys at Dynatrace are really responsive. . . . Privacy and data security is a major issue for our customers, including world market leaders in security critical industries. Dynatrace’s chief software architect addressed all our security concerns personally in a comprehensive call.”
— Planon (USA)
“The Dynatrace guys were great to work with, and pricing turned to be great value for money.”
— BARBRI (USA)
“Dynatrace’s expert leadership is off the charts. They’ve met our expectations and delivered great value to my organization. They’ve always gone above and beyond what was asked of them and more.”
—Regions Bank (USA)
“In the past we partnered with other monitoring vendors. Dynatrace is special in that it not only is very easy to do joint business and drive additional revenue but also made us part of their ecosystem connecting us with their user base.”
— FCamara (Brazil)
As you can see, our customers love Dynatrace for a number of reasons. It’s our customers who have made Dynatrace #1 in APM market share for three years in a row. So if you’re looking for actionable insights to master complexity, gain operational agility, and grow revenue by delivering amazing digital customer experiences, join our customers and give Dynatrace a try.
Dynatrace now supports AngularJS 2 applications. As the Angular team changed how bundling and initialization are done in AngularJS 2, we’ve responded with a modification to the Dynatrace AngularJS 2 initialization file that enables RUM user action detection in AngularJS 2 applications.
- Select Web applications from the left-hand navigation menu.
- Select your AngularJS 2 application.
- Click the Browse (…) button.
- Select Edit.
- Select XHR (Ajax) detection.
You can now use the global Time frame selector to filter all Hosts monitoring data based on a specific analysis time frame.
With this new option, you can explore host performance over longer time intervals. This can be particularly handy during troubleshooting or in identifying resource usage patterns. For example, if you select Last 30 days from the Time frame selector (upper-right corner in the image below), the Hosts tile indicates the total number of hosts (14), and the total number of hosts that were either offline or unhealthy (2) during the selected time frame.
Click the Hosts tile to view the Hosts page, which includes a complete list of all the hosts detected in your environment during the selected time frame. Host State (offline, running, or shutdown) and useful host metrics are displayed here in a sortable view.
All charts on individual Host pages reflect the time-frame selection, including the Availability chart and the Processes list.
Note that the Problems, Logs, and Events sections don’t reflect selections made using the Time frame selector—they still reflect only the last 72 hours, regardless of the selected time frame. These sections will be enhanced soon to reflect changes made using the Time frame selector, so stay tuned for updates.
The All hosts page behaves slightly differently when custom time frames are selected using the Time frame selector’s Previous/Next navigation buttons. Rather than showing the most recent values, the values reflect the average values over the selected time frame. Also, State information is replaced with Availability insights (note the Availability column header below).
The post Time frame selector now filters all Hosts monitoring data appeared first on #monitoringlife.