While sitting in a restaurant, I saw that the closed captioning on the sports program was frequently emitting a string of random characters in the speech:
Forensically speaking, we could assume that this bug occurs in one of the following places:
- The software transcribing the speech to text. That is, when the software encounters a certain condition, it puts a cartoon curse word into the data.
- The network transmitting the information. That is, the transmission of the data introduces garbage.
- The device displaying the transmitted information. That is, the television or satellite box that introduces the captions into the picture inserts the junk every two lines or so.
Okay, I’ll grant you a fourth option: that the broadcasters were actually cursing that much. However, given that the FCC has not been announcing fines daily, I’m willing to say that it’s nonzero, but unlikely.
The beauty of a defect that could occur almost anywhere, between disparate parts of the product and across different teams and technologies, is that it could ultimately be nobody’s fault. Well, if you ask one of the teams, it’s one of the other teams’ fault.
You know, a little something squirrelly happens, you log a defect, and the server, interface, and design teams spend megabytes reassigning the defect to each other and disclaiming responsibility. It drives me nuts.
So what do you do? You find a product owner or someone who’ll take charge of it and pursue it across fiefdoms or who’ll put the screws to the varied factions until it gets fixed.
Because everybody’s got something they’d rather be working on than somebody else’s problem. Even if it’s everybody’s problem.
At least, I hope this is the result of the screen size being different in production than it was in the spec.
Otherwise, the implication would be that the interface was not tested.
Remember, when you’re testing, that the spec and requirements are merely suggestions, and go afield of your testing matrix as often as you can.
I feel like today’s a good day to share a few stories about my first few months at Microsoft, and the (very) small part I played in shipping Windows 95.
My start at Microsoft is a story on its own, and probably worth recapping here in an abbreviated form. I started at Microsoft in January 1995 as a contractor testing networking components for the Japanese, Chinese, and Korean versions of Windows 95. I knew some programming and even a bit of Japanese (I later became almost proficient, but have forgotten a lot of it now). I also knew, for better or for worse, a lot about NetWare and about hardware troubleshooting, and that got me in the door (and got me hired full time five months later).
Other than confirming that mainline functionality (including upgrade paths) was correct, there were two big parts of my job that were unique to testing CKJ (Chinese, Korean, Japanese) versions of Windows. The first was that at the time, there were a dozen or so LAN cards (this was long before networking was integrated onto a motherboard) that were unique to Japan, and I was (solely) responsible for ensuring these cards worked across a variety of scenarios (upgrades from Windows, upgrades from LanMan, clean installs, NetWare support, protocol support, etc.). One interesting anecdote from this work was that I found that one of the cards had a bug in its configuration file causing it to not work in one of the upgrade scenarios. Given the time it typically took to go to the manufacturer to make a fix and get it back, we decided to make the fix on our end. Because I knew the fix (a one-liner), I made the change, checked it in, and that one-liner became the first line of “code” I wrote for a shipping product at Microsoft.
The other interesting part of testing CKJ Windows was that Windows 95 was not Unicode; it was a mixed-byte system where some (most) characters were made up of two bytes. Each language had a reserved set of byte values, called lead bytes, indicating that the byte, along with the subsequent byte, formed a single double-byte character. Programs that parsed strings had to do so using functions aware of this mechanism, or they would fail. Often, we found UI where we could put the cursor in the middle of a character. The interesting twist for networking was that the second byte could be 0x7c (‘|’) or 0x5c (‘\’). As you can imagine, these characters caused a lot of havoc when used in computer names, network shares, paths, and files, and I found many bugs testing with these characters (more explanation on double-byte characters, along with one of my favorite related bugs is described here).
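A quick sketch of the hazard, using Python’s Shift-JIS codec: the katakana ソ is a single character whose encoded second byte is 0x5C, the Windows path separator, so any parser that scans bytes without lead-byte awareness cuts it in half.

```python
# Shift-JIS encodes the katakana "ソ" as the byte pair 0x83 0x5C.
# 0x5C is '\', the Windows path separator, so byte-wise parsers misfire.
encoded = "ソ".encode("shift_jis")
assert encoded == b"\x83\x5c"

# A naive scan that splits a path on '\' without checking lead bytes
# slices this single character in half:
pieces = encoded.split(b"\\")
assert pieces == [b"\x83", b""]  # the character is destroyed
```

The fix in the double-byte era was to walk strings with DBCS-aware functions that skip the trail byte after every lead byte, rather than treating each byte as a character.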
While I didn’t do nearly as much for the product as many people on the team who had worked on the product for years, I think I made an impact, and I learned so many things and learned from so many different people.
Victor Frankenstein’s creation speaking in Mary Shelley’s Frankenstein pretty much sums up my testing approach:
I will revenge my injuries: if I cannot inspire love, I will cause fear; and chiefly towards you, my arch-enemy, because my creator, do I swear inextinguishable hatred. Have a care: I will work at your destruction, nor finish until I desolate your heart so that you shall curse the hour of your birth.
Nineteenth century curses are the best.
Here’s a statement of work from Frankenstein himself later in the book:
My present situation was one in which all voluntary thought was swallowed up and lost. I was hurried away by fury; revenge alone endowed me with strength and composure; it moulded my feelings and allowed me to be calculating and calm, at periods when otherwise delirium or death would have been my portion.
It’s my great pleasure to announce the immediate availability of DaSpec v1.0, the first stable version ready for production use. DaSpec is an automation framework for Executable Specifications in Markdown. It can help you:
- Share information about planned features with non-technical stakeholders easily, and get actionable unambiguous feedback from them
- Ensure and document shared understanding of the planned software, making the definition of done stronger and more objective
- Document software features and APIs in a way that is easy to understand and maintain, so you can reduce the bus factor of your team and onboard new team members easily
- Make any kind of automated tests readable to non-technical team members and stakeholders
DaSpec helps teams achieve those benefits by validating human-readable documents against a piece of software, similar to tools such as FitNesse, Cucumber or Concordion. The major difference is that DaSpec works with Markdown, a great, intuitive format that is well supported by a large ecosystem of conversion, editing and processing tools. Run and play with the key examples in your browser now, without installing any software, to see what DaSpec could do for you.
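To make the idea concrete, here is a generic sketch of an executable specification, in plain Python rather than DaSpec’s actual syntax: a human-readable Markdown table of examples is parsed, and each row is checked against the code under test. The table format, the runner, and the `square` function are my own illustration, not the tool’s API.

```python
# Generic illustration of the executable-specification idea
# (not DaSpec's real syntax or API).
SPEC = """
| input | expected |
|-------|----------|
| 2     | 4        |
| 5     | 25       |
"""

def square(n):
    # Hypothetical system under test.
    return n * n

def run_spec(markdown, fn):
    # Skip the header and separator rows, then verify each example row.
    rows = [line.strip("|").split("|")
            for line in markdown.strip().splitlines()[2:]]
    return [fn(int(cells[0])) == int(cells[1]) for cells in rows]

# Each True means the documented example still holds against the code.
print(run_spec(SPEC, square))  # -> [True, True]
```

When an example row stops matching the implementation, the document itself flags the drift, which is the outdated-documentation problem the post describes.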
DaSpec primarily targets teams practising Behaviour Driven Development, Specification by Example, or ATDD, and generally those running short, frequent delivery cycles with a heavy dependency on test automation. It can, however, be useful to anyone looking to reduce the cost of discovering outdated information in documentation and tests.
For more information on what’s new in version 1.0, check out the release notes.
My dev teams are using Git Flow. A suite of check-in tests runs in our Continuous Integration upon merge-to-dev. The following is the model I think we should be using (and in some cases we are):
- A product programmer creates a feature branch and starts writing FeatureA.
- A test programmer creates a feature branch and starts writing the automated tests for FeatureA.
- The product programmer merges their feature branch to the dev branch. The CI tests execute but there are no FeatureA tests yet.
- The test programmer gets the latest code from the dev branch and completes the test code for FeatureA.
- The test programmer merges their test code to the dev branch, which kicks off the tests that were just checked in. They should pass because the test programmer ran them locally prior to merge to dev.
Oversized ViewState tokens are a common performance issue I come across while helping customers. I’ve found some information on how to get performance improvements by modifying ViewState tokens.
What is a ViewState token? It is ASP.NET’s way to ensure the state of the page elements sent and received match. It is an encoded string sent out to the user which gets sent back to your server in a POST request. The main problem with this kind of token is that it needs to represent every form element on the page. This gets large when complex elements such as drop-down menus are in the form. Countries, companies, customers, or other database tables with many entries are good drop-down examples. In some cases it can double the size of your page. The largest ViewState I’ve seen was 300,000 characters long; the generated HTML for that page was at least 600,000 characters. That’s a lot of response text.

Performance Impacts
As the ViewState grows larger, it affects performance in the following ways:
- Increased CPU cycles to serialize and to deserialize the ViewState.
- Pages take longer to download because they are larger.
- Large ViewStates can impact the efficiency of garbage collection.
According to one source I found:
ViewStates work best when serializing basic types such as strings, integers, and Booleans. Objects such as arrays, ArrayLists, and Hashtables are also good with those basic types. When you want to store a non-basic type, ASP.NET tries to use the associated type converter. If it cannot find one, it uses the expensive binary serializer. The size of the ViewState is proportional to the size of the stored objects. Avoid storing large objects.

Improvement Options
From much of what I read, there are only three options:
- Remove ViewStates when they’re not needed.
- Remove any unnecessary elements in the form.
- Compress the ViewState to reduce the data transferred.
I rarely see ViewStates used that are not needed, and in some cases a few elements can be removed from the form. Compression is often the best improvement for the end-user experience. Keep in mind that compressing and decompressing the ViewState causes more CPU work. This could have a poor impact on performance at scale.
Compress a ViewState whenever it is above a particular size:
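A minimal sketch of the “compress only above a threshold” idea, in illustrative Python rather than ASP.NET’s actual page-state API (in ASP.NET you would typically override the page-state persistence methods in C#). The cutoff value and the one-byte compression marker here are my own assumptions:

```python
import base64
import gzip

THRESHOLD = 1024  # assumed cutoff in bytes; tune per application

def pack_state(raw: bytes) -> str:
    """Compress the serialized state only when it exceeds THRESHOLD.
    A leading marker byte records whether compression was applied."""
    if len(raw) > THRESHOLD:
        payload = b"\x01" + gzip.compress(raw)
    else:
        payload = b"\x00" + raw
    return base64.b64encode(payload).decode("ascii")

def unpack_state(token: str) -> bytes:
    """Reverse pack_state: decode, check the marker, decompress if needed."""
    payload = base64.b64decode(token)
    body = payload[1:]
    return gzip.decompress(body) if payload[:1] == b"\x01" else body
```

The threshold keeps the CPU cost of compression off the common case of small states, while the marker byte lets the server handle both forms transparently on postback.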
More suggestions on compressing the ViewState:
As the child of two former United States Marines, I already know plenty of naughty strings; however, a client pointed me to this resource on GitHub: Big List of Naughty Strings.
It is a pretty comprehensive list that includes a couple of things not in my standard bag of tricks.
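A sketch of how such a list gets used in practice: feed each hostile string through the code under test and assert the invariants that should hold regardless of input. The sample strings below are hand-picked in the spirit of the list (not entries copied from it), and `normalize_name` is a hypothetical function under test.

```python
# Hand-picked sample in the spirit of the Big List of Naughty Strings.
NAUGHTY = [
    "",                            # empty string
    "   ",                         # whitespace only
    "' OR '1'='1",                 # classic SQL injection probe
    "<script>alert(1)</script>",   # HTML/JS injection probe
    "ソ表",                         # Shift-JIS chars whose 2nd byte is 0x5C
    "𝕳𝖊𝖑𝖑𝖔",                   # astral-plane code points
]

def normalize_name(name: str) -> str:
    # Hypothetical function under test: trims and length-limits input.
    return name.strip()[:64]

# Invariants that should survive any input:
for s in NAUGHTY:
    out = normalize_name(s)
    assert len(out) <= 64
    assert out == out.strip()
```

The point is less the specific strings than the habit: every input boundary gets the whole list, not just the happy-path values from the spec.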
Recrudescence is the revival of material or behavior that had previously been stabilized, settled, or diminished. In medicine, it is the recurrence of symptoms in a patient whose bloodstream infection has previously been at such a low level as not to be clinically demonstrable or cause symptoms, or the reappearance of a disease after it has been quiescent.
I don’t normally mention the mouthfeel of words, but this one has it.
I’m looking forward to using this when reopening bugs whose behavior recurs.
Also note I plan to mispronounce it as re-CRUD-escence.
Acceptance Criteria. When User Stories have Acceptance Criteria (or Acceptance Tests), they can help us plan our exploratory and automated testing. But they can only go so far.
Four distinct Acceptance Criteria do not dictate four distinct test cases, automated or manual.
Here are three flavors of Acceptance Criteria abuse I’ve seen:
- Skilled testers use Acceptance Criteria as a warm-up, a means of getting better test ideas for deeper and wider coverage. The better test ideas are what need to be captured (by the tester) in the test documentation...not the Acceptance Criteria. The Acceptance Criteria is already captured, right? Don’t recapture it (see below). More importantly, try not to stop testing just because the Acceptance Criteria passes. Now that you’ve interacted with the product-under-test, what else can you think of?
- The worst kind of testing is when testers copy Acceptance Criteria from User Stories, paste it into a test case management tool, and resolve each to Pass/Fail. Why did you copy it? If you must resolve them to Pass/Fail, why not just write “Pass” or “Fail” next to the Acceptance Criteria in the User Story? Otherwise you have two sources: someone is going to revise the Acceptance Criteria in the User Story, and the copy in your test case management tool is going to get stale.
- You don’t need to visually indicate that each of your distinct Acceptance Criteria has Passed or Failed. Your Agile team probably has a definition of “Done” that includes all Acceptance Criteria passing; if the User Story is marked Done, it means all the Acceptance Criteria passed. We will never open a completed User Story and ask, “Which Acceptance Criteria passed or failed?”