On the front page of the September 9, 2014, Wall Street Journal, we find that GE has exited the kitchen with the sale of its appliance business:
However, on page B3 the same day, the drop quote in an article would seem to indicate otherwise:
Of course, General Electric is not buying the food company Annie’s; the headline makes clear that the General in this case is General Mills (stock symbol: GIS).
But there’s nothing in the drop quote to indicate something is wrong within its own context. Maybe General Electric often pays premiums like that during an acquisition. Maybe the copy editor or whoever did this drop quote finished the GE story from page 1 just minutes before working on the General Mills story.
However, we’ve got to retain context when testing and proofreading.
Where does this come into play in testing?
The foremost example in my mind is when we’re doing things to trigger error conditions to make sure that an error message displays. It’s possible that the system will throw up the wrong error message and we’ll miss it. I once wrote automated tests that triggered error conditions and parsed the error message (mostly to make sure an error message applicable to the screen and operation displayed). However, I did not write them smart enough to compare the error message that displayed to the error message expected. So when the application started failing by displaying the wrong message for the occasion, the tests didn’t catch it.
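The lesson I took away: assert the exact message expected for the operation, not merely that some message displayed. A minimal sketch of the difference, with invented messages and helper names:

```python
# The error message the spec expects for this screen and operation
# (invented for illustration).
EXPECTED_MESSAGE = "Quantity must be a positive integer."

def weak_check(displayed):
    # What my tests did: confirm that *an* error message displayed.
    return bool(displayed)

def strong_check(displayed):
    # What they should have done: compare against the expected message.
    return displayed == EXPECTED_MESSAGE

# The failure mode described above: an error displays, but the wrong one.
displayed = "Unable to contact the server."
assert weak_check(displayed)        # the bug slips through
assert not strong_check(displayed)  # the bug is caught
```

The weak version keeps passing right up until someone notices the wrong message in production.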
So you’ve got to remember to see the forest and the trees–along with the underbrush, the soil, the other flora, and the carnivorous fauna–when you’re testing.
As mobile app usage increases, load testing mobile apps is becoming a key part of the software development lifecycle to make sure your application is ready for traffic. If your mobile app interacts with your application server via REST or SOAP API calls, then LoadStorm PRO can load test mobile app servers like yours.
LoadStorm PRO uses HTTP archive (HAR) recordings to simulate traffic, and we normally create these recordings using a browser’s developer tools on the network tab to record all requests sent to the target server. In this article, I’m going to introduce you to two ways to make recordings of user traffic that can be used in LoadStorm. The first involves packet capturing from your mobile device, and the second involves a Chrome app called Postman used in combination with the Chrome developer tools.

Simulate traffic by using Packet Capturing
If you like to keep things simple, then this method should save you the trouble of manually creating requests as shown in the next method. You’ll need a packet capturing mobile app (such as tPacketCapture) that stores requests in the PCAP file format, and allows you to share the PCAP file. The PCAP file generated by the app can then be converted to a HAR file to be uploaded into LoadStorm for use in a load test. To do this, you can use PCAP Web Performance Analyzer for free without needing any setup, or you can install your own converter from the pcap2har project on GitHub.
For Android devices, this method works as follows:
- Install tPacketCapture or another packet-capturing mobile app.
- Close any other apps that you have running to avoid unnecessary packets.
- Open the packet capturing app and click the Capture button.
- Open the mobile app you wish to record, and begin emulating user behavior.
- When you’re done emulating user behavior, swipe your notifications area open to click the “VPN is activated by tPack..” message.
- Click the Disconnect button to stop capturing packets.
- Switch back to the tPacketCapture app to open the File list tab.
- Select the PCAP file you just created, and share it using email (or method of your choice).
- Convert the PCAP into a HAR using the PCAP Web Performance Analyzer or your own pcap2har converter.
- Upload the HAR to LoadStorm.
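Before uploading, it can be worth sanity-checking the converted HAR to confirm the requests you expect actually survived the PCAP conversion. The `log.entries[].request` layout is standard HAR; the file name here is an assumption, so point it at your own conversion output:

```python
import json

def summarize_har(path):
    """Summarize a HAR file: request count, hosts contacted, methods used."""
    with open(path) as f:
        har = json.load(f)
    entries = har["log"]["entries"]  # standard HAR 1.2 layout
    urls = [e["request"]["url"] for e in entries]
    methods = [e["request"]["method"] for e in entries]
    return {"requests": len(entries),
            "hosts": sorted({u.split("/")[2] for u in urls}),
            "methods": sorted(set(methods))}

# Example (assumed file name from your pcap2har conversion):
# print(summarize_har("capture.har"))
```

If the host list contains only ad networks and analytics domains, the capture missed your app server and the recording should be redone.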
For iOS devices:
At this time, packet-capturing mobile apps are only offered on Android devices; Apple products do not support direct packet capture services. However, if you hook up your iOS device to a Mac via USB, then you can use OS X-supported software to capture the packets, as described on the Apple developer site.

Record requests made in Postman using Chrome developer tools
To make a recording with the Postman app, follow these steps:
- From the Chrome Web Store, you can install Postman by rickreation, and the Postman launcher.
- Click the Postman launcher icon at the top-right of the browser.
- Open the developer tools by right-clicking the page you’re on, and selecting Inspect Element.
- Switch to the Network tab in the developer tools and check the Preserve log box.
- Create your GET or POST request.
- Click the Send button and observe the POST request in your developer tools.
Note: Extra requests generated by Postman won’t appear in LoadStorm.
- Repeat steps 5 and 6 as needed to mimic user behavior.
- Right-click in the network log and choose Save as HAR with Content.
Making RESTful GET or POST requests will rely heavily on your knowledge of how your mobile app interacts with the application server. If you have a network log that shows the incoming requests from the mobile application, that can help simplify the reconstruction of those requests in the Postman app. Postman actually offers 11 methods for interacting with an application server, and of these we’ll typically only need to use GET and POST requests:
- GET – For a RESTful request to interact via a GET request, you’ll usually need to include some parameter(s) as a query string that tells the application server what you would like returned. For example, you could request to see all of the tweets on your Twitter timeline.
- POST – POSTs are used to give the target application new information, like adding a tweet to your Twitter account from your phone. The content that you would like to POST is sent via the request payload in one of three options: form-data, used for multipart forms; x-www-form-urlencoded, used for standard forms; and raw, used for a variety of content (JSON, XML, etc.).
In some cases you will also need to add some form of authorization token in the request headers to let the application server know you have the rights to GET or POST information. Postman offers a few options to assist with authorization settings (e.g., Basic Auth, Digest, and OAuth 1.0), but you can always manually input the authorization header if you know the name of the header, the value, and the format to send it in.
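The same request shapes can be sketched outside Postman with nothing but the Python standard library. The host, paths, parameters, and token below are invented for illustration; the point is where the data goes in each case:

```python
import urllib.parse
import urllib.request

BASE = "https://api.example.com"  # assumed application server

# GET: parameters travel in the query string.
params = urllib.parse.urlencode({"user": "bjnoggle", "count": 20})
get_req = urllib.request.Request(f"{BASE}/timeline?{params}", method="GET")

# POST: an x-www-form-urlencoded payload travels in the request body.
body = urllib.parse.urlencode({"status": "Load testing my app"}).encode()
post_req = urllib.request.Request(f"{BASE}/tweet", data=body, method="POST")
post_req.add_header("Content-Type", "application/x-www-form-urlencoded")

# A manually supplied authorization header, as you would add in Postman.
post_req.add_header("Authorization", "Bearer <your-token-here>")

# urllib.request.urlopen(post_req) would actually send it; here we only
# build the request objects to show their structure.
```

Whatever tool builds them, these are the requests your HAR recording needs to contain for the load test to exercise the right endpoints.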
Even though Postman is primarily designed to work with RESTful API calls, it can also work with SOAP.
To make SOAP requests follow these steps:
- Change the method to POST.
- Select the raw option for content type.
- Set the raw format to “text/xml”.
- Manually add the SOAP envelope, header, and body tags as required.
- Send requests as needed.
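The steps above amount to POSTing a hand-built XML envelope with a `text/xml` content type. A sketch of the same thing in Python, with an invented endpoint, namespace, and operation:

```python
import urllib.request

# A minimal SOAP 1.1-style envelope; the operation and namespace are
# invented for illustration.
envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header/>
  <soap:Body>
    <GetOrderStatus xmlns="http://example.com/orders">
      <OrderId>12345</OrderId>
    </GetOrderStatus>
  </soap:Body>
</soap:Envelope>"""

req = urllib.request.Request("https://api.example.com/orders",
                             data=envelope.encode("utf-8"),
                             method="POST")
req.add_header("Content-Type", "text/xml; charset=utf-8")
# urllib.request.urlopen(req) would send it; Postman does the equivalent
# when you POST raw text/xml content.
```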
Additional information about creating SOAP requests can be found on the Postman blog.
This video is a short guide on recording a HAR using Postman in combination with the Chrome developer tools.
Also check out Ruairi Browne’s Postman tutorial covering RESTful API GETs and POSTs to Twitter as well as OAuth 1.0 credentials.

Upload a Recording
To load test a mobile application using LoadStorm PRO, a HAR recording must be made to simulate the traffic. Once you’ve decided on a recording method, all you have to do is upload your HAR file into LoadStorm, and you’ll be on your way to load testing your app and ensuring end-user satisfaction. If you have questions or need assistance, please contact email@example.com, visit our learning center, or leave a comment below.
You’ve spent days wandering the cavernous halls of a convention center, trapped in windowless rooms, drinking too much coffee and talking yourself hoarse. Does anyone ever emerge from a conference as the organizers intended, feeling recharged with new ideas, contacts and energy?
New York City marketing executive Stefany Stanley does. Among conference organizers she is known as a savvy convention-goer, someone with a strategy for rising above the dreary rounds of networking and breakout sessions. Ms. Stanley says she has gained valuable contacts, ideas and insights from the 15 conferences she has attended in the past five years.
The article goes on with tips and tricks for maximizing your chances of meeting people to sell your services to, or people who might help you get a leg up, basically.
I must be doing it wrong; when I go to conferences, I go to attend the sessions and to learn what the speakers have to offer as to professional insight. Maybe I’ll meet someone I know off the QAternet or something, but I don’t count on it, and if I don’t, I don’t think that I’ve lost something.
Of course, I don’t go to conferences and conventions often enough to have my soul crushed, and I don’t think of them primarily as mass in-person sales cold calls, so I’m probably doing them wrong when I do go.
But maybe you’ll find the article useful.
I go to a dojo filled with positive, encouraging, uplifting people, from the kyoshi to the black belts to the other students. You want to talk about imposter syndrome, and I’ll explain how I feel when surrounded by nice people.
At any rate, members of the dojo often post motivational images to, well, motivate each other. And immediately, my mind works to find the condition where the assertion is not true. I must subvert it.
Your excuses get you 0% closer to your goals. Unless you’re writing a book about your excuses, I think.
I don’t know how my story will end, but nowhere in my text will it ever read… “I gave up.”
That, my friends, is the essence of the QA mindset: When presented with a proposition, you find some way to subvert or suborn it. Remember that the contradiction of “All are…” is not “Nothing is…” but “Some are not….” (See also the Aristotelian Square of Opposition.)
Plan your tests accordingly.
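In code terms, the square of opposition is just De Morgan’s law applied to test assertions: the negation of an all() claim is an any(not …) claim, not an all(not …) claim. A quick sketch:

```python
# "All test cases pass" -- the universal claim a stakeholder asserts.
results = [True, True, False, True]

all_pass = all(results)                  # "All are..."
some_fail = any(not r for r in results)  # "Some are not..." (contradiction)
none_pass = all(not r for r in results)  # "Nothing is..." (contrary)

# Disproving "all pass" requires only one failure, not universal failure.
assert not all_pass
assert some_fail
assert not none_pass
```

One counterexample is all it takes to subvert the motivational poster.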
Real-estate agents, better take out that red pen.
An analysis of listings priced at $1 million and up shows that “perfect” listings—written in full sentences without spelling or grammatical errors—sell three days faster and are 10% more likely to sell for more than their list price than listings overall.
On the flip side, listings riddled with technical errors—misspellings, incorrect homonyms, incomplete sentences, among others—log the most median days on the market before selling and have the lowest percentage of homes that sell over list price. The analysis, conducted by Redfin, a national real-estate brokerage, and Grammarly, an online proofreading application, examined spelling errors and other grammatical red flags in 106,850 luxury listings in 52 metro areas in 2013.
Think it applies only to real estate and not your product interface? Are you willing to take that gamble?
You’d better make sure your Web labels, error messages, and helpful text are grammatically correct, or you won’t be able to quantify how many people don’t use your software because they thought it was written by third graders. Because they won’t be your users.
(Yes, I know the story is nine months old, but I’m between contracts right now and have a little time to catch up on my newspapers for the last year.)
When teaching the importance of credibility, I always say, "Just look at current events to see how people have lost or gained credibility. You will find examples everywhere."
The issue of credibility came to mind in recent days as I have been watching the response unfold to the new revelation that Brian Williams admitted to incorrect reporting of an incident that occurred while embedded with troops in Iraq. Some might say Williams' account was mere embellishment of the event. Others, however, take a much stronger view and call it an outright lie. Williams has acknowledged the inaccuracy and issued an apology, which is always a good start to rebuild credibility.
In the case of Williams, the stakes are high since he is a well-respected news presenter on a major television network. This is also a good example of how credibility can rub off on the entire team. Williams' story may well cause many to stop watching the entire network for news reporting. People may ask, "How many other reporters embellish or hide the truth?"
Now other reports from Williams, such as his Hurricane Katrina coverage, have been called into question with reliable information from those who were there. Williams reported seeing a dead body floating face-down from his hotel in the French Quarter. The New Orleans Advocate states, "But the French Quarter, the original high ground of New Orleans, was not impacted by the floodwaters that overwhelmed the vast majority of the city."
I have a feeling this story will take on a life of its own. Now, there are accounts from the pilot of the helicopter saying they did take fire, just not RPG fire. That may help clarify things, but it adds confusion to the issue, which doesn't help rebuild credibility.
As a software tester, you may be tempted to overstate how many problems you have seen in a system or application, just to get people to pay attention to the fact there are problems. Don't succumb to that temptation. Instead, just show a really good example of a serious problem. If there are other similar issues, say so. As a tester, you can only report what you have seen. You can't predict what other problems may exist, even though you might be right.
In telling your testing story, the accurate, complete truth is all you need.
If you would like to read my three-part article series on How to Build, Destroy and Rebuild credibility, here are the links:
Building Your Credibility as a Tester
How to Destroy Your Credibility as a Tester
How to Rebuild Your Credibility as a Tester
I hope all this gives us all something to remember as we report what we see in testing!
I would love to hear your comments, questions and experiences.
And...I would like to wish my son, Ryan Rice, a happy 34th birthday today!
Have a great weekend,
This weekend the Seattle Seahawks and the New England Patriots battled to give fans quite possibly the strangest game ending ever. With a one-in-a-million catch, a turnover, and an all-out fist fight, this game will go down as one of the most interesting in recent Super Bowl history.

Super Bowl Site Performance Game
But while we may love football, we all know that this is only half of the competition. At $4.5 million apiece, Super Bowl commercials only get a few seconds to vie for our attention, beat the competition, and make a valuable impression. After previous notorious site crashes that resulted in harsh social media backlash, we wondered if the companies who were willing to spend so much for 30 seconds’ worth of airtime also invested in the site performance and scalability needed to handle the traffic those $4.5 million ads would drive.
We tested each advertiser’s site with 2 virtual users, requesting the home page every 30 seconds over the duration of the game. Here’s what we found:

Worst Super Bowl Site Performance: Error Spikes
Surprisingly, 27 of the 45 companies had errors on their home pages before Super Bowl ads even aired. During the tests, we saw several companies experience a spike in error rates for short periods of time.
These 5 companies experienced spikes in error rates of over 50% from the HTML requests.

The Fattest
The ever-increasing size of web pages today can be one of the main reasons behind slow load times and scalability issues. And, as mobile traffic increases, mobile users are becoming the biggest victims of page bloat, and risk a shock with the arrival of their phone bills. These were the 7 sites with the fattest home pages:
The Slowest – Peak Page Completion
Page load time is an extremely important metric when it comes to considering performance, and a recent study has shown that ad revenue increases as page load time improves. We saw several companies experience page completion time spikes during the test. Although the pages were completed, that doesn’t necessarily mean people stayed around to wait for the page to load.
The 10 highest spikes in page completion time

Best Super Bowl Site Performance: Error Free
Only 7 of the 45 companies experienced ZERO errors for the duration of the test. Great job!
On the opposite end of the spectrum, several Super Bowl sites seem to have realized the performance benefits of trimming down the size of their pages. These were the top 5 lightest pages:
While we definitely saw high peak completion times in our tests, there were several companies who saw successful average page load times. The 5 best average page load times came from these guys:
This year, the overall performance of the Super Bowl advertiser websites was hit or miss. While there appeared to be some companies who took time to prepare their site’s scalability, such as Esurance or the No More campaign, there seemed to be equally as many who allowed errors to remain on their home pages. This not only points to a lack of preparation for scalability, but also a lack of preparation for basic performance. This year, many companies didn’t wait until Super Bowl Sunday to release their ads, with about half of the ads released a week early. The lack of a hard deadline makes me wonder how there were still so many website performance issues overall. In the age of the internet, we’re seeing a shift in advertisement that relies heavily on social interaction on websites, with 50% of the 66 commercials aired using a Twitter hashtag. It could be that companies are relying on social media platforms to get business, rather than their own websites.
Over at Medium, they discuss how Polish typing customs, Microsoft Windows conventions, and user interface design combined to create a defect: The curious case of the disappearing Polish S:
A few weeks ago, someone reported this to us at Medium:
“I just started an article in Polish. I can type in every letter, except Ś. When I press the key for Ś, the letter just doesn’t appear. It only happens on Medium.”
This was odd. We don’t really special-case any language in any way, and even if we did… out of 32 Polish characters, why would this random one be the only one causing problems?
Turns out, it wasn’t so random. This is a story of how four incidental ingredients spanning decades (if not centuries) came together to cause the most curious of bugs, and how we fixed it.
It’s hard to test for the conditions to recreate this bug unless you’re Polish. It’s also a humbling example of how we’re going to miss things because of our (as yet) limited omniscience.
(Link via IDisposable tweet.)
The Testing on the Toilet (TotT) series was created in 2006 as a way to spread unit-testing knowledge across Google by posting flyers in bathroom stalls. It quickly became a part of Google culture and is still going strong today, with new episodes published every week and read in hundreds of bathrooms by thousands of engineers in Google offices across the world. Initially focused on content related to testing, TotT now covers a variety of technical topics, such as tips on writing cleaner code and ways to prevent security bugs.
While TotT episodes often have a big impact on many engineers across Google, until now we never did anything to formally thank authors for their contributions. To fix that, we decided to honor the most popular TotT episodes of 2014 by establishing the Testing on the Toilet Awards. The winners were chosen through a vote that was open to all Google engineers. The Google Testing Blog is proud to present the winners that were posted on this blog (there were two additional winners that weren’t posted on this blog since we only post testing-related TotT episodes).
And the winners are ...
Erik Kuefler: Test Behaviors, Not Methods and Don't Put Logic in Tests
Alex Eagle: Change-Detector Tests Considered Harmful
The authors of these episodes received their very own Flushy trophy, which they can proudly display on their desks.
(The logo on the trophy is the same one we put on the printed version of each TotT episode, which you can see by looking for the “printer-friendly version” link in the TotT blog posts).
Congratulations to the winners!
This book really does contain a set of rules for programmers to follow: The left pages have the rules in large font, and the right pages have the rules explained in a paragraph or two. The book focuses not only on programming best practices, but also on software development best practices, and these are much more applicable to modern programming than the pre-object-oriented programming lessons.
For example, first and foremost are the rules about making sure your program answers the users’ needs. Rules like:
- Fit your program to your users’ needs.
- Aim your program at the widest circle of users.
- Explain to your user how to use the program.
- Make it easy for the user to run the program.
Other rules cover interface design, such as Display results with pertinent messages, which is just as relevant now as it was when interfaces displayed only green or amber text.
Even the discussion of loops, variables, and breaking your program into sections has a sort of relevance because it discusses these things philosophically, at a high level, in a way that programming how-to books and online language tutorials do not.
It’s a quick read or browse; although it’s roughly 220 pages (still slim by modern $60 computer book standards), it’s really less than that, since the text is not densely packed on the pages, as described above. It’s worth the time for the insights, not only into the crystallized rules but also into the recognition that some software development problems and goals predate the Internet, which I am pretty sure some of our younger co-workers don’t know.
Books mentioned in this review:
“Fly By Night”, Rush
I just ended a twenty-six-month contract, and I’m excited to start something new. Which is TBD, but still exciting.
Super Bowl ads crashing websites isn’t a new story. But it is one that deserves a bit of attention about this time every year.
Back in 1999, Victoria’s Secret made a big splash with their Super Bowl ad. The ad was one of the very first ever to tie in TV and web. It promoted an online lingerie fashion show, and over a million viewers logged on to watch… crashing the site.
Since then, using Super Bowl ads to drive traffic to a website has become a very popular marketing technique. Many companies use the massive marketing power of a Super Bowl ad to drive viewers to their websites and take a specific action (signing up for some promotion, voting for a favorite team, etc.). The resulting massive spike in traffic that hits the websites should be very much expected. However, every year there are a handful of websites that crash, resulting in massive social media and PR backlash.
In 2013, a study by Yottaa found that more than 13 companies had Super Bowl ads that crashed their websites, including Coca-Cola, SodaStream, Calvin Klein, and Axe. Coca-Cola invited viewers to log on to CokeChase.com and vote for their favorite team, and Axe offered a sweepstakes to send someone to space. Both resulted in massive numbers heading to the websites only to find them crashed. The result? Viewers took to social media in droves:
One notable crash of Super Bowl 2014 was from Maserati. The ad was rumored to cost between $11 million and $17 million. It announced the new Maserati Ghibli and sent masses to the website MaseratiGhibli.us, which promptly crashed.
Load testing is critical for any website expecting a rush of traffic. Whether it is a rush from a major ad campaign or a launch, it is imperative that the website be able to handle the pressure of the traffic. The only way to know for sure that your site is prepared is to test it.
As a load testing provider, we see all of these examples as avoidable problems. Load testing allows companies to simulate a large volume of traffic hitting the website or application while monitoring the site's responses (response times, throughput, errors, etc.). Therefore, if the companies had load tested, it is quite reasonable that their web development teams would have found the performance bottlenecks, addressed them, and the crashes would never have happened.
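The core idea is small enough to sketch: fire concurrent requests and tally errors and timings. This is a toy illustration of what a load test measures, not how LoadStorm is implemented (real tools also ramp users up gradually and replay full scenarios):

```python
import concurrent.futures
import time

def load_test(request_fn, n_users, requests_per_user):
    """Fire requests from n_users concurrent workers and tally results."""
    def worker():
        results = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            try:
                request_fn()
                results.append(("ok", time.perf_counter() - start))
            except Exception:
                results.append(("error", time.perf_counter() - start))
        return results

    with concurrent.futures.ThreadPoolExecutor(max_workers=n_users) as pool:
        futures = [pool.submit(worker) for _ in range(n_users)]
        all_results = [r for f in futures for r in f.result()]

    errors = sum(1 for status, _ in all_results if status == "error")
    times = sorted(t for _, t in all_results)
    return {"requests": len(all_results),
            "errors": errors,
            "error_rate": errors / len(all_results),
            "median_seconds": times[len(times) // 2]}
```

In practice `request_fn` would fetch your home page; here it can be any callable, which also makes the harness easy to test with a stub.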
Will we see any big website crashes from Super Bowl ads this Sunday? Comment below if you have any guesses of which websites will fail and then check back next week as we analyze the hard data from our very own testing done on Super Bowl Sunday 2015!
The post Super Bowl Ads = Lots of Traffic. Are Sites Ready? appeared first on LoadStorm.
This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.
You have just finished refactoring some code without modifying its behavior. Then you run the tests before committing and… a bunch of unit tests are failing. While fixing the tests, you get a sense that you are wasting time by mechanically applying the same transformation to many tests. Maybe you introduced a parameter in a method, and now must update 100 callers of that method in tests to pass an empty string.
What does it look like to write tests mechanically? Here is an absurd but obvious way:
// Production code:
def abs(i: Int)
  return (i < 0) ? i * -1 : i

// Test code:
for (line: String in File(prod_source).read_lines())
  assert line.content equals "def abs(i: Int)"
  assert line.content equals "return (i < 0) ? i * -1 : i"
That test is clearly not useful: it contains an exact copy of the code under test and acts like a checksum. A correct or incorrect program is equally likely to pass a test that is a derivative of the code under test. No one is really writing tests like that, but how different is it from this next example?
// Production code:
def process(w: Work)
  firstPart.process(w)
  secondPart.process(w)

// Test code:
part1 = mock(FirstPart)
part2 = mock(SecondPart)
w = Work()
Processor(part1, part2).process(w)
verify(part1).process(w)
verify(part2).process(w)
It is tempting to write a test like this because it requires little thought and will run quickly. This is a change-detector test—it is a transformation of the same information in the code under test—and it breaks in response to any change to the production code, without verifying correct behavior of either the original or modified production code.
Change detectors provide negative value, since the tests do not catch any defects, and the added maintenance cost slows down development. These tests should be re-written or deleted.
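The distinction can be sketched in a few lines of Python (the names are invented for illustration). The first test mirrors the implementation's internal call sequence and breaks under any refactoring; the second asserts observable behavior and survives them:

```python
def process(work, first_part, second_part):
    """Code under test: run work through two processing stages."""
    return second_part(first_part(work))

# Change detector: asserts *which* collaborators were called and in what
# order, mirroring the implementation. It breaks if the stages are merged
# or restructured, even when the output is unchanged.
def change_detector_test():
    calls = []
    process("w",
            lambda w: calls.append("first") or w,
            lambda w: calls.append("second") or w)
    assert calls == ["first", "second"]

# Behavior test: asserts the observable result for a given input, and
# keeps passing across refactorings that preserve that behavior.
def behavior_test():
    assert process("hello", str.upper, lambda s: s + "!") == "HELLO!"

change_detector_test()
behavior_test()
```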
Can improved automobile tyres really make the world a better place? Should we trust developers to be the ones to change the tyres?
To be honest, probably better than me: the last time I changed a tire, I cross-threaded two of the lug nuts and then snapped one of them off (with only a lug wrench, sir; I was motivated to remove that bolt). BECAUSE I BREAK THINGS.
Last November, Amazon Web Services (AWS) announced a new database service, codenamed Aurora, which appears to be a real challenger to commercial database systems. AWS will offer this service at a very competitive price, which they claim is one-tenth that of leading commercial database solutions. Aurora has a few drawbacks, some of which are temporary, but the many benefits far outweigh them.

Benefits
With a special configuration they’ve come up with that tightly integrates the database and hardware, Aurora delivers over 500,000 SELECTs/sec and 100,000 updates/sec. That is five times higher than MySQL 5.6 running the same benchmark on the same hardware. This new service will utilize a sophisticated redundancy method of six-way replication across three availability zones (AZs), with continuous backups to AWS Simple Storage Service (S3) to maintain 99.999999999% durability. In the event there is a crash, Aurora is designed to recover almost instantaneously and continue to serve your application data by performing an asynchronous recovery on parallel threads. Because of the amount of replication, disk segments are easily repaired from other volumes that make up the cluster. This ensures that the repaired segment is current, which avoids data loss and reduces the odds of needing to perform a point-in-time recovery. In the event that a point-in-time recovery is needed, the S3 backup can restore to any point in the retention period up to the last five minutes. Aurora has a survivable cache, which means the cache is maintained after a database shutdown or restart, so there is no need for the cache to “warm up” through normal database use. It also offers a custom feature to input special SQL commands to simulate database failures for testing. An Aurora DB cluster consists of two types of instances:
- Primary instance – Supports read-write workloads, and performs all of the data modifications to the cluster volume. Each Aurora DB cluster has one primary instance.
- Aurora Replica – Supports only read operations. Each DB cluster can have up to 15 Aurora Replicas in addition to the primary instance, which supports both read and write workloads. Multiple Aurora Replicas distribute the read workload, and by locating Aurora Replicas in separate AZs you can also increase database availability.
Aurora replicas are very fast. In terms of read scaling, Aurora supports up to 15 replicas with minimal impact on the performance of write operations while MySQL supports up to 5 replicas with a noted impact on the performance of write operations. Aurora automatically uses its replicas as failover targets with no data loss while MySQL replicas can have this done manually with potential data loss.
In terms of storage scalability, I asked AWS some questions about how smoothly Aurora will grant additional storage in the event that an unusually large amount of it is being consumed, since they’ve stated it will increment 10GB at a time up to a total of 64TB. I wanted to know where the threshold for the autoscaling was and whether it was possible to push data in faster than it could allocate space. According to the response I received from an AWS representative, Aurora begins with an 80GB volume assigned to the instance and allocates 10GB blocks for autoscaling when needed. The instance has a threshold to maintain at least an eighth of the 80GB volume as available space (this is subject to change). This means whenever the volume reaches 10GB of free space or less, the volume is automatically grown by another 10GB block. This should provide a seamless experience to Aurora customers, since it is unlikely you could add data faster than the system can increment the volume. Also, AWS only charges you for the space you’re actually using, so you don’t need to worry about provisioning additional space.
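Under those numbers, the growth rule can be sketched as follows. This is my reading of the representative's description (80GB starting volume, 10GB blocks, grow when free space drops to 10GB or less), not AWS documentation:

```python
def aurora_volume_size(used_gb, start_gb=80, block_gb=10, min_free_gb=10):
    """Sketch of the described autoscaling rule: grow the volume in
    10GB blocks whenever free space drops to 10GB or less."""
    size = start_gb
    while size - used_gb <= min_free_gb:
        size += block_gb
    return size

print(aurora_volume_size(50))  # 80 -- plenty of headroom, no growth
print(aurora_volume_size(75))  # 90 -- free space dipped below the threshold
```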
Aurora also uses write quorums to reduce jitter, sending out six writes and waiting for only four to come back. This helps to isolate outliers while remaining unaffected by them, and it keeps your database at a low and consistent latency.

Pricing
For the time being, Aurora is free if you can get into the preview, which is like a closed beta test. Once it goes live, there are a few things to keep in mind when considering the pricing. The cost of each instance is multiplied by each AZ. Storage is $0.10 a month per 1GB used, and I/Os are $0.20 per million requests. Backups to S3 are free up to the current storage being actively used by the database, but historical backups that are not in use will have standard S3 rates applied. Another option is to purchase reserved instances, which can save you money if your database has a steady volume of traffic. If your database has highly fluctuating traffic, then on-demand instances tend to be the best, so you only pay for what you need. For full details, please visit their pricing page.

Drawbacks
Currently, the smallest Aurora instances start at db.r3.large and scale up from there. This means once the service goes live there will be no support for smaller instances like they offer for other RDS databases. Tech startups and other small business owners may want to use those more inexpensive instances for testing purposes. So if you want to test out the new Aurora database for free, you had better apply for access to the preview going on right now. AWS currently does not offer cross-region replicas for Aurora, so all of the AZs are located in Virginia. On the other hand, that does mean that latency is very low.
Aurora supports only InnoDB; any tables from other storage engines are automatically converted to InnoDB. Another drawback is that Aurora does not support multiple tablespaces, but rather one global tablespace. This means features such as compressed or dynamic row format are unavailable, and it affects data migration. For more info about migrations, please visit the AWS documentation page.

Temporary Drawbacks
During the preview, Aurora is only available in the AWS North Virginia data center. The AWS console is the only means for accessing Aurora during the preview, but other methods such as CLI or API access may be added later. Another important thing to note is that during preview, Aurora will not offer support for SSL, but they are planning to add this in the near future (probably before the preview is over). Another feature to be added at a later date is the MySQL 5.6 memcached option, which is a simple key-based cache.

Conclusion
All in all, it sounds amazing, and I for one am very excited to see how it will play out once it moves out of the preview phase. Once it is fully released, we may even do a performance experiment to load test a site that relies on an Aurora DB to see how it holds up. If you’re intrigued enough to try to get into the preview, you can sign up for it here.