Agile DW/BI Testing – Just Get Started!

by Ken Collier and Lynn Winterboer

This article was first published in TDWI BI This Week on July 9, 2013

When quality and testing are moved up front on a project, everyone enjoys a higher-quality result. Here’s how your team can be happier with an agile approach to testing, even before implementing key automation tools.

Rather than explain the well-understood theory and benefits of agile, let’s examine how agile testing works in the real world and see how business and delivery teams reap its benefits and have more fun in the process.

Beth, a sales analyst, has an existing report showing the total number of orders and related income by sales rep by month. The director of sales, a key user of this report, has noticed that over the past six months the number of orders has increased an average of 1.5 percent each month while total income has increased only an average of 0.75 percent per month. He hypothesizes that customer discounts may explain the gap and asks Beth to provide insight into discounts.

Because the present report does not contain discount data, Beth asks the BI product owner for an enhanced report with two additional fields. The new user story reads, “As director of sales, I need to see the total value and percent of discounts applied to orders by sales rep each month, so that I can determine if reps are applying discounts outside normal sales guidelines.”

This BI team uses an agile approach to deliver frequent value to business stakeholders, which involves several important practices including behavior-driven development, representative data sets, and frequent and early testing. This team is still establishing a robust agile development environment with rigorous version control, test automation, and a dedicated continuous integration server. Nonetheless, they are eager to get started with agile testing practices and then evolve toward better automation.

Behavior-Driven Development (BDD)

Before the team begins development on the new story, Beth pairs with the team’s lead tester to define how the team will know they are done and have met the director’s needs. Together they craft a set of acceptance criteria that will determine when the objective has been met. This team uses a behavior-driven development approach for story specification. BDD tests follow the pattern, “Given [some initial context], when [action performed or event occurs], then [expected result occurs].” Examples of acceptance criteria include:

  • Order discounted: Given an order for $200, when the discount is $50, then the report should reflect a discount of $50 and 25% as new fields appended to the right of the existing columns.
  • Order not discounted: Given an order for $300, when there is no discount applied (i.e., the discount value is $0 or [null]), then the report should reflect a discount of $0 and 0% as new fields appended to the right of the existing columns.
  • Negative discount: Given an order for $100, when the discount is -$1.00, then the report should reflect a discount of -$1.00 and -1% as new fields appended to the right of the existing columns, as well as highlight the order in red and show a red flag next to any roll-ups that include that order. (Note that the BI team would not have been aware of this specific need based solely on Beth’s initial requirements. The discussion with the test lead, who is trained to look for oddities and anomalies in data, prompted Beth to think about something she normally wouldn’t have considered until she saw the issue in the final report.)

When acceptance criteria are formulated in this BDD style, they are easily captured as automated test cases using testing frameworks such as the open source Cucumber.
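In Cucumber, these criteria would be written directly in Given/When/Then (Gherkin) form; the sketch below expresses the same scenarios in plain Python with pytest instead. The discount_fields function is a hypothetical stand-in for the report logic under test, not the team’s actual code.

```python
import pytest

def discount_fields(order_total, discount):
    """Hypothetical stand-in for the new report columns: returns the
    discount value, the discount percent, and whether the order should
    be flagged (negative discount)."""
    value = 0.0 if discount is None else discount
    percent = round(100.0 * value / order_total, 2)
    return value, percent, value < 0

# One row per Given/When/Then scenario from the acceptance criteria above.
@pytest.mark.parametrize(
    "order_total, discount, exp_value, exp_percent, exp_flagged",
    [
        (200.00, 50.00, 50.00, 25.00, False),  # order discounted
        (300.00, 0.00, 0.00, 0.00, False),     # no discount ($0)
        (300.00, None, 0.00, 0.00, False),     # no discount ([null])
        (100.00, -1.00, -1.00, -1.00, True),   # negative discount: flag it
    ],
)
def test_discount_acceptance(order_total, discount, exp_value, exp_percent, exp_flagged):
    assert discount_fields(order_total, discount) == (exp_value, exp_percent, exp_flagged)
```

The table of scenarios doubles as living documentation: when the director’s needs change, the scenarios change first.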

Representative Data Sets

To automate these test cases, the team needs a test data set that is as small as possible while still containing a representative sample of actual production data. Because these functional tests are not testing performance and load, they only need a small test set to reflect the spectrum of data found in production. The resulting test data includes at least one order representing each of the following discounts: $0, $1, $1,000,000, [null], and -$1. There may very well be additional data sets Beth and the team decide to include.
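As a concrete illustration, such a data set might look like the following. The tuple layout is ours, for illustration only; the team’s real test data would live in their warehouse load files.

```python
# A minimal representative test set: one order per boundary case.
# Layout: (order_id, order_total, discount); None models a [null] discount.
TEST_ORDERS = [
    (1, 200.00, 0.00),                # explicit zero discount
    (2, 200.00, 1.00),                # smallest positive discount
    (3, 2_000_000.00, 1_000_000.00),  # very large discount
    (4, 300.00, None),                # missing ([null]) discount
    (5, 100.00, -1.00),               # negative discount (should be flagged)
]
```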

Frequent Testing

Once the tests and test data are defined, the developers can start working on the request. They ask Beth if she can plan on coming in that afternoon to take a look at their results.

Two of the team’s developers will bring the discount data from the order management system into the data warehouse (DW), add the discount data to the reporting infrastructure, and update the sales report. They have a short meeting with the team’s lead data modeler to determine where the new fields should reside in the DW and BI data models.

The developers take a short time to do their work and immediately unit test their code. They discover that one of them set the discount-percent field type incorrectly, and fix it right away. Once the code has passed unit testing, they integrate their code on the development server and run the pre-defined test cases using the data sets Beth and the test lead provided. The tests pass. (Note that BI teams can leverage automated unit testing tools to save developer time. We advocate using version control and a dedicated integration sandbox. However, teams without this agile infrastructure can still get started.)
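A unit test that would catch the field-type slip described above might look like this sketch. The schema lookup is a hypothetical stand-in for querying the warehouse catalog (for example, information_schema.columns in most SQL engines), and the column names and types are illustrative.

```python
# Expected column types for the new report fields (illustrative names).
EXPECTED_TYPES = {
    "discount_value": "decimal(12,2)",
    "discount_percent": "decimal(5,2)",  # a percentage, not an integer
}

def deployed_column_types():
    """Hypothetical stand-in for reading column types from the DW catalog
    (e.g., a query against information_schema.columns)."""
    return {"discount_value": "decimal(12,2)", "discount_percent": "decimal(5,2)"}

def test_discount_field_types():
    deployed = deployed_column_types()
    for column, expected in EXPECTED_TYPES.items():
        assert deployed.get(column) == expected, (
            f"{column}: expected {expected}, found {deployed.get(column)}"
        )
```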

Later, Beth smoke tests the updated report on the demo environment and notices that some anomalous negative discounts are appearing. She asks the team to suppress them from the report and provide an exception report for these records so they can be addressed separately.

The team is able to quickly suppress these negative discount records, re-run the tests, and show Beth the results. However, they agree that the exception report request represents a new user story, which should be prioritized into the backlog for a future iteration.
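A sketch of the quick fix, reusing the illustrative tuple layout from the test-data example above. The filter itself is our assumption about how the team might implement the suppression; the earlier negative-discount acceptance test would be updated to expect suppression rather than a red flag.

```python
def visible_report_rows(orders):
    """Suppress negative-discount orders from the main report; they are
    deferred to the future exception-report story."""
    return [(oid, total, disc) for (oid, total, disc) in orders
            if (disc if disc is not None else 0) >= 0]

def test_negative_discounts_suppressed():
    orders = [(1, 200.00, 50.00), (2, 300.00, None), (3, 100.00, -1.00)]
    assert visible_report_rows(orders) == [(1, 200.00, 50.00), (2, 300.00, None)]
```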

Beth stops by the director’s office to let him know the Discount report will be in production with the next release. He is relieved to hear that, and mentions how nice it is to get answers so quickly these days. It’s so much easier to do his job now that the BI team doesn’t need six months to deliver anything useful!

As this scenario demonstrates, the benefits to DW/BI teams of these agile testing techniques include:

  • Clarity: Tests written at the beginning of the project provide clarity to all involved about where the “goal line” is for each requirement:
    • Acceptance criteria are the definition of “done”
    • Passing tests are the measure of “done”
    • Regression tests are the measure of “still done”
  • Quality: Focusing on quality and testing up-front ensures a higher level of quality in the resulting product by eliminating the tendency to “squeeze” tail-end testing when schedules get tight
  • Feasibility: Testing with small, representative data sets reduces the complexity of large data loads and related server infrastructures
  • Alignment: Frequent testing provides timely results to users and quick feedback to developers, reduces the time it takes a developer to respond to defects, and keeps the users and delivery team closely aligned on delivering value together
  • Regression Testing: Each set of tests is added to the test suite for the project and re-run for regression testing with every new development delivery; the ability to confidently regression test reduces the risks related to future changes and enhancements (see the sketch after this list)
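As a sketch of what “still done” looks like day to day, assuming the pytest-based examples above are collected in one directory (file names hypothetical):

```python
import subprocess
import sys

# tests/
#   test_discount_fields.py   # this story's acceptance tests
#   test_order_totals.py      # tests kept from earlier stories
#
# Every delivery re-runs the whole suite; a nonzero exit code blocks release.
result = subprocess.run([sys.executable, "-m", "pytest", "tests/"])
sys.exit(result.returncode)
```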

Furthermore, by driving testing into the story specification conversation early, testers become better partners with product owners and developers.

There’s one final benefit we’ve seen in our work: those business and delivery teams surrounded by passing tests have more fun!

Lynn Winterboer teaches and coaches DW/BI teams on how to effectively apply agile principles and practices to their work. She is the founder of Winterboer Agile Analytics and can be reached at Lynn@LynnWinterboer.com.

6 Responses to “Agile DW/BI Testing – Just Get Started!”

  • Nice post, congrats! However, I’d like to ask about the team’s availability. Everything seemed to go well just because Beth didn’t have to wait until the current sprint or iteration finished – or so it sounded to me. Is that so? No sprints, just a continuously running pipeline? Or did I miss the timebox underlying the post? All the action seemed to take a couple of days.

    Also, along the same line of thinking, I would enjoy seeing your account of testing an OLAP cube.

  • Fabio, one of the key goals in agile testing is to test early so that deployment of new features becomes a business decision rather than a technical limitation. In this scenario, Beth collaborates well with the team to define the behavioral tests, and the team tests as they develop. Therefore, they know when they have met all of Beth’s acceptance tests. This generally takes a few days, but may take a full timeboxed iteration (sprint). High-functioning agile teams sometimes have continuous deployment pipelines to which they can quickly add newly tested and accepted features. When you work in small chunks and deploy in small chunks, production releases become much less burdensome and ceremonious. I hope this answers your question.

  • Thanks, Ken, much clearer now. So, as per your example, delivery is not meant to be continuous – just faster and with less rework than traditional approaches. OK, I believe I got it!

    On the other question, have you ever faced an OLAP project with this kind of testing? Since OLAP pretty much renders a kind of report, maybe the same approach applies.

  • Yes, I’ve done this type of testing in OLAP environments. My contention is that if you do rigorous testing beneath the UI (i.e., not testing through the BI tool), then your testing of the actual OLAP presentation becomes much simpler. The problem is that you can’t reasonably test all drill paths through your BI tool. Therefore, the goal is to verify that the data arrived in the cube or mart completely, consistently, and correctly. Then you can focus your BI testing only on derived or calculated measures that are computed at the UI layer. Does that make sense?

  • Totally, Ken, thank you very much.

    The kind of test Beth and the team designed was easy to understand and made a lot of sense. OLAP data mart testing should follow the same model, shouldn’t it? But now I am not testing the report, but the ETL and the dimensional model themselves. Now what? Have sample source data loaded with both correct and erroneous records, a reference copy of the correctly loaded data mart, the data mart itself, and finally a query matching both results? (I feel like I answered my own question, but I am still groping in the dark with BI testing.)
