Has anyone had any experience yet with the testing framework “C/AL Application Test Toolset” introduced in NAV 2009 SP1?
Well, yes. It’s used as one of the main app test tools for developing V7 NAV. Anything specific you want to know?
The context is that I will be recommending that we upgrade from NAV 5.0 to NAV 2009 SP1 to gain access to this tool so we can introduce automated unit testing. Before I make that recommendation, I want to get some feedback from end users on how capable and how stable it is. My understanding is that this is a “1.0” release of the functionality for NAV.
Sorry, I am somewhat of a “newbie” to the NAV environment. What is “V7 NAV”? I am aware of Version 5.0 of NAV and Version 2009. Is this a release in between those two?
NAV V7 is the future version; V6 is called NAV 2009. In my experience the framework is very stable. The advantage is that you don’t test through the UI, but directly against the business logic layer. OTOH, there are limitations in what you can test. For example, triggers cannot be tested with this. But for covering your business logic it’s great. Also, it runs faster than testing through the UI.
That is very helpful feedback. Could you say then that you have written hundreds of tests in the framework? How about speed? Is it reasonable to ‘mock out’ the database in these tests? If not, is it possible to use an in-memory database for performance reasons? Any other significant limitations we should be aware of? The decision to test only the business logic and separate out the UI is perfect as far as I am concerned.
Hi Dirk,
where do we get access to these tools? Since you are posting this on a public forum, I assume that none of this is under NDA, or if it was, it isn’t now, I guess.
Hi Joel,
At present we have >1000 automated NAV Application tests running based on the testability features released with NAV 2009 SP1. Our experience is that these tests are very stable compared to UI based tests.
As far as coverage goes, the testability features allow you to cover triggers of most types. The most important exceptions are the field triggers on pages and forms. Note that even though tests based on these testability features do not execute against the UI, it is still possible to execute forms and pages. For unattended execution it is possible to define function triggers (handlers) that are automatically invoked when a form or page opens. This prevents the form or page from being displayed, but still executes some of the corresponding triggers. Also note that local function triggers cannot be invoked directly from tests.
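To make the handler idea concrete, here is a rough C/AL sketch of what an unattended confirmation handler could look like in a test codeunit (Subtype = Test). The function names are purely illustrative, and the FunctionType/HandlerFunctions properties mentioned in the comments are set in the object designer, not in code:

```
// Sketch only. In the designer: PostWithConfirm has
// FunctionType = Test and lists "ConfirmYes" in its
// HandlerFunctions property; ConfirmYes has
// FunctionType = ConfirmHandler.

PROCEDURE PostWithConfirm@1();
BEGIN
  // Exercise business logic that raises a CONFIRM dialog.
  // Because ConfirmYes is registered as a handler, no dialog
  // is shown; the handler supplies the reply instead.
END;

PROCEDURE ConfirmYes@2(Question : Text[1024];VAR Reply : Boolean);
BEGIN
  Reply := TRUE; // unattended run: answer every confirmation with Yes
END;
```

The same mechanism applies to messages, forms, and pages: the registered handler is invoked in place of the UI element, which is what makes fully unattended runs possible.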
Unfortunately, mocking the database is not really possible. As a result you’ll have to be very aware of side effects (in the database) when calling into any of the application triggers. In a limited set of cases it might be possible to use temporary record variables as parameters for the trigger you are testing, but this only works if the exercised trigger itself only interacts with the database via those parameters. In that sense it is difficult to really achieve the “unit” in unit testing, but you will get much closer than with any other test framework for NAV.
As a result, one of the most important decisions when building test suites on top of the NAV testability features is how to obtain the data required to execute your tests (vendors, customers, items, posting groups, and so forth). The two extremes here are to either create everything inside the tests themselves or to use an existing dataset (such as the CRONUS company that you get when you install NAV). In practice you’ll most likely end up with a hybrid approach. It is our experience that as you build up helper libraries, creating test data inside the tests becomes more and more efficient.
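As an illustration of what such a helper library function might look like, here is a hedged C/AL sketch. The function name is hypothetical; the idea is simply that each helper creates one minimal, valid master record and sets only the fields the tests actually depend on:

```
// Hypothetical helper from a test data library: creates a minimal
// customer instead of relying on demo data.
PROCEDURE CreateCustomer@1(VAR Customer : Record 18);
BEGIN
  Customer.INIT;
  Customer."No." := ''; // let the OnInsert trigger assign a number from the series
  Customer.INSERT(TRUE);
  // Fill in only what the tests depend on.
  Customer.VALIDATE(Name,'Test Customer');
  Customer.MODIFY(TRUE);
END;
```

Building up a library of such helpers is what makes the “create data inside the tests” end of the spectrum cheap over time.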
I want to stress the importance of the particular approach taken here; the patterns used to create and use data in C/AL tests determine to a large extent their maintainability.
Despite the fact that we cannot mock the database and often create a lot of data inside our tests, they still execute in less than 1 second per test (on average).
We published some example tests and helper libraries as part of the C/AL Application Test Toolset on PartnerSource. Note that the testability features were introduced in the language and runtime in NAV 2009 SP1. Since the mentioned toolset builds on top of those features, SP1 is required to install it. More specific information on the testability features can also be found on MSDN.
Please let me know if you have any more questions.
Kind regards,
Bas
No worries, no leaked information
I asked Bas to follow up.
Bas,
Thanks for your detailed reply, very helpful! Your comment “I want to stress the importance of the particular approach taken here; the patterns used to create and use data in C/AL tests determine to a large extent their maintainability.” makes a lot of sense.
So bottom line, it sounds like a thousand tests are taking you about 16 minutes per run (assuming an average of 1 second per test)? Again, this is probably a SQL Server question, but do you know if SQL Server can run in memory in the NAV environment to speed things up? Our goal would be to have ‘unit tests’ run in 5 minutes or less.
Thanks,
Joel
Yes, Bas’s remarks are much clearer than your earlier post. It seemed you were implying that you had a NAV V7 tool available, but you mean you are running on 2009 SP1 to test possible V7 code.
Hi Joel,
I don’t think that is possible or advisable. Although I do not know the relevant settings, I guess properly configuring your SQL Server could improve performance.
Other performance improvements may be gained in the design of your tests and their implementation. Again, the approach taken to manage test data is really important. In some cases we achieved order-of-magnitude improvements in test execution performance by changing the design of tests and helper libraries.
For instance, if you have a set of tests that require creating a large number of records and posting some transactions, it might be possible to improve performance by doing all of that only once. This test pattern is also referred to as a Shared Fixture. Note that this particular pattern has to be applied with caution; your tests have to be agnostic to the state of the application (at least to some extent) for this pattern to work.
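A common way to implement a Shared Fixture in C/AL is a guarded initialization function that every test calls first. This is a sketch under my own naming; “Initialized” would be a global boolean in the test codeunit:

```
// Shared Fixture sketch: the expensive setup (creating master data,
// posting shared transactions) runs only on the first call; later
// calls return immediately and the tests reuse the existing state.
PROCEDURE Initialize@1();
BEGIN
  IF Initialized THEN
    EXIT;
  // ...create master data and post the shared transactions here...
  Initialized := TRUE;
END;

// Each test then starts with:
//   Initialize;
// and must not depend on state that other tests modify.
```

The guard is what turns per-test setup cost into a one-time cost, which is where the order-of-magnitude improvements tend to come from.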
Regarding the impact of implementation, consider, for instance, the difference in performance between FIND, FINDFIRST, and FINDSET.
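To illustrate that point with a small (hypothetical) example: FINDFIRST retrieves only a single record, FINDSET is optimized for reading a set you intend to loop over, while a plain FIND('-') may fetch more data than the code actually needs:

```
// Filter, then pick the FIND variant that matches the access pattern.
Customer.SETRANGE("Country/Region Code",'US');

IF Customer.FINDFIRST THEN   // only the first record is needed
  MESSAGE(Customer.Name);

IF Customer.FINDSET THEN     // the whole filtered set will be read
  REPEAT
    // process each customer here
  UNTIL Customer.NEXT = 0;
END;
```

In tests that create and read many records, consistently choosing the right variant adds up quickly.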
Kind regards,
Bas
Bas,
Thank you for your feedback. The “Shared Fixture” link was particularly interesting.
Regards,
Joel