Testing

Overview

When you write a complex Summit Application, you will probably want to write automated unit tests for it. Tests let you verify that everything works as expected, and that the entire application keeps working when you change some part of it. You will also probably want to make sure that all your code is covered by test cases (so you can be sure you didn’t miss a path through the system) and that you don’t have any useless or unreachable paths anywhere.

The Summit Testing module provides the ability to do both of these things. On the technical side, we use the busted library from Olivine Labs (which is powered by luassert) for unit testing, and luacov to handle code coverage. When you push your code to our servers, we run any unit tests you have written and calculate code coverage for your Application, and report the results back to you, so that you can ensure your application is working as intended before it goes live. We also allow you to specify a minimum code coverage percentage, so you can be sure you aren’t running a lot of untested code in production.

If you've used Behavior Driven Development (BDD) before, busted's syntax should make a lot of sense. If not, don’t fear: busted is easy to use and is designed specifically for readability. We'll provide a basic overview here, enough to get you started with busted, but if you want to learn more, check out the busted documentation.

Behavior Driven Development (BDD)

In BDD, you categorize your tests using the describe function, which takes a description as its first argument and a function as its second. For example:

describe("This is a logical grouping of tests", function()
end)

describe blocks can be nested, allowing you to group your tests in whatever way makes sense.

To write an actual test, we use the it function. Like describe, it takes a description and a function as its arguments. For example:

it("does something really cool", function()
end)

The following example also introduces assert, which is used to determine whether a test passes or fails.

describe("Summit", function()
    it("allows you to write your own telephony applications", function()
        assert.truthy("Yep")
    end)

    describe("has a strong developer focus", function()
        it("has code samples and documentation", function()
            assert.truthy(#code_samples > 0)
            assert.truthy(#documentation > 0)
        end)

        it("will automatically run your test cases and determine your code coverage", function()
            assert.truthy("Yep")
        end)
    end)
end)

The assert syntax has many other options; full documentation can be found on the luassert website.
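For instance, a few of the more common luassert assertions look like this (a quick sketch; see the luassert documentation for the full list):

```lua
describe("a few common luassert assertions", function()
    it("compares values and tables", function()
        assert.are.equal(4, 2 + 2)        -- value equality
        assert.are.same({1, 2}, {1, 2})   -- deep table comparison
        assert.is_true(1 < 2)
        assert.is_nil(some_undefined_var) -- undefined globals are nil in Lua
        assert.has_error(function() error("boom") end)
    end)
end)
```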

Testing your Summit Application

Let's take a look at a very simple Summit Application to get an idea of how unit testing works. Say our application is as follows:

-- main.lua

channel.answer()
channel.say("This is an example application")
local digits = channel.gather()
channel.say(digits)
channel.hangup()

This application simply answers a call, plays a short message, gathers input, echoes it back, and hangs up. For our tests, let's just make sure that flow actually happens. Below is a sample unit test file for this application:

-- example_spec.lua

local application = require("test.application")
describe("test ivr", function()
    local app = application()
    it("answers the phone", function()
        app.should.channel.answer()
        app.should.channel.say("This is an example application")
    end)
    it("asks for numbers", function()
        -- Note that channel.gather in your Summit App always returns
        -- a string, so we want to be sure to use a string for the first
        -- argument here as well to properly test our application
        app.should.channel.gather('234')
    end)
    it("repeats the numbers", function()
        app.should.channel.say('234')
    end)
    it("hangs up", function()
        app.should.channel.hangup()
    end)
end)

Now, we'll break down that test file.

The first thing we do is local application = require("test.application"); the test.application module contains most of the testing functionality. Within the describe function, we call this module to get back an app object. This object contains the actual Summit Application and various testing helpers. By using the test functions available on the app object, we can step through the Summit App and ensure the flow matches our expectations.

In this particular case, we have broken up the functionality into multiple it() statements, but you could easily test an entire path through the application within a single it() test. Just note that every time you call local app = application() as above, a brand new copy of the Summit Application is started (from the beginning), so you'll need to do this any time you want to test a new path.

After we get the app object, we then start stepping through. All the testing libraries are available through the should object on the app. All the testing modules documented in this site are available under app.should.<module_name> (for example, app.should.channel). This Application is very simple and only uses channel commands, but you also have the ability to mock up any Summit Application function that would access some outside entity (such as http, email, or channel functions). The app object also has a config object available at app.config. We will describe that object later.

The first thing we do is ensure that the Summit Application answers the call. It is likely that nearly every test you write will start with this, just like every Application tends to start with channel.answer(). After we answer, we ensure that the Application plays its greeting, in this case with the channel.say() command. So far, both of these functions' signatures are essentially the same as the Summit Channel Library’s signatures, and any arguments you pass in the test will be compared and validated against those in the Summit Application.

Next, we test the channel.gather() function. In this case, the Summit Application expects its call to channel.gather() to return a value (the sequence of digits gathered). You'll notice how app.should.channel.gather('234') looks slightly different from the gather in the Summit Application: the test library allows you to pass an argument which will be returned to the Application. This lets you drive the application from your tests, and lets you try out different inputs for a gather statement. If, for example, you were branching based on the result of the gather, you could write separate tests for each branch, to ensure things worked as expected, by passing different digit strings to app.should.channel.gather. Here, the application simply saves the input to gather and then repeats it back to us in the next line. You can see in our test that we expect it to say the same number we sent to the gather.
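To make that concrete, here is a hypothetical application that branches on the gathered digit, together with one test per branch (the prompts and digits are made up for illustration):

```lua
-- main.lua (hypothetical branching application)
channel.answer()
local digits = channel.gather()
if digits == '1' then
    channel.say("Connecting you to sales")
else
    channel.say("Connecting you to support")
end
channel.hangup()

-- branch_spec.lua
local application = require("test.application")
describe("menu branching", function()
    it("routes to sales when 1 is pressed", function()
        local app = application()          -- fresh copy of the Application
        app.should.channel.answer()
        app.should.channel.gather('1')     -- drive the app down the sales branch
        app.should.channel.say("Connecting you to sales")
        app.should.channel.hangup()
    end)

    it("routes to support for any other digit", function()
        local app = application()          -- new instance, new path
        app.should.channel.answer()
        app.should.channel.gather('2')
        app.should.channel.say("Connecting you to support")
        app.should.channel.hangup()
    end)
end)
```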

Finally, we ensure that the Application hangs up. Again, most of your tests will likely end with this statement, just as most Summit Applications end with a hangup call.

Details of test.application

Our validation is generally lazy by design. If you don’t pass any optional arguments to a test command (such as app.should.channel.say('hi')), then we will not validate any options that come back from the Application. Conversely, if you want to specify exactly what the Application should do, you can be very verbose and specify every option expected (for example: app.should.channel.say('hi', {voice='man', language='en'})), and we will check all of those against the actual Summit Application.

This allows you to be very general when you start writing your tests, and also to save some time and mental hassle by not repeatedly and tediously testing every argument of common paths in your Application (you can write a very specific test once, and then just use lazy tests elsewhere while being confident your code is covered).
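As a sketch, both of the following would pass against a channel.say('hi', {voice='man', language='en'}) call in the Application:

```lua
-- Lazy: only the text is validated; whatever options the Application
-- passes are accepted without being checked
app.should.channel.say('hi')

-- Strict: the text and every listed option are validated against
-- the actual call made by the Summit Application
app.should.channel.say('hi', {voice='man', language='en'})
```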

In-depth API documentation for all the functions available under app.should can be found in the rest of this site. We'll go over a couple of notes here, but for the most part, any function whose normal Summit Application library version does not have a return value will have an identical API in the testing library to that of the main Summit Application library.

We touched on channel.gather() above, so we will not go over that again; refer to the channel module for all the available arguments. The http library allows you to spoof outside web calls (we don’t allow outside communication from within tests). To do this, you simply specify the (response, error) return values you would like the Summit Application to receive, similar to how app.should.channel.gather works. This lets you test how your Application performs with both successful and failed HTTP calls.
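For example, assuming the Application performs an http.get and branches on the result, mocked calls might look like the following sketch (the fields of the mocked response table are assumptions, not confirmed API; mirror whatever your Application actually reads):

```lua
it("handles a successful lookup", function()
    local app = application()
    app.should.channel.answer()
    -- First argument: the mocked response the Application will receive;
    -- second argument: the error value (nil for a successful call)
    app.should.http.get({status_code = 200, body = '{"open": true}'}, nil)
    app.should.channel.say("We are open")
end)

it("handles a failed lookup", function()
    local app = application()
    app.should.channel.answer()
    -- A nil response plus an error string exercises the failure branch
    app.should.http.get(nil, "connection refused")
    app.should.channel.say("Please call back later")
end)
```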

The soap library works similarly. Please make sure to note that it is up to you to ensure you properly mock up the responses and errors. We won’t do any parsing or modifications of the values you send the testing library, we will just return them exactly as-is to your Summit Application, so ensure that any data you expect to be included in the response is present in your test mockups.

The datastore and manifest libraries return Lua tables of data, accessible just like a regular table through dot notation. For this reason, when you use these in your unit tests, you should be sure to pass a table of data as the first argument. Note that for manifest, if the raw flag is set, we will actually return a raw string, so pay attention when writing your tests and be sure to pass a string instead of a table if you are using a raw manifest call.
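As a sketch (the function names and table shapes below are illustrative assumptions; use whichever datastore and manifest calls your Application actually makes):

```lua
-- Mock a datastore read with a plain Lua table; the Application will
-- receive it exactly as written, e.g. result.open == '09:00'
app.should.datastore.get('hours', {open = '09:00', close = '17:00'})

-- With the raw flag set on a manifest call, mock a string instead
app.should.manifest.get('greeting', 'raw manifest contents here')
```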

The salesforce library is also a simple wrapper. Note that all the methods in this library expect you to specify a response: it is up to you to ensure these responses mock up actual Salesforce response data. We will not modify or alter the responses you specify, so be sure that your mock data properly reflects the real world responses you expect to see.

As mentioned earlier, there is an app.config object available. This allows you to specify certain configuration information which affects the environment surrounding your Summit Application during a test. Note that any changes made through this object will be reset whenever you instantiate a new testing application (with local app = application()).

It’s often useful to be able to set the current time for a test run (for example, to test after-hours functionality). Use app.config.set_time to do this. The method signature is the same as calling time.create in the Summit Application library.

Once you use set_time, any calls to time.now() will return the time you specified rather than the actual current time. Note that, for now, timezone support is still somewhat naive, so we recommend you use the same timezone in set_time as you are using in calls to time.now() to ensure things work as you expect.
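For example, to exercise an after-hours branch you might pin the clock like this (the arguments shown are an assumption; pass whatever time.create accepts in your Application):

```lua
local app = application()
-- Pin "now" to 10:30 PM so the after-hours branch runs; use the same
-- timezone your Application passes to its own time.now() calls
app.config.set_time(2014, 7, 4, 22, 30, 0, 'America/New_York')
app.should.channel.answer()
app.should.channel.say("Our office is currently closed")
app.should.channel.hangup()
```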

Additionally, we provide a method app.config.set_random that will allow you to set up a list of random numbers to return to the math.random function. This lets you “seed” the random number generator so that you can test out Applications which rely on it.

You may choose not to use this config option, in which case we fall back to the default random number generator. We do not recommend that, however, if you intend to test an Application that relies on random numbers for its functionality.
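A minimal sketch, assuming set_random takes a list of numbers that are handed back to math.random in order:

```lua
local app = application()
-- The Application's next three calls to math.random will return
-- 2, 1, and 3, in that order (list semantics assumed)
app.config.set_random({2, 1, 3})
app.should.channel.answer()
-- ...assertions that follow the now-deterministic "random" path...
app.should.channel.hangup()
```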

Nuts and Bolts

We've covered how to write your tests, and now we'll cover where to put them and how to run them.

In your Summit Application, you will see a src folder and a spec folder. You already know that your Application code belongs in src. Any unit tests you want to write should be in the spec directory. We will run any files in that directory that end in “_spec.lua”.

In the near future, we will make it possible to run your tests using the Simulator (available through our Sublime Text 3 package), but for now, we will run all your unit tests whenever you git push your code to us.

We will also run luacov to determine your code coverage, and all of this information will be presented back to you in our response. At this time, we will not reject commits based on failing tests, but that is also planned for the near future.

You can also specify a minimum allowable code coverage percent in a manifest file, and we will reject commits if that percent is not reached.

Further Help and Examples

If you have any comments, questions, or issues, head over to https://summit.shoretel.com/. Additionally, we will release more sample Summit Applications with unit tests soon.


Modules

Name           Summary
application    Application object.
channel        PBX Channel object.
email          Email Library.
http           HTTP Request Library.
salesforce     Salesforce Library.
sms            SMS Library.
soap           SOAP Service Library.