Mocking a function with different return values in Python

May 15th, 2019

For the first half of 2019 I was borrowed by the Performance Team at Mozilla and asked to help them implement a way to measure the performance of various browsers on Android platforms. This has involved me getting familiar with several tools and environments that I had not touched before:

  • Mozilla's tooling for use with mozilla-central
  • Rooting Android devices for testing purposes
  • Android Studio and its device emulation tools
  • Executing and parsing shell commands

Of course, I am using pytest as well to make sure that I had thought through the testing scenarios, and to give whoever supports this code once I am back in my previous role a chance to figure out what is going on.

One of the testing scenarios I came up with was handling the fact that I had designed the code to have a fallback mode in case the shell command I was using did not work on that particular device. Code reviews from other developers revealed some differences in how the top command works on different phones.

So, drawing on my own experiences with using test doubles in PHP, I asked myself "how can I create a mock that has different return values?". In Python's mock library (which I am using alongside pytest) this functionality is called side_effect.

So, here is some of the code in question that I needed to test:

try:
    cpuinfo = raptor.device.shell_output("top -O %CPU -n 1").split("\n")
    raptor.device._verbose = verbose
    for line in cpuinfo:
        data = line.split()
        if data[-1] == app_name:
            cpu_usage = data[3]
except Exception:
    cpuinfo = raptor.device.shell_output("dumpsys cpuinfo | grep %s" % app_name).split("\n")
    for line in cpuinfo:
        data = line.split()
        cpu_usage = data[0].strip('%')

I wrapped the first call in a try/except construct because an exception is raised if anything goes wrong with that top call. If that call doesn't work, I then fall back to the dumpsys call.
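To show the two parsing paths in isolation, here is a standalone sketch. The helper names and the sample top line are my own, made up for illustration; only the dumpsys line is the real output format shown later in this post.

```python
def parse_top(output, app_name):
    # In this top format, %CPU is the fourth column and the process
    # name is the last column (column order varies between devices).
    for line in output.split("\n"):
        data = line.split()
        if data and data[-1] == app_name:
            return data[3]
    return None

def parse_dumpsys(output):
    # dumpsys cpuinfo lines start with the total CPU percentage, e.g. ' 34% ...'
    for line in output.split("\n"):
        data = line.split()
        if data:
            return data[0].strip('%')
    return None

line = ' 34% 14781/org.mozilla.geckoview_example: 26% user + 7.5% kernel'
parse_dumpsys(line)  # '34'
```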

In the test itself, I would need the shell_output call to first throw an exception (as expected for the scenario) and then return some output that I can then parse and use.

In the PHP world, most test doubling tools allow you to create a mock and have it return different values on consecutive calls. Python's mock library is no different, but it took me a while to figure out the correct search terms to find the functionality I wanted.

So, here is how I did it:

def test_usage_with_fallback():
    with mock.patch('mozdevice.adb.ADBDevice') as device:
        with mock.patch('raptor.raptor.RaptorControlServer') as control_server:
            '''
            First we have to return an error for 'top'
            Then we return what we would get for 'dumpsys'
            '''
            device.shell_output.side_effect = [
                OSError('top failed, move on'),
                ' 34% 14781/org.mozilla.geckoview_example: 26% user + 7.5% kernel'
            ]
            device._verbose = True

            # Create a control server
            control_server.cpu_test = True
            control_server.test_name = 'cpuunittest'
            control_server.device = device
            control_server.app_name = 'org.mozilla.geckoview_example'
            raptor = Raptor('geckoview', 'org.mozilla.geckoview_example', cpu_test=True)
            raptor.device = device
            raptor.config['cpu_test'] = True
            raptor.control_server = control_server

            # Verify the response contains our expected CPU % of 34
            cpuinfo_data = {
                u'type': u'cpu',
                u'test': u'usage_with_fallback',
                u'unit': u'%',
                u'values': {
                    u'browser_cpu_usage': '34'
                }
            }
            cpu.generate_android_cpu_profile(
                raptor,
                "usage_with_fallback")
            control_server.submit_supporting_data.assert_called_once_with(cpuinfo_data)

Let me break down what I did (as always, I am open to suggestions on better ways to write this test).

The first double is for a class that communicates with the Android device. Then the next double I needed was for the "control server", which is what is used to control the browser and execute tests.

My first "side effect" is to generate an error so it triggers the first condition of the scenario that 'top should not work'. The second "side effect" is the response I am expecting to get from the shell command in my fallback area of the code.

Then I continue with the "arrange" part of the Arrange-Act-Assert testing pattern I like to use -- I configure my "control server" to be the way I want it.

I finish up by creating what I expect the data submitted to our internal systems to look like.

I execute the code I am testing (the "act" part) and then use a spy to make sure the control server would have submitted the data I was expecting to have been generated.

The ability to have a method return different values is powerful in the context of writing tests for code that has conditional behaviour. I hope you find this example useful!

Watch me get grumpy - event sourcing refactor

April 27th, 2019

As I continue to build out OpenCFP Central I wanted to share with you some of the work I have been doing to move it from your typical CRUD structure to something a little more robust -- CQRS and event sourcing.

So at the time I wrote this, I had two problems I needed to solve:

  • how to refactor existing code for talk creation to support CQRS+ES
  • how to go back and create events for data that already exists

Solving the first problem appeared straightforward. This is a Laravel 5.8 application, so I spent time looking at different packages that I thought could help me implement the core features I needed for event sourcing quickly. My initial research led me to two solutions tailored towards PHP.

The problem I quickly ran into was that there was some friction in using these packages with Laravel. They are both very powerful, but you end up spending a lot of time writing the wiring and/or glue yourself. That was time I felt I did not have. After all, OpenCFP Central is a project I can only devote one or two days a week to. I needed something that implemented the basic concepts and let me build quickly.

As with a lot of problems I have tried to solve over the years, sometimes if you wait long enough someone else will create the solution for you!

I discovered that Freek Murze had been tweeting about Laravel Event Projector. This turned out to be exactly what I needed to get started.

So, I needed to go backwards and first figure out how to take the existing data we have and create events from it. The work I needed to do here would also be used when I went back and refactored the existing user registration code to support the new event sourcing.

What do I need to start?

  • An aggregate that represents a talk
  • An event that needs to be triggered to create our talk aggregate
  • A script that reads in the existing data and creates those events

The aggregate looks like this:

declare(strict_types=1);

namespace App\Domain\User;

use App\Domain\User\Events\UserCreated;
use Spatie\EventProjector\AggregateRoot;

final class UserAggregateRoot extends AggregateRoot
{
    public function createUser(string $email, string $name, string $password)
    {
        $this->recordThat(new UserCreated($email, $name, $password));

        return $this;
    }
}

Remember, all I'm storing is the data about this user that is important to the system. It will automatically create a UUID that belongs to this aggregate. I'll worry about what ends up in the database we will read information from when I create a projector to extract data from the event store.
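If you are not a PHP reader, the recording pattern the aggregate uses can be sketched in a few lines of Python. This is purely a conceptual sketch of the pattern, not how Laravel Event Projector works internally:

```python
import uuid

class UserCreated:
    """Plain event object holding the data to be stored."""
    def __init__(self, email, name, password):
        self.email = email
        self.name = name
        self.password = password

class UserAggregateRoot:
    """Records events in memory until they are persisted to the event store."""
    def __init__(self, aggregate_uuid):
        self.uuid = aggregate_uuid
        self.recorded_events = []

    def record_that(self, event):
        self.recorded_events.append(event)

    def create_user(self, email, name, password):
        self.record_that(UserCreated(email, name, password))
        return self  # returning self allows chaining, like ->persist() in PHP

root = UserAggregateRoot(str(uuid.uuid4()))
root.create_user('user@example.com', 'Example User', 'hashed-password')
```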

Next, I have to create the code for the event that will in turn generate our user aggregate.

declare(strict_types=1);

namespace App\Domain\User\Events;

use Spatie\EventProjector\ShouldBeStored;

final class UserCreated implements ShouldBeStored
{
    /** @var string */
    public $email;

    /** @var string */
    public $name;

    /** @var string */
    public $password;

    /**
     * UserCreated constructor.
     * @param string $email
     * @param string $name
     * @param string $password
     */
    public function __construct(string $email, string $name, string $password)
    {
        $this->email    = $email;
        $this->name     = $name;
        $this->password = $password;
    }
}

It takes the user information passed into it and assigns it to class attributes.

Finally, I will create a console command that I can execute with php artisan that will loop through all my existing user records, storing aggregates for them.


declare(strict_types=1);

namespace App\Console\Commands;

use App\Domain\User\UserAggregateRoot;
use App\User;
use Illuminate\Console\Command;
use Ramsey\Uuid\Uuid;

class GenerateUserCreatedEvents extends Command
{
    /**
     * The name and signature of the console command.
     *
     * @var string
     */
    protected $signature = 'admin:generate_user_events';

    /**
     * The console command description.
     *
     * @var string
     */
    protected $description = 'Set an existing user to be an admin';

    public function handle() : void
    {
        foreach (User::all() as $user) {
            UserAggregateRoot::retrieve((string) Uuid::uuid4())
                ->createUser(
                    $user->email,
                    $user->name,
                    $user->password
                )
                ->persist();
        }
    }
}

The UserAggregateRoot uses some magic behind the scenes to then take the data I submitted and write it to the event store.

So what ends up being stored in the events table? In my current test environment I have exactly one user and here's a slightly-edited version of what got created.

id               | 4
aggregate_uuid   | 2b5f88da-1f51-4443-a080-c566c04d452e
event_class      | App\Domain\User\Events\UserCreated
event_properties | {"email":"chartjes@grumpy-learning.com","name":"Chris Hartjes","password":"clearlynotmypassword"}
meta_data        | []
created_at       | 2019-04-26 19:31:44

Again, I remind you of the central concept: we are creating events that do not necessarily map exactly to what ends up in a database. One of the central ideas of CQRS is separating the "command" (typically writing something to a database) from the "query" (typically reading something from a database). This gives me the flexibility to have multiple database tables that present information in different ways, which is especially useful for domains where multiple types of reports are required. Multiple tables with exactly the data you need are, in my opinion, better than one big table where you have to write customized queries to filter, group, and aggregate the things you are looking for.
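The idea of one event stream feeding multiple read models is language-independent, so here is a small Python sketch of it. The shapes below mirror the events table row shown above; the two read models are my own invented examples, not tables from OpenCFP Central:

```python
import json

# One stored event, shaped like a row from the events table above.
stored_event = {
    "event_class": "UserCreated",
    "event_properties": json.dumps(
        {"email": "user@example.com", "name": "Example User"}),
}

# Two read models built from the same event stream: a full users table
# for display, and a lookup keyed by email for fast queries.
users_table = []
users_by_email = {}

for event in [stored_event]:
    if event["event_class"] == "UserCreated":
        props = json.loads(event["event_properties"])
        users_table.append({"name": props["name"], "email": props["email"]})
        users_by_email[props["email"]] = props["name"]
```

Replaying the same events through a new projector is also how you would backfill a brand-new read model later, without touching the existing ones.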

First, I need to modify the controller method that handles user creation to create an aggregate instead of directly writing to the database.

Here's what it looks like right now:

/**
* Create a new user instance after a valid registration.
*
* @param  array  $data
* @return \App\User
*/
protected function create(array $data) : User
{
    return User::create([
        'name' => $data['name'],
        'email' => $data['email'],
        'password' => Hash::make($data['password']),
    ]);
}

Here's what it does instead:

protected function create(array $data) : User
{
    $newUuid = (string) Uuid::uuid4();

    UserAggregateRoot::retrieve($newUuid)
        ->createUser(
            $data['email'],
            $data['name'],
            Hash::make($data['password'])
        )
        ->persist();

    return User::where('email', $data['email'])->first();
}

Now, this is not exactly normal behaviour in a CQRS+ES application. Because Laravel's Auth system was not created with this in mind, I had to cheat a little to make the user registration system behave correctly.

Next I need to create a projector that will be triggered whenever the UserCreated event happens.

declare(strict_types=1);

namespace App\Domain\User\Projectors;

use App\Domain\User\Events\UserCreated;
use App\User;
use Spatie\EventProjector\Projectors\Projector;
use Spatie\EventProjector\Projectors\ProjectsEvents;

final class UserProjector implements Projector
{
    use ProjectsEvents;

    public function onUserCreated(UserCreated $event, $aggregateUuid) : void
    {
        User::create([
            'name' => $event->name,
            'email' => $event->email,
            'password' => $event->password
        ]);
    }
}

Alert readers will notice this is what the old controller method used to do.

Any projectors you write will be automatically detected and registered by the application.

So there you have it! A successful refactor of some existing code to support a new underlying paradigm. There is still more work to do and I'll share some of it in another blog post soon.

Watch Me Get Grumpy -- Zend Expressive Doctrine Configuration

March 17th, 2019

I am in the process of starting the dreaded Rewrite Of An Existing Application That Works. In this case, it is time that I turned OpenCFP from an install-it-yourself web application into a Software-as-a-Service offering.

As part of this rewrite I have decided to use CQRS and Event Sourcing instead of the traditional CRUD-backed-with-a-DB architecture that most of the web is built on.

I believe that an application that has so many domain-specific events associated with it will benefit greatly from the ideas underpinning CQRS and Event Sourcing. Anyway, the architecture is not up for debate since I'm the one doing it!

This app is going to replace what I already created at OpenCFP Central and I will cut over to this new one once I have implemented the two existing features:

  • allowing people to register accounts
  • allowing people who are running OpenCFP to use OpenCFP Central for single-sign-on

Because there is so much reworking to be done with the OpenCFP code base to make it a SaaS capable of hosting multiple events, I felt it was better to start fresh with the code base. Especially because I now need to add all the CQRS+ES implementation.

The existing version of the application is a standard CRUD-backed-with-a-DB that was built using Laravel. My research into figuring out how to add CQRS+ES led me to believe that I did not have the requisite knowledge of the framework to figure out how to make it work. Laravel is great in that it has lots of packages and add-ons to allow you to quickly build something. I felt like this was not going to help me in this case. Laravel is good! But not a great fit for someone with my level of expertise with it.

So I decided to use Zend Expressive as the framework to build this app. My online network of friends includes many people who have used the framework, and one of the best and most thorough examples of how to build an application using CQRS+ES was done by Marco Pivetta; it was built on Zend Framework and used Prooph for the CQRS+ES functionality.

(As an aside, using Zend Expressive has reminded me how much I have relied on 'batteries included' frameworks in recent years. Forcing myself to also write glue code is actually a good thing for me.)

So, I knew the framework, I knew what I could use for CQRS+ES. Now it was time to install some other tools to help me build out this version of OpenCFP.

I was going to require some sort of tool to create database migrations as the app gets built. I was also leaning towards not using an ORM but instead something like Doctrine DBAL, so I decided to also use Doctrine Migrations since it can be used with or without the ORM.

I found some great examples of how to set things up... and it just wouldn't work for me. The steps seemed straightforward, and I highly recommend watching Adam Culp's Beachcasts tutorial on configuring Doctrine ORM and DBAL. I had my database configured and working. I added in the code to allow the Zend Service Manager to locate Doctrine as required. The examples said "this should work just fine with DBAL." I had the 'migrations.php' and 'migrations-db.php' files and it Just Wouldn't Work.

Until I realized the key critical thing I had hand-waved away and not thought anything of -- environment variables.

The app is going to be deployed to Heroku, where I can set environment variables that can be accessed by code, both in a CLI and web environment. I use environment variables in my work at the day job so why wouldn't I do that here?

This is what my 'migrations-db.php' file looked like:

<?php
declare(strict_types=1);

return [
    'driver' => 'pdo_pgsql',
    'dbname' => \getenv('DB_DATABASE'),
    'user' => \getenv('DB_USER'),
    'password' => \getenv('DB_PASSWORD'),
    'host' => \getenv('DB_HOST')
];

When I would run the migration tool it would spit out errors telling me it could not read the database configuration file and a bunch of other noise that just made me grumpier and grumpier as I struggled to figure out what was wrong.

Eventually I decided to see what was actually inside those environment variables. To my surprise, they were empty! Ugh. But I knew what I could do to fix it: I would make use of Vance Lucas' dotenv tool to make sure the contents of my own '.env' file would be available.

After installing it using Composer as per the documentation, I added this code to my 'migrations-db.php' file:

use Dotenv\Dotenv;

if (file_exists(__DIR__ . '/.env')) {
    $dotenv = Dotenv::create(__DIR__);
    $dotenv->load();
}
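For readers outside the PHP ecosystem, the core of what a dotenv loader does can be sketched in a few lines of Python. This is a deliberately simplified sketch that ignores quoting, comments inside values, and variable expansion, which real dotenv libraries handle:

```python
import os

def load_dotenv(path):
    # Read KEY=VALUE lines and place them in the process environment,
    # without overwriting variables that are already set.
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if not line or line.startswith('#') or '=' not in line:
                continue
            key, _, value = line.partition('=')
            os.environ.setdefault(key.strip(), value.strip())
```

With that in place, calls like getenv('DB_DATABASE') see the values from the '.env' file, which is exactly what the PHP snippet above achieves for the migrations tool.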

Now the migrations tool worked just fine, and I was on my way towards the first step of the app -- building the user registration system and making sure authentication worked correctly.

If you have any comments or suggestions, please reach out to me via Twitter (my preferred way) or you can email me at chartjes@grumpy-learning.com.