End to end testing in React Native

I’ve been a professional React Native developer for just over a year now, having previously worked on a number of freelance and contracting engagements.

I’m a single developer maintaining two different apps for my day job, both in constant development. It’s easy to ‘lose your place’ when digging up a project from a month ago, and easy to let bugs slip in.

There’s no excuse for it unfortunately – it’s not OK to let bugs slip through because you didn’t keep good enough track.

For the most part, I’ve used Jest snapshot testing to render React Native components out to JSON snapshots, and then compare the differences. This is cumbersome, and basically requires as much test maintenance as code writing.
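For context, a minimal snapshot test looks something like this (ProfileScreen is a hypothetical component, not one from my apps):

import React from 'react';
import renderer from 'react-test-renderer';
import ProfileScreen from './ProfileScreen';

test('ProfileScreen renders consistently', () => {
    // Render the component to a JSON tree and compare against the stored snapshot.
    const tree = renderer.create(<ProfileScreen />).toJSON();
    expect(tree).toMatchSnapshot();
});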

Sure, it makes sure that a bug can’t slip into the document model without passing through your eyes, but it’s also easy to get some snapshot fatigue when 10 snapshots update after a decent chunk of coding. In retrospect, I don’t think Jest snapshot testing is that great for a mobile app.

Also, Jest component snapshots don’t see fatal errors that’ll crash your mobile application. If you’ve ever made the mistake of outputting an ASCII character *outside* of a <Text /> component, the whole app comes crashing down. Jest snapshots? No worries, everything passing!

E2E testing is a good, fast solution to this problem. Carefully designed E2E tests can work through every screen of your application (including with dynamic data) and make inputs like tapping, dragging, and typing text. You can assert the visible UI state of various components while the E2E tests run, but that’s not the good stuff.

The good part about E2E testing is screenshot diffing. In my case, we set up a test harness (basically a fake server for the app to connect to), run through a sequence to connect the app to the fake server, then visit every screen of the app and cycle through the settings on each screen to capture a screenshot of nearly every possible state of the app.
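I’ll save the full stack for the follow-up post, but as an illustration, a screen walk with screenshot capture looks something like this in Detox (a common E2E choice for React Native; the testIDs here are made up):

describe('Settings screen', () => {
    it('captures each settings state', async () => {
        // Navigate to the screen and capture its default state.
        await element(by.id('tab-settings')).tap();
        await device.takeScreenshot('settings-default');

        // Cycle a setting and capture again.
        await element(by.id('toggle-dark-mode')).tap();
        await device.takeScreenshot('settings-dark-mode');
    });
});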

It’s very slow compared to snapshot testing – the full pipeline takes about 2 minutes. BUT – we’re running an actual release build of the app, processing actual user input, and running it through the actual code. If there’s a hard crash on the user profile screen because I’ve accidentally put in some stray whitespace, the E2E tests are going to find it.

I’ve modified the CI pipeline to stash these screenshots as job artifacts too, so at any stage we can dig back to old branches and commits and see what the app looked like in its entirety.

The screenshots are instantly visually parseable. It’s not a big JSON diff of View > View > View > View > SVG > G > G > Text, which now has another <G> because of an SVG library version bump. It’s just a text label highlighted red because it’s moved.

Now that I’ve stabilised on a stack for E2E testing in React Native, I’ll do a quick write up next time about the pipeline that is working for me.

Scripting a simple Node.js server

This post covers scripting a simple Node.js server from scratch. It’s easy, but deceptively powerful.

We’ll use ES6-style JavaScript and HTTPS (with a self-signed certificate), set up a live-reloading server, and serve responses from an external JSON file.

What’s it for?

If you’ve got some kind of moving target (a versioned API, a usage-restricted API, an API that isn’t designed yet, an API with clunky authentication), it can drastically speed up your front-end development to have a quick Node.js server running to serve back JSON responses.

This is a two-part project; the other side collects and stores responses from the real API in JSON files.

Initial setup

Since it’s a Node.js project, we need a package.json to store our dependencies and project info. Run npm init to initialise the project and create the file.

npm init threw an error? You probably need to upgrade or install Node.js & npm – see this tutorial first.

npm init asks a few questions; press Enter until it’s done. You’ll have a bare package.json file in your project.

Create an index.js file, and enter the following code: console.log("Works!");

Set up live reloading

We’re going to use live reloading from the start – our server will restart anytime we make a change. We use nodemon for this. Make sure you’ve got nodemon installed globally by running npm install -g nodemon

Generating self signed certificates

Next we need to generate some self-signed certs for our HTTPS server to use. These are useful only for development.

openssl req -nodes -new -x509 -keyout server.key -out server.cert

Again it’ll ask a few questions; press Enter through them quickly. You should have a server.cert and a server.key in your project folder now.

Running the script the first time

In package.json, add a “start” key to the “scripts” object with value “nodemon index.js”

Your package.json should look like this:

{
  "name": "nodejs-server",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "nodemon index.js",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}

Run npm start to see the message “Works!” from above. Leave the script running.

If you get an error here, read it carefully and google the error. You’ll likely just be missing a library you need.

Now we’ll quickly butcher that index.js file to be an HTTPS server.

Setting up the nodejs server

Empty the file, and add the following

Import the https server library and the filesystem library (to read the certs):

const https = require('https');
const fs = require('fs');

Add the port in use, and a simple request handler.

const port = 3000;
const requestHandler = (request, response) => {
    // do something with the request here.
    console.log("Request received");
    response.end("Hello!");
};

Create the https server, and connect requests to the handler.

const server = https.createServer({
    key: fs.readFileSync('server.key'),
    cert: fs.readFileSync('server.cert')
}, requestHandler);

Tell the server to listen for requests.

server.listen(port, (err) => {
    if (err) {
        return console.log('Error occurred.', err)
    }

    console.log(`server is listening on ${port}, https`)
})

Each time you save index.js, you should notice the server restarting. It’s ready for a request now!

If you used port 3000 above, you should be able to visit https://localhost:3000/ to see a response generated by requestHandler.

You’ll receive a security notice about your self-signed certificate; continue past this.

Great! It’s working.

Sending back JSON from a file

To make this server useful, we need to connect it to some real world data.

For this demo, I borrowed books.json and dropped it in my responses.json file.

Download responses.json from here and drop it in your project.

If our test user visits https://localhost:3000/books, we want to serve up the JSON stored in the ‘books’ key of the responses.json file. Change requestHandler to do this now:

const requestHandler = (request, response) => {
    console.log("Request received to " + request.url);

    switch (request.url) {
        case "/books":
            fs.readFile("./responses.json", function (err, contents) {
                if (err) {
                    // Couldn't read the file – report a server error.
                    response.writeHead(500);
                    response.end("Error reading responses.json.");
                    return;
                }
                let data = JSON.parse(contents);
                response.writeHead(200, { "Content-Type": "application/json" });
                response.end(JSON.stringify(data.books));
            });
            break;
        case "/":
            response.end("Hello!");
            break;
        default:
            response.writeHead(404);
            response.end("Page not found.");
            break;
    }
};

To test, visit these 3 URLs. The first should return a JSON list of books, the second ‘Hello!’, and the third a 404 error.
https://localhost:3000/books
https://localhost:3000/
https://localhost:3000/movies
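You can also hit the server from the terminal; curl’s -k flag skips verification of our self-signed certificate:

curl -k https://localhost:3000/books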

Project files

The Node.js server in its finished state is available from GitHub here. If you have any difficulties getting the project up and running, please let me know.

Eloquent: Joining a table to itself (Many-To-Many)

You should be forewarned, this is a snippets post. I find many to many joins a bit clunky, so this is a good reminder for next time 🙂

On a side project, I need to link one or many Stores to other Stores. (A store applies to sell products from another store, creating a web of store permissions).

Create the table

The first step to the many to many is making the table. Laravel best practice is to join the two model names in alphabetical order (amodelname_zmodelname). Since both of my joins are to Store, my table is supplier_store_shipper_store.

Schema::create('supplier_store_shipper_store', function (Blueprint $table) {
    $table->increments('id');
    $table->integer("supplier_store_id");
    $table->integer("seller_store_id");
    $table->boolean("is_approved")->default(0);
    $table->string("approval_key")->nullable(); // used below when requesting and approving access
    $table->timestamp("access_requested_at")->nullable();
    $table->timestamp("approved_at")->nullable();
    $table->timestamps();
});

Seller Store is the requesting store; Supplier Store is the approving store. is_approved, access_requested_at, approved_at, and approval_key are all pivot fields that we need to keep track of as part of the relationship.

There are no additional ID fields required in the store table; this new table holds everything about the relationship.

Define the relationship in the model

By creating a belongsToMany relationship with the table name, both key names, and pivot fields we want, Laravel will quickly tell us which stores have been approved to sell.

In Models\Store.php, join this store to App\Models\Store via supplier & seller store IDs.
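A minimal sketch of what that relationship looks like (the method name, key order, and pivot fields are reconstructed from the snippets below; the inverse used later, approvedStores(), would swap the two key arguments):

// In App\Models\Store
public function approvedToSellFromStores()
{
    return $this->belongsToMany(
        Store::class,
        'supplier_store_shipper_store',
        'seller_store_id',     // key for this (seller) store
        'supplier_store_id'    // key for the related (supplier) store
    )->withPivot('is_approved', 'access_requested_at', 'approved_at', 'approval_key');
}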

Insert a row in the many to many table

Inserting a row in the many to many table is as simple as calling the relationship function we just defined, then attach(). Note how we also pass the pivot fields in an associative array as the second parameter.

$supplier_store->approvedToSellFromStores()->attach($store, [
    'is_approved' => 0,
    'access_requested_at' => Carbon::now(),
    'approval_key' => $approval_key
]);

Now we’ve got a record in our database documenting the seller store has requested access to the supplier store.

Modifying the pivot row

In this case, a store applies for access and the other store grants access. That’s two operations on the pivot row, so we need to be able to update it.

I find updating the pivot rows a bit clunky. First we load the store, then load the pivot rows, then filter them. The remaining rows are the rows we need to operate on.


$records = $supplier->approvedStores()
    ->wherePivot('approval_key', '=', $approval_key)
    ->wherePivot('is_approved', '=', 0);

if ($records->count() > 0) {
    $records->updateExistingPivot($seller, array('is_approved' => 1, 'approved_at' => Carbon::now()));
}

Calling updateExistingPivot with an associative array as the second parameter (as before!) allows us to update the pivot, and grant the seller access to the supplier products.

Back in our product directory, we can now use a nice helper function to make sure the row exists and the seller is approved to sell the product – making sure is_approved = 1 if you’re following along!

@if($store->hasApprovalToSell($product->store))
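A sketch of what that helper might look like (the body is an assumption; only the name comes from the Blade snippet above):

// In App\Models\Store
public function hasApprovalToSell(Store $supplierStore): bool
{
    return $this->approvedToSellFromStores()
        ->wherePivot('is_approved', 1)
        ->where('stores.id', $supplierStore->id) // assumes the default 'stores' table name
        ->exists();
}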

After some profiling, I’ll likely need to add an index to both supplier & seller store IDs in the database migration before this app goes to production.
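If profiling confirms it, a couple of extra lines inside the Schema::create() call above would do it:

$table->index('supplier_store_id');
$table->index('seller_store_id');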

Using rn-cli.config.js to link a shared library

When working on more than one React Native project, it makes sense at some point to extract shared tools into an external library.

We then reference the shared library from all projects, and maintain a single codebase for the shared code. For whatever reason, the Metro bundler used by React Native does not support symlinks, so we need a hack.

I’ve found two ways to do this:

  1. Work on the library externally, and directly require the source in the React Native project (import sharedlib from '../../test/some-shared-code')
  2. Work on the library externally, and require the shared library in package.json (import sharedlib from 'sharedlib'). For dev purposes, rsync the changes over.

This post covers working on the library externally, and requiring it via rn-cli.config.js.

Work on the library externally

In React Native, this takes some massaging to work. The Metro bundler doesn’t allow symlinks, so we need to add the external root to the Metro config.

This approach has issues when working on a library that will be used in multiple projects, and when worked on by others. It complicates project setup and isn’t an elegant way to present your project.

Example rn-cli.config.js, pre-Babel 7 (< RN 0.57)

The shared library lives in ../shared-library, a sibling of the project folder:

const Path = require('path');
const blacklist = require('metro').createBlacklist;

module.exports = {
    getTransformModulePath() {
        return require.resolve('react-native-typescript-transformer');
    },
    getBlacklistRE: function () {
        return blacklist([]);
    },
    extraNodeModules: {
        react: Path.resolve(__dirname, "node_modules/react"),
        "react-native": Path.resolve(__dirname, "node_modules/react-native")
    },
    getSourceExts() {
        return ['ts', 'tsx'];
    },
    getProjectRoots: () => [__dirname, Path.join(__dirname, "../shared-library")]
};

Note specifically ‘getProjectRoots’, where we tell the bundler to check in the shared-library folder as well.

Also note ‘extraNodeModules’, where we tell the bundler which dependencies to share with the shared-library when adding it.

‘getBlacklistRE’ takes an array of file patterns to ignore from the current project or included project, and returns the blacklist regex.

Example rn-cli.config.js, Babel 7 (RN 0.57+)

const Path = require('path');

module.exports = {
    resolver: {
        sourceExts: ['tsx', 'ts', 'js'],
        extraNodeModules: {
            react: Path.resolve(__dirname, "node_modules/react"),
            "react-native": Path.resolve(__dirname, "node_modules/react-native")
        },
        blacklistRE: /shared-library\/node_modules\/react-native\//
    },
    transformer: {
        babelTransformerPath: require.resolve("react-native-typescript-transformer")
    },
    watchFolders: [Path.join(__dirname, "../shared-library")]
};

Note the different syntax.

Why this method sucks

Linking to your library from the app with rn-cli.config.js works perfectly fine: the Metro bundler does what it says on the box, the file system watcher works, and live reloading still works.

The problems appear down the track. In my case, I had two apps making use of a shared helper library. I’d work on app 1 for a few weeks, and then app 2 for a few weeks. Sometimes there’d be an update to the helper library.

Next time I dig out app 1 to work on it, there’s uncertainty about what’s actually changed in the shared library. Something has broken, a test is failing, a method has moved. When functionality was being improved or added for app 1, app 2 was coming along for the ride whether I liked it or not.

There’s no real way to pin each app to a stable version as development continues, and the CI/CD pipeline became just as brittle – if I re-run a build next week, and I’ve made a commit to the shared library in the meantime, the build is a different output. Big no-no.

What’s the solution?

I’ll cover this in a better post soon, but the solution to this problem is to require the shared library in the package.json dependencies of the app projects.

During development, the library is rsynced into the app’s node_modules folder. You’re able to work on and debug the shared library and the app codebase at the same time.
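A sketch of that dev-time sync (the paths here are assumptions about your layout):

# Copy the library source into the app's node_modules on each change,
# skipping the library's own node_modules folder.
rsync -av --delete --exclude node_modules ../shared-library/ ./node_modules/shared-library/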

When the library is finished being worked on, commit, push, and pin the dependency version in the app to the commit hash.
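As an illustration (the repository URL and hash are placeholders), the pinned dependency in the app’s package.json might look like:

"dependencies": {
    "shared-library": "git+https://github.com/your-org/shared-library.git#abc1234"
}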

rm -rf node_modules, npm install, and now the app is pinned to the last *actual version* of the shared library you ran the app against.

The only shortcoming I’ve found with this method so far is it requires discipline/learning to make sure you’re working on the source file (in the shared library project) and not the destination file (in the app project/node_modules/shared-library folder).

Using JQ and merging JSON objects into an array

I have a collection of files named file1.json, file2.json, file3.json

Inside each file is a JSON object representing an API response.

{"result": {"foo": "bar", "etc": "etc"}}

As part of a bigger project, I need these merged together so I can replay them later.

jq is a power tool for this type of work, deep diving into the JSON structure and extracting the necessary bits and pieces.
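For example, pulling a single field out of one of these files (keys from the sample object above) is a one-liner:

jq '.result.foo' file1.json
# "bar"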

I’m only merging though, so here’s how to take each file and merge it into an array (outputting to out.json).

First up we need to brew install jq

Then we can run the command on the collected files:

jq -s '.' file*.json > out.json

[{"result": {"foo":"bar", "etc": "etc"}}, {"result": {"foo":"bar", "etc": "etc"}}, etc]