Solar in Bargo – numbers

We installed solar at our house in Bargo 2574 (a town in Wollondilly, NSW) in March 2020.

It cost approximately $5k for a 6.5kW system (feeding into a 5kW inverter). Half the panels face west, and half face north.

I’ve always been concerned that people who install solar need to convince the world that it was a good decision, and they often mask the real numbers involved. With that in mind, here are some real numbers.

A few interesting points, after a week of ownership.

A sunny day in summer generates >30kWh ($9 of electricity). The effective timespan is 8:30am (~1kW) until 6pm (~1kW). From 11am until 4pm the system consistently generates between 4-5kW. The highest I’ve seen is just slightly over 5kW, so I assume it maxes out there (the inverter is rated at 5kW).

A gloomy day, raining from sunrise to sunset, generates ~5kWh for the whole day ($1.50 of electricity).

Our usage (6 person household, 3 x split system air conditioners, TVs left on, 2 x fridges, a dryer in constant use, and a pool) is 30-40kWh/day ($9-12 of electricity per day). This means on sunny days we’re generating about as much power as we’re using.

Unfortunately, the feed-in payment will be 10c/kWh (where it costs ~30c/kWh to consume), so we expect that even with perfect summer days we will still generate a bill*.

I’m not sure what our day/night consumption split is relative to the solar generation window of 8:30am-6pm, and obviously those goalposts will continue to move after summer. We’re expecting a 30-40% drop in our electricity bills and a roughly 2 year repayment window on the $5k outlay.
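
As a rough cross-check on that repayment estimate, here’s a back-of-envelope calculation using the rates above (the year-round average generation and the self-consumption split are assumptions on my part):

// back-of-envelope, using the rates quoted in this post
const consumeRate = 0.30   // $/kWh to buy power
const feedInRate = 0.10    // $/kWh paid for exports
const dailyGeneration = 25 // kWh/day, assumed year-round average
const selfConsumed = 0.5   // assumed fraction used on-site rather than exported

const dailySaving =
  dailyGeneration * selfConsumed * consumeRate +    // power we no longer buy
  dailyGeneration * (1 - selfConsumed) * feedInRate // power we export

console.log(dailySaving * 365) // ~1825, i.e. ~$1,800/year saved under these assumptions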

*At the moment, our power meter (which is read manually every 3 months) cycles backwards when we are generating excess power. Wish we could leave it that way – it’s effectively a 30c/kWh feed-in payment! I have a feeling the energy company won’t waste time fixing that.

I’ll follow this up when we receive our next power bill. The solar system was installed the day after our quarterly bill rolled over, so we should get a very realistic number on what we’ve saved.

Vue.js fragments

All manner of googling didn’t turn up a good result for this problem.

Sometimes, you need a wrapper component with logic attached that does not render to the UI.

For example, child components sized as a percentage of their parent can’t just have a div wrapped around them for conditional logic – the extra element breaks their sizing.

Another example (which I found a different, novel solution for) is rendering one of several child components at a root node. Only one renders at a time, but several are defined and excluded by logic.

React has a ‘Fragment’ for this; as far as I know, Vue.js does not.

Using ‘vue-fragment’ adds a helpful fragment tag that is removed during render, but allows applying logic and generally acts as a ‘root’ for multiple siblings.

Below, we are using a ‘fragment’ tag to show or hide 2 items based on whether the user is an admin. Naively wrapping these in a div with the logic attached breaks the layout.

                <b-navbar-nav>
                    <b-nav-item :to="'/'">
                        Dashboard
                    </b-nav-item>
                    <b-nav-item :to="'/something'">
                        Something
                    </b-nav-item>

                    <fragment v-if="user.role === 'Admin'">
                        <b-nav-item :to="'/item-1'">
                            Item 1
                        </b-nav-item>
                        <b-nav-item :to="'/item-2'">
                            Item 2
                        </b-nav-item>
                    </fragment>
                </b-navbar-nav>

Install via yarn (or npm i vue-fragment):

yarn add vue-fragment

Add the fragment component

import { Plugin } from 'vue-fragment'
import Vue from 'vue'
Vue.use(Plugin)
Then use it in any template:

<fragment>
  <div>Use the fragment! The fragment tag will be removed at render time.</div>
</fragment>

Nuxt.js Mixin – access global property in template

Adding a mixin in Nuxt.js is just like adding a plugin. For my project, I needed a date format property set once but accessible in all my templates (named ‘moment_format’).

Step 1, In plugins/, create a plugin file (mine is called ‘vue-format-global.js’).

// plugins/vue-format-global.js
import Vue from 'vue'

var mixin = {
  data: function () {
    return {
      moment_format: 'DD/MM/YYYY HH:mm'
    }
  }
}

Vue.mixin(mixin)

Step 2, connect up the plugin in nuxt.config.js

  // in nuxt.config.js

  /*
  ** Plugins to load before mounting the App
  */
  plugins: [
    /* other plugins ... */
    '~/plugins/vue-format-global'
  ],

Step 3, restart the app and use freely in your templates.

// inside a template (the ‘moment’ filter here comes from vue-moment or similar)
<template>
  <p>effective since {{ data.effective_date | moment(moment_format) }}</p>
</template>
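
Because it’s a global mixin, the property is also available on this in component scripts too. A quick sketch (assuming moment itself is installed):

// inside a component's script block
import moment from 'moment'

export default {
  methods: {
    formatDate (date) {
      // moment_format comes from the global mixin's data
      return moment(date).format(this.moment_format)
    }
  }
}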

Nuxt.js and Vuex – dumping flash messages

Context:

  • A child component has received an error message, and needs to communicate it to the parent layout.
  • There’s no unhacky way to do that, so we should put it in the application store (vuex).
  • When the user navigates, the error is retained forever more.

It’s a problem seemingly without a clean answer.

Step 1: In the vuex store, create an errors array to hold the errors. Create two mutations: ‘set_errors’ and ‘clear_errors’.

// store/index.js
export const state = () => ({
  errors: []
})

export const mutations = {
  set_errors (state, errors) {
    state.errors = errors
  },
  clear_errors (state) {
    state.errors = []
  }
}

Step 2: In the component generating the error, put the errors in the vuex store.

// sign in has failed.
.catch(e => {
  let errors
  switch (e.response.status) {
    case 422:
      errors = this.translateErrors(e.response.data.errors)
      this.$store.commit('set_errors', errors)
      break
  }
})

Step 3: In the ‘Errors’ component that lives in the layout, capture and display the errors. Confirm these two pieces are communicating.

// components/Errors.vue

<template>
    <div>
      <div v-for="(error, index) in errors" :key="index">
        {{ error }}
      </div>
    </div>
</template>

<script>
import { mapState } from 'vuex'
export default {
  computed: mapState(['errors'])
}
</script>

If that’s all working, it’s time to clear the message between navigation events.

Step 4: Create a ‘clearFlash’ middleware.

// middleware/clearFlash.js

export default function (context) {
  if (context.store.state.errors.length > 0) {
    context.store.commit('clear_errors')
  }
}

Step 5: Wire up the middleware in nuxt.config.js. Add the router and middleware keys if needed.

//nuxt.config.js

  router: {
    middleware: 'clearFlash'
  },

All done and working!

Dumping redux state on log out

When a user signs out of my mobile app and signs in as a different user, we need to make sure no application state can be carried over.

Adding a simple ‘LOGGED_OUT’ action that drops the application state is a straightforward solution. I’m keeping a few properties.

Here’s the normal reducers object that is exported.

var { combineReducers } = require('redux')

const reducers = combineReducers({
  attachments: require('./attachments'),
  user: require('./user'),
  //...etc
});

module.exports = reducers //or export default etc.

We can wrap the existing reducer with a single reducer that resets the state. Below, I’ve chosen to keep a few properties of my application state (out of about 20).

//existing reducers
const reducers = combineReducers({
  attachments: require('./attachments'),
  user: require('./user'),
  //...etc
});

//wrapped with the logged out action
const reducersWithLogoutDumpState = (state, action) => {
  if (action.type === 'LOGGED_OUT') {
    // build a fresh state, retaining only the slices we want to keep
    const newState = {}
    const keep = ['projects', 'device', 'user']
    keep.forEach(k => {
      newState[k] = state[k]
    })
    state = newState
  }
  return reducers(state, action)
}

module.exports = reducersWithLogoutDumpState

Here I’m creating a fresh state object (newState) and retaining the projects, device, and user properties from the old state.

Note that the export has changed to the new function name, so there’s no other changes throughout the application.
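
For completeness, the sign-out handler only needs to dispatch the action. A minimal sketch (the store setup shown here is assumed, not from the actual app):

var { createStore } = require('redux')
var reducers = require('./reducers') // the wrapped reducer exported above

const store = createStore(reducers)

// on sign out: everything except projects, device and user is reset
store.dispatch({ type: 'LOGGED_OUT' })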

(An adaptation of the answer here: https://stackoverflow.com/questions/35622588/how-to-reset-the-state-of-a-redux-store.)

Handling payments securely in React Native

I’m rebuilding an e-commerce mobile app side project (WooToApp) that requires payment for orders created.

The initial gateway is Paypal, but it’s important that the app can support many more gateways, so the architecture needs to be re-usable and secure.

Basic architecture

Assumptions

We need to assume that the mobile app context is insecure. It’s easy to decompile a mobile app and extract keys, and it’s easy to replay HTTP requests that mark an order as paid. Both of these are deal breakers.

We can assume the payment gateway and the app backend are secure.

With these assumptions, I made this quick architecture map to follow.

Quick explanation

It’s a relatively common pattern – the mobile app signals intention to pay to the backend.

The backend talks to the gateway, and returns the mobile app just enough information to pay (a redirect to a payment page).

The gateway then notifies the backend that payment has been authorised, which updates the store and the mobile app.

Note above that the only components doing any sort of heavy lifting are our trusted components (app backend and payment gateway). We don’t trust any sensitive data from the mobile app or the ecommerce store.

Explicitly defining the architecture before starting work (and fine-tuning along the way) means it’s easy to get into ‘thinking’ mode, define the secure boundaries, and map it all out.

In the next sitting when it’s time to write the implementation, the architecture and security considerations are both fresh in your mind AND documented. It’s a lot easier than architecting and reasoning about security along the way.

Payment Handling

I won’t dive into the server side payment handling here – there’s nothing innovative in it. A nodejs script (the app backend) captures the payment request from the mobile app and creates a payment request that Paypal understands. Paypal gives the backend a redirect URL.
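
For context, here’s a minimal sketch of that backend step (Express here, and createPaypalRedirect is a hypothetical stand-in for the real Paypal API call – names are illustrative, not from the actual project):

const express = require('express')
const app = express()
app.use(express.json())

// The mobile app signals intent to pay. All gateway secrets stay server-side.
app.post('/orders/:id/pay', async (req, res) => {
  // createPaypalRedirect is a placeholder for the real gateway request,
  // made with keys that never ship inside the app bundle.
  const redirectUrl = await createPaypalRedirect(req.params.id)
  // Return just enough information for the app to pay.
  res.json({ redirectUrl })
})

app.listen(3000)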

In the mobile app, we show a secure frame for the customer to make payment to the gateway. Paypal redirects the user to a URL that notifies the backend that payment was received, which then redirects the user to the app’s thank-you page.

The mobile app has zero knowledge of paypal libraries and integrations. There’s no surface area for the mobile app to be vulnerable. Secret keys are never safe in a mobile application.

Payment UI

This is a pretty important one. The regular webview in React Native is susceptible to manipulation and as far as I know should not be used for any type of third party secure communication in React Native. For example, an app developer is able to inject a JS keylogger. It’s flawed by design.

iOS and Android both make a secure browser context available that is not susceptible to manipulation, which has been abstracted into a useful library here.

Once payment has been made, the gateway success URL redirects back to a deep link for the mobile app (think myapp://order-paid/505). React-Navigation deep linking works with simple code changes to handle this.

There’s a small issue to work around – the popped out web browser context successfully causes the deep link to fire and navigate to the correct page in the mobile app, but you can’t see it because the web browser context stays open on the top.

The Order Paid page will always be the result of a deep linked action, so we can just dismiss the browser when that page mounts.
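
A sketch of that workaround, assuming react-native-inappbrowser-reborn as the secure browser wrapper (the library above may differ, so treat this as illustrative):

import React from 'react'
import { Text } from 'react-native'
import { InAppBrowser } from 'react-native-inappbrowser-reborn'

// Screen reached via the myapp://order-paid/:id deep link.
export default class OrderPaid extends React.Component {
  componentDidMount () {
    // The browser context is still sitting on top after the deep link fires,
    // so dismiss it as soon as this screen mounts.
    InAppBrowser.close()
  }

  render () {
    return <Text>Thanks! Your order has been paid.</Text>
  }
}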

Notes: Building a React App using Lambda, Dynamo DB and API Gateway – Part 1 (The back-end)

These are notes and additions from working through this youtube video. In the video and in the notes below, the AWS CLI is used to configure Lambdas, a DynamoDB table and an API Gateway.

Setting up an IAM user

Visit the IAM Console and create a ‘Programmatic Access’ user.

Add the user to ‘Administrator’ group.

Install the AWS CLI tools.

brew install awscli

Configure AWS CLI tools.

aws configure

Paste in the Access Key ID and the Secret Access Key. Leave region name and output format as defaults.

Dynamo DB – Create a table

aws dynamodb create-table --table-name ToDoList \
--attribute-definitions AttributeName=Id,AttributeType=S \
AttributeName=Task,AttributeType=S \
--key-schema AttributeName=Id,KeyType=HASH \
AttributeName=Task,KeyType=RANGE \
--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5

We’ve created a table with attributes Id (S = String) and Task (S = String), where Id is the hash (partition) key and Task is the range (sort) key.
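
As a quick optional sanity check that the table accepts writes (item syntax per the DynamoDB CLI):

aws dynamodb put-item --table-name ToDoList \
--item '{"Id": {"S": "1"}, "Task": {"S": "test item"}}'
aws dynamodb scan --table-name ToDoList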

We’ll need the TableArn later, so use this JQ command to store the TableArn in an environment variable.

TABLEARN=$(aws dynamodb describe-table --table-name ToDoList | jq -r '.Table.TableArn')

Create role and policy

The role is a fairly standard template, and is just a bare role with lambda access.

Save the following to lambda_role.json

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Principal": {
				"Service": ["lambda.amazonaws.com"]
			},
			"Action": "sts:AssumeRole"
		}
	]
}

Create the role on AWS:

aws iam create-role --role-name lambda-role --assume-role-policy-document file://lambda_role.json

The policy adds put and scan permissions to the role.

Create policy.json and add the following. Replace the Resource value with the output from echo $TABLEARN (that we saved above).

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Action": ["dynamodb:PutItem", "dynamodb:Scan"],
			"Resource": "replace me---->arn:aws:dynamodb:us-east-1:478724445133:table/ToDoList"
		}
	]
}

Attach the policy to the role.

aws iam put-role-policy --role-name lambda-role --policy-name dynamodb-access --policy-document file://policy.json

We’ll save the role ARN for use later (we need to assign it to the lambda functions).

ROLE_ARN=$(aws iam list-roles | jq '.Roles[] | select(.RoleName=="lambda-role") | .Arn' -r) && echo $ROLE_ARN

Create the node.js scripts & turn them into Lambdas.

First up is the file that fetches our to-dos from DynamoDB. It uses a full scan, and there’s no filtering or paging.

Run code get.js and drop in this code to scan the table:

const AWS = require("aws-sdk");

const documentClient = new AWS.DynamoDB.DocumentClient();

exports.getAllTasks = (event, context, callback) => {
	const params = {
		TableName: process.env.TABLE_NAME,
	};

	documentClient.scan(params, (err, data) => {
		if (err) {
			callback(err, null);
		} else {
			callback(null, data.Items);
		}
	});
};

zip it up with zip get.zip get.js

We’ll push up get.zip and turn it into a lambda:

aws lambda create-function --function-name get-all-tasks --zip-file fileb://get.zip --runtime nodejs8.10 --role "$ROLE_ARN" --handler get.getAllTasks --environment Variables={TABLE_NAME=ToDoList}

Same goes for post.js. Run code post.js and paste this code in. It takes the task supplied and pushes it to the TABLE_NAME table in DynamoDB. TABLE_NAME is populated from an environment variable, set below when we create the lambda.

const AWS = require("aws-sdk");

const uuid = require("uuid");

const documentClient = new AWS.DynamoDB.DocumentClient();

exports.addTask = (event, context, callback) => {
	const params = {
		Item: {
			Id: uuid.v1(),
			Task: event.task,
		},
		TableName: process.env.TABLE_NAME,
	};

	documentClient.put(params, (err) => {
		if (err) {
			callback(err, null);
		} else {
			// put returns an empty object on success, so return the item we wrote
			callback(null, params.Item);
		}
	});
};

Zip it up: zip post.zip post.js. (Note: unlike aws-sdk, uuid isn’t built into the Lambda runtime, so if the function can’t find it, npm install uuid and include node_modules in the zip: zip -r post.zip post.js node_modules.)

Save as a lambda function:

aws lambda create-function --function-name add-task --zip-file fileb://post.zip --runtime nodejs8.10 --role "$ROLE_ARN" --handler post.addTask --environment Variables={TABLE_NAME=ToDoList}

Creating the API

Creating the API involves a ton of steps. I’ve made heavy use of environment variables below to save the chopping/changing and leave less room for errors.

  1. Creating the API Gateway
  2. Creating the route (resource)
  3. Adding methods to the route (GET & POST)
  4. Adding responses to the methods
  5. Adding integrations (connecting up the lambdas)
  6. Adding integration responses
  7. Deploy to a stage

Create the API:

aws apigateway create-rest-api --name 'To Do List'

Capture the Rest API ID (into APIID) for later use (we will need it *a lot*). Capture the root ID as well (the ID of the resource that serves /).

APIID=$(aws apigateway get-rest-apis | jq '.items[] | select(.name=="To Do List") | .id' -r) && echo $APIID
ROOTID=$(aws apigateway get-resources --rest-api-id $APIID | jq '.items[] | select(.path=="/") | .id' -r)

Create the /tasks resource/route (and on the second line, capture its ID in an environment variable).

aws apigateway create-resource --rest-api-id $APIID --parent-id $ROOTID --path-part "tasks"
TASKSID=$(aws apigateway get-resources --rest-api-id $APIID | jq '.items[] | select(.path=="/tasks") | .id' -r)

Add GET and POST methods to the new /tasks endpoint

aws apigateway put-method --rest-api-id $APIID --resource-id $TASKSID --http-method GET --authorization-type NONE
aws apigateway put-method --rest-api-id $APIID --resource-id $TASKSID --http-method POST --authorization-type NONE

Add a 200 response handler to the 2 new methods.

aws apigateway put-method-response --rest-api-id $APIID --resource-id $TASKSID --http-method GET --status-code 200
aws apigateway put-method-response --rest-api-id $APIID --resource-id $TASKSID --http-method POST --status-code 200

Create the integrations: head over to the AWS Management Console and manually wire up the lambdas to the API Gateway methods (GET and POST).
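
(For reference, the pure-CLI equivalent is roughly the following – treat it as a sketch, since the exact ARNs and region will differ. The console route above is easier.)

LAMBDA_ARN=$(aws lambda get-function --function-name get-all-tasks | jq -r '.Configuration.FunctionArn')
aws apigateway put-integration --rest-api-id $APIID --resource-id $TASKSID \
--http-method GET --type AWS --integration-http-method POST \
--uri arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/$LAMBDA_ARN/invocations
aws lambda add-permission --function-name get-all-tasks --statement-id apigateway-get \
--action lambda:InvokeFunction --principal apigateway.amazonaws.com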

Connecting a lambda to an API Gateway method is an ‘integration’. Add an integration response of 200 for the 2 integrations.

aws apigateway put-integration-response --rest-api-id $APIID --resource-id $TASKSID --http-method GET --status-code 200 --selection-pattern ""
aws apigateway put-integration-response --rest-api-id $APIID --resource-id $TASKSID --http-method POST --status-code 200 --selection-pattern ""

Now we’re ready to deploy the API. Pick a stage name (such as dev) and deploy the API to the stage.

aws apigateway create-deployment --rest-api-id $APIID --stage-name dev

Now you can request the todos from the API, and post new todos.

Note ‘dev’ below is the stage name. Save the endpoint address:

ENDPOINT=https://$APIID.execute-api.us-east-1.amazonaws.com/dev/tasks

Save a new item to the endpoint:

curl -X POST -d '{"task": "Eat"}' $ENDPOINT

Request all items from the endpoint (and use jq to pretty print):

curl -X GET $ENDPOINT -s | jq .

Intro to AWS using CLI tools

Following along some useful youtube videos, I’ve created a cheat sheet below for reference when setting up AWS services using the CLI tools.

This post assumes a level of familiarity with basic AWS services through the management console – you should know what a security group is and what an instance is, etc.

Youtube videos:

1. cli basics: https://www.youtube.com/watch?v=_P0fgqt99RA
2. cloudformation basics: https://www.youtube.com/watch?v=EVK8ultk-u0
3. lambda basics: https://www.youtube.com/watch?v=ZybIYqjXt1g

The snippets below assume the aws command line tools have already been installed, via a guide such as this one for installing AWS tools with pip.

You should also install jq via brew, as responses by default are JSON objects. jq allows easy filtering – for example, fetching an array of IDs from a big JSON response:

brew install jq

CLI Basics

On a fresh CLI install, we have no permissions. Run aws configure to enter a set of API keys that will be used throughout the process.

To create a new EC2 instance, we need a pre-requisite security group to allow access to the instance, and a key pair to connect to it.

Create a security group

Creating the security group, with no rules:

aws ec2 create-security-group --group-name cli-example --description "this is the cli example"

The response will return a “GroupId”. You should capture this in a text doc for later.

{
    "GroupId": "sg-0666aba64453e0791"
}

Opening up SSH on the security group

aws ec2 authorize-security-group-ingress --group-name cli-example --protocol tcp --port 22 --cidr 0.0.0.0/0

Allows ANY IP through the security group via TCP/port 22. In production, this IP range should be tightly restricted.

Creating a keypair to SSH in

aws ec2 create-key-pair --key-name test-key --query "KeyMaterial" --output text > test-key.pem

Creates a keypair and saves the private key to ‘test-key.pem’. The key file is created with overly permissive permissions, so it’s important to give it only ‘user read’ permissions (400):

chmod 400 test-key.pem

It’s worth mentioning that most create commands have a corresponding describe command. In this case, we can describe the key pairs attached to the account. This is useful for checking what’s still running during cleanup, and for checking what was actually created in the case of failures.


aws ec2 describe-key-pairs

Finding and creating an instance

We’re going to gloss over the whole giant topic of working out which image you want, and assume you’re happy to use the pre-configured Amazon Linux 2 AMI.

Using a describe-images request and some clever jq (installed at the start), we can return the AMI ID for the latest Amazon Linux 2 image. At the time of this post, it’s ami-0b898040803850657.
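
That lookup is roughly the following (the name filter is an assumption about the Amazon Linux 2 AMI naming scheme):

aws ec2 describe-images --owners amazon \
--filters 'Name=name,Values=amzn2-ami-hvm-*-x86_64-gp2' 'Name=state,Values=available' \
| jq -r '.Images | sort_by(.CreationDate) | last | .ImageId'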

It’s time to spin up an instance with the AMI ID, security group ID, and key pair we made above. Make sure to drop yours in below instead of mine:

aws ec2 run-instances --image-id ami-0b898040803850657 --security-group-ids sg-0666aba64453e0791 --instance-type t2.micro --key-name test-key

Extract the InstanceId and PublicIpAddress from the response, and connect to the public IP address using the private key.

ssh ec2-user@<public-ip> -i test-key.pem

With any luck, you’ll be connected to your new instance!

When you’re done, make sure to stop the instance (or remove it entirely with terminate-instances).

aws ec2 stop-instances --instance-ids "i-0a7cbe4f97c0515a0"

Describe the instances until you can see that the instance has stopped.


aws ec2 describe-instances | jq '.Reservations[].Instances[] | [.State, .InstanceId]'
[
  {
    "Code": 80,
    "Name": "stopped"
  },
  "i-0a7cbe4f97c0515a0"
]
[
  {
    "Code": 16,
    "Name": "running"
  },
  "i-0ae6334455914efdb"
]

Deploying a React Native Web project with CircleCI

  1. Generate a private & public SSH key for the CI server.
    1. Paste the private key into circle CI.
    2. Grab the key fingerprint from circle CI for later.
    3. Paste the public key into ~/.ssh/authorized_keys on the host.
  2. Create a .circleci folder in the root of your project with a config.yml file underneath.
  3. Adjust the following circle CI config as necessary. The file:
    1. loads a node environment
    2. allows the fingerprint supplied to connect
    3. restores cached node_modules and yarn cache
    4. runs yarn install
    5. saves caches
    6. adds expo-cli – currently not cached, so I’ll play with that further
    7. builds the project (into web-build/ subfolder)
    8. uploads the project via scp to the remote server

version: 2
jobs:
  build:
    working_directory: ~/web
    docker:
      - image: circleci/node:8
    steps:
      - add_ssh_keys:
          fingerprints:
            - "<the ssh fingerprint from above>"
      - checkout
      - restore_cache:
          key: yarn-v1-{{ checksum "yarn.lock" }}-{{ arch }}

      - restore_cache:
          key: node-v1-{{ checksum "package.json" }}-{{ arch }}

      - run: yarn install

      - save_cache:
          key: yarn-v1-{{ checksum "yarn.lock" }}-{{ arch }}
          paths:
            - ~/.cache/yarn

      - save_cache:
          key: node-v1-{{ checksum "package.json" }}-{{ arch }}
          paths:
            - node_modules

      - run: yarn global add expo-cli
      - run: CI=false && yarn web-build && CI=true
      - run: scp -o StrictHostKeyChecking=no -r ./web-build/* <user>@<your-server>:/var/www/path/to/app/

As the project wears on, I’ll add some unit testing and end to end testing to this pipeline.

Footnote: Deploying expo build:web to a subfolder

I had a hell of a time finding the right incantation to get this working. There’s different advice everywhere.

The solution is not:

  • homepage field in package.json
  • Anything in app.json
  • Any command line flags/switches in expo build

The solution is to customize expo’s webpack file, to specify that there is an output subdirectory.

  • Run expo customize:web and choose to export webpack.config.js
  • Specify the public path required. e.g. my webpack.config.js now looks like:

const createExpoWebpackConfigAsync = require("@expo/webpack-config");

module.exports = async function(env, argv) {
  const config = await createExpoWebpackConfigAsync(env, argv);
  // Customize the config before returning it.

  config.output.publicPath = "/app/";
  return config;
};

  • Note the addition of config.output.publicPath