Routing and Easy Form Fields on FRETS 0.3

Made a few changes to my little web frontend library, FRETS. First of all, I updated the online API documentation to clean up old pieces of code and document more functions in the main module.

First, one big bug was fixed. We were getting double rendering of the same state change because I wasn't checking the cache properly. The new cache-checking code should prevent double renders while still allowing rendering to happen when an async function (like a fetch) calls FRETS.render(newProps).

Here's an example of the right way to call async functions from an event listener (action):

F.actions.loadUser = F.registerAction((e: Event, props: AppProps) => {
  fetch("https://jsonplaceholder.typicode.com/users")
    .then((response) => response.json())
    .then((json) => {
      const user = json[Math.floor(Math.random() * 10)];
      console.log("received fetch");
      props.username = user.username;
      F.render(props);
    });
  return props;
});

Fields Registry

In previous apps, for every single field that a user could change, I had to go through several steps.

  • Add a property to the main modelProps class
  • Add an empty action function signature to the main actions class.
  • Add a concrete implementation of the action event handler. Usually it's some variation of this code (a fuller sketch follows below):
props.someProperty = (e.currentTarget as HTMLInputElement).value;
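
For context, here is roughly what that per-field boilerplate added up to before 0.3. This is a hypothetical sketch: the property, action, and field names are made up for illustration.

class AppProps {
  public username: string = ""; // 1. property on the main modelProps class
}

class AppActions {
  public changeUsername: (e: Event) => void; // 2. empty action function signature
}

// 3. concrete event handler implementation, registered at startup
F.actions.changeUsername = F.registerAction((e: Event, props: AppProps) => {
  props.username = (e.currentTarget as HTMLInputElement).value;
  return props;
});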

Writing the same boilerplate for every field gets repetitive. I thought about how we as developers want to add functionality to our apps in a flexible and composable way. That's what components in Vue and React are all about. With those libraries you are building an object to fit some sort of component structure so that you can register properties inside of child components that all get rolled up into the master component. But I want to keep FRETS free of configuration objects and string key conventions that need to be memorized.

So, I added registration methods on FRETS so that you can call a registerField() method from inside of your UI rendering methods. The entire FRETS app object is expected to be injected into your render methods now. That way you can still extract model props and actions from the app instance for rendering the UI, and now you can also add fields to the main registry — and FRETS takes care of the repetitive event handler creation.

const countField = app.registerField<string>("count", "1");
return $.input.m1.h({
  onblur: countField.handler,
  value: countField.value,
});

When you call registerField you get back an object with the handler, the current value, and any validation errors that were attached to it during the validation step.
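
In TypeScript terms, the returned object looks roughly like the sketch below. The interface name and field names are my paraphrase of the description above, not necessarily the exact exported types.

// hypothetical shape of the object returned by registerField<T>()
interface RegisteredField<T> {
  handler: (e: Event) => void; // event handler that writes the new value into app state
  value: T;                    // current value held in the fields registry
  validationErrors: string[];  // any errors attached during the validation step
}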

This makes it much quicker to add form inputs that are tied into your app state without having to know and declare every single model property up front. Now you can inject functionality into your app from within the render methods themselves, which increases decoupling from the main app code and lets features be declared inside JS chunks that are loaded asynchronously only when they are needed.

Routing

When I build single page apps I often end up needing to do some sort of routing to different screens based on the URL. I took it on as a challenge to enable this feature in a functional way, without the complicated configuration objects that vue-router uses. So, when you want to register a route in FRETS, what you are really doing is registering a string for matching the current URL value, and a function that will update your app state in some way when the current URL matches your declared pattern. It can be as simple as switching a value from "screen: 0" to "screen: 1".

F.registerRoute(RouteKeys.About, "/about", (name, params, props) => {
  props.activeScreen = SampleScreens.About;
  return props;
});

Then, when it comes time to navigate to a screen programmatically, you call a method on the app instance from within your event handler action:

F.actions.navAbout = F.registerAction((e: Event, props: AppProps): AppProps => {
  F.navToRoute(RouteKeys.About);
  props.activeScreen = SampleScreens.About;
  return props;
});

Performance

The overall library size is a bit larger (15.3kb minified and Gzipped instead of 10.6kb) because I added a second dependency: the path matching library "path-parser". In addition, the necessary features from Maquette are still included in the compiled library.

I still recommend writing your UI rendering methods using an atomic CSS library like BassCSS or Tachyons compiled to JS functions with frets-styles-generator. This will make writing UI code much more pleasant, but it adds a few more KB of javascript to your final bundle.

The updated example app frets-starter builds a final javascript bundle of 36KB minified and gzipped, which I think is very respectable. View that demo app.

Other Changes

I increased unit test coverage to around 75%.

To support the field registry, your props and actions classes should now extend PropsWithFields and ActionsWithFields:

class MyProps extends PropsWithFields {}
class MyActions extends ActionsWithFields {}

As mentioned before, the new UI rendering method that's passed into FRETS.registerView() should accept the entire FRETS app object. Its signature should be:

renderFn: (app: FRETS<MyProps, MyActions>) => VNode
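
Here is a minimal sketch of what such a render function might look like in practice. It assumes the app instance exposes the current props as modelProps and the actions as actions, that the generated style helper is imported as $, and that a save action exists; those names are assumptions for illustration, not a definitive API reference.

// assumes: import { $ } from "./base-styles"; and VNode from "maquette"
const renderRoot = (app: FRETS<MyProps, MyActions>): VNode => {
  // extract props and actions from the injected app instance (assumed accessors)
  const props = app.modelProps;
  const actions = app.actions;
  // and register a field directly from within the render method
  const nameField = app.registerField<string>("name", "");
  return $.div.p2.h([
    $.input.h({ onblur: nameField.handler, value: nameField.value }),
    $.button.h({ onclick: actions.save }, ["Save"]),
  ]);
};
F.registerView(renderRoot);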

FRETS 0.2.7 Supports Async Rendering for Performance

I added an important new feature to my TypeScript SAM framework, FRETS - support for async render functions. Just call F.registerViewAsync() instead of F.registerView() (which is still available).

If you use Webpack you may know about the power of code splitting and lazy loading. It can be useful because you don't have to include all of your rendering code for pieces of the app that the user will never see because they haven't clicked on it. At the most basic level, you could simply set it so that the render function for each "screen" or page in your SPA is inside a module that gets lazy loaded through an async function.

if (props.currentScreen === Screens.Customize) {
  // assuming ./Customize provides RenderCustomizeScreen as a named export
  const { RenderCustomizeScreen } = await import("./Customize");
  return RenderCustomizeScreen(props, actions);
}

All the code inside the Customize module will be in a chunk file that hasn't been downloaded yet. If the first time your app renders it doesn't hit that code branch, then it won't be downloaded by webpack. Webpack will take care of loading the customize screen, its child includes, and any of its custom style classes and CSS, only when that branch of render logic gets executed. Since TypeScript supports async and await, we can use this syntax as long as we make all the functions up the call stack async as well (or handle them as promises). Behind the scenes Webpack issues a fetch request to the server for its necessary chunk.js file… and FRETS will re-render the app once that async fetch promise resolves.
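
Putting it together, the root view becomes an async function registered with registerViewAsync(). This is a sketch under the assumptions of the snippet above; I'm also assuming registerViewAsync() accepts an async version of the same (props, actions) render signature, and RenderHomeScreen stands in for a statically imported screen.

F.registerViewAsync(async (props: AppProps, actions: AppActions): Promise<VNode> => {
  if (props.currentScreen === Screens.Customize) {
    // this chunk is only fetched the first time this branch executes
    const { RenderCustomizeScreen } = await import("./Customize");
    return RenderCustomizeScreen(props, actions);
  }
  return RenderHomeScreen(props, actions);
});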

This means it might be a good idea to start splitting your atomic css into different css files that only get loaded inside certain modules. Don’t forget about the power of combining FRETS with atomic css and generated typescript classes.

This idea was inspired by this article about architecting large javascript applications - and me staring at the results of webpack-bundle-analyze wondering how to make things smaller now that everything is in javascript.

Also, since my last update I created this sweet logo:

[Image: FRETS logo]

FRETS 0.2.3 - Now With More FP

I wanted to make it easier to get started with a new FRETS app, so I refactored it into one class with a more functional approach to registering the various parts of the app.

What’s new in version 0.2.3?

Primarily, a new way of instantiating applications that is more functional and less dependent on big ugly configuration objects. The idea is to still use the goodness of TypeScript generics while making it more functional and obvious how to set up a new app.

import { FRETS } from "frets";

The older class exports are still available, but now all you need to get started is this one FRETS class.

You kick things off by writing a very lightweight actions class for your app.

export class MyActions {
  public changeName: (e: Event) => void;
  public saveName: (e: Event) => void;
  public startOver: (e: Event) => void;
}

Notice there are no actual implementations in this class! We can still get the help of referencing an action by name in our view, but the function will be assigned at runtime. When you write your view rendering function it still looks pretty much the same.

So, here’s how the minimum initialization procedure actually looks.

const F = new FRETS<QuizProps, WheelActions>(new QuizProps(), new WheelActions());
F.mountTo("mainapp");

There are default methods for everything, but you will want to set a real render function using registerView().
We still use the generated atomic BaseStyles class to build dom VNodes.

const renderRootView = (props: QuizProps, actions: WheelActions): VNode => {
  return $$().div.border.p3.m3.h([
    $$().label.pr1.h(["Your Name"]),
    $$().input.h({
      type: "text",
      onchange: actions.changeName,
      value: props.name,
    }),
    $$().div.p2.h([
      $$().button.btn.btnPrimary.h({ onclick: actions.saveName }, ["Save"]),
    ]),
  ]);
};
F.registerView(renderRootView);
F.mountTo("mainapp");

And of course there's no actual implementation for the actions, so you will want to set those using the handy-dandy registerAction() method.

F.actions.changeName = F.registerAction((e: Event, data: QuizProps) => {
  data.name = (e.target as HTMLInputElement).value;
  return data;
});

And that means we just need to validate our data changes and calculate any other related state variables that need to change. We should override the built-in validator and calculator because they are empty passthroughs out of the box.

F.validator = (newProps: QuizProps, oldProps: QuizProps): QuizProps => {
  if (!newProps.name) {
    newProps.errors.name = true;
  }
  return newProps;
};
F.calculator = (newProps: QuizProps, oldProps: QuizProps): QuizProps => {
  // store the derived total so the view can use it (assumes a score property on QuizProps)
  newProps.score = newProps.answers.reduce((acc, x) => acc + x * 10, 0);
  return newProps;
};

I’m reading the excellent book Functional-Light JavaScript: Pragmatic, Balanced FP in JavaScript. As I make it further into this book I expect I will revisit FRETS and make more updates. We shall see. I wonder if I’m willing to give up some of the TypeScript features for a more pure FP approach.

Shortest Drone Flight Yet

Finally had a day with some nice weather. Went to the park to fly my drone with the GoPro attached. Can't do anything really crazy right now because of one bad motor that freaks out when I give it too much juice. Anyway, that didn't matter - a ninja branch took me out and covered my electronics in snow.


Introducing FRETS, an Ultralight Frontend TypeScript Framework

If you’ve been doing javascript development on the web (or applied for any developer jobs recently), you know about React, Angular and Vue. Each of these is a modern, powerful, and well-maintained framework for creating interactive and rich javascript applications on the web. They attempt to make it easy to compose user interfaces of reusable parts for consistency. Usually you reach for a framework like this when you want to build a SPA (single page app). Sometimes the ecosystem, the tooling or community around the project is more compelling than the syntax and code architecture itself.

Libraries tend to grow many "necessary" appendages that bloat the size of your project. React "needs" Babel and Redux; Vue wants vue-router, vuex, and a template compiler. Then there's the temptation to add those awesome pre-written UI packages like a Material Design implementation, or Element UI, and maybe a dozen other open source packages for API communication, date parsing, and collection manipulation.

To be honest, I kinda love the chaos of this world. You get to learn constantly and you get to mix and match the best of everything, and there’s always some other neat trick of tooling that someone can teach. So, that’s why I don’t feel completely insane to say I’ve spent a little time lately developing another experimental basic framework for writing and organizing javascript app code.

My goal with this experiment was to learn more about the SAM pattern, that is State Action Model, as opposed to the more common MVC or MVVM. It's far closer to the React way of thinking, where each component of your UI is ultimately a render function. (You knew that's what JSX compiles down to, right?) But I am making the assumption from the beginning that it will only ever be written in TypeScript, which opened up some interesting possibilities. What if we wrote a framework with the flexibility of Vue or React but with all the power of TypeScript and a good IDE like VS Code as a given? Can we protect developers from the evil scourge that is memorized magic strings?

Even though TypeScript comes from Microsoft, it is still an awesome, well-maintained open-source project that plays nicely with the rest of the JS community, and its superpower is giving JS developers the safety and consistency of a type-safe, compiled language without completely obliterating the aspects of JS that are interesting and good. Namely, functional programming.

So, my framework FRETS (Functional Reactive and Entirely TypeScript) is a set of classes and interfaces to make it easy for you to write code that complies (mostly) with the SAM pattern. You can get all the reassurance of reliable code completion and type checking while still writing "pure" functional code. I think classes made up of functions are a perfectly valid way of giving developers the convenience of automatic code completion and the other advantages of the TypeScript tooling world. Making a developer remember all the variable names, or do massive copy-pasting, is the enemy of clean, bug-free code.

To explain the framework, let me work backwards through the SAM application pattern, starting from the UI rendering in the browser.

Views

In SAM every piece of your UI should be a pure function that updates the DOM in some way. Reusability comes from classic refactoring and composition of functions, without learning any new ceremony of a component object structure. Your view rendering code should be modular and composable; those aspects tend to emerge as the developer starts programming and sees the need to refactor continuously.

I originally was playing around with Mithril, attempting to integrate it as a virtual DOM rendering implementation of the SAM pattern. But Mithril was not very TypeScript friendly, and a little searching revealed Maquette, a smaller and lighter TypeScript implementation of the hyperscript rendering interface that Mithril (and React) give us. It might even be more performant, depending on how you measure. It's not perfect, but it is under active development, and I think a solidly implemented hyperscript rendering library, decoupled from the big projects, that we can build upon is of significant value.

Why no JSX? Why no templating? … This is an experiment, and in the past I have always been a believer in working with real HTML. I thought staying close to the final implementation language was the smartest way to proceed, compared to things like ASP.NET WebForms or HAML. Every developer is familiar with HTML, which is why Vue and JSX are so easy to learn; they have a declarative syntax that looks mostly just like HTML. But I wondered: can I avoid some of the pain of the syntactic restrictions of HTML? (Let's be honest, it is super verbose and repetitive.) In the Mithril hyperscript code I saw that DOM rendering functions let you specify a CSS selector string — think of how you use the Emmet tool in your IDE to generate HTML.

div.main-component.theme-primary.green

These hyperscript methods take that selector string and an attributes object and generate an HTML element with all the appropriate class names, attributes, etc. I like it. It's weird, but it works. And as a developer I don't have to do as much mode switching between HTML and JavaScript syntax.
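
For example, with Maquette's h() function, a selector string plus a properties object and children yields a VNode (a minimal sketch):

import { h } from "maquette";

// produces a <div> carrying the main-component, theme-primary, and green classes
const node = h("div.main-component.theme-primary.green", { id: "main" }, [
  h("span.label", {}, ["Hello"]),
]);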

What data does your view function render? Well, it's just a plain old JavaScript object, or preferably a generic subset of that object if you've refactored your UI into smaller decoupled functions. Of course, since this is TypeScript, our IDE will know that we've already specified the shape and types of the properties on that object, so we get code completion and type warnings everywhere we work with it, reducing errors and making it easier to reason about your higher-level code when you're down in the rendering functions. Ideally you will have one big parent class for your entire application state, making it easy to know what you're passing around and looking for.
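
As a rough sketch (the class and property names here are hypothetical):

import { h, VNode } from "maquette";

// one parent class describes the whole application state
class AppState {
  public user = { name: "", email: "" };
  public tasks: string[] = [];
}

// a smaller view function only needs the slice of state it actually renders
const renderUserBadge = (user: AppState["user"]): VNode =>
  h("div.user-badge", {}, [user.name]);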

In FRETS you keep all your high level view functions that accept that global state data object in one class to make refactoring easy and painless.

State

State is a simple class that is responsible for calling those "View" render methods. You instantiate a new FRETS state object specifying the render function you want (with a default assumption that you're using the Maquette projector for updating the DOM). This state representer function will also receive a preRender function to do any special calculations or logic for deriving transient properties in the application state from the values of the data properties object it is passed. Things like warning messages, loading indicators, visibility switching, and in-app navigation or routing.

Model

The state was called by a function on your Model class called "present". Generally there is one present() function on a model, and it is tied to the one render() function on your State class. This present function first executes any data validation logic that the model was configured with when we gave it a validate() function at instantiation. So the Model handles consistency, and this is also where you would specify data synchronization logic for communicating with a remote API.

Action

The Model was asked to update itself by a function on your Action class. This action class should be a new class that you wrote for this application which extends the FRETS ViewActions class. Your custom actions class will contain all the functions that your application might ever call to change data or state. These functions will have been bound to event handlers on the DOM, or to timers or other reactive events. This practice makes sure you know exactly where to look for any change that was made to your application state, and you get code completion in your Views for this class because of the power of generic types.

Diving back down

Let’s follow the logic back down then.

Assuming we're talking about yet another todo list implementation: when your view function renders a button, you will specify its onclick handler as the function this.actions.createNewTask, which you already knew about and stubbed in or wrote previously on that custom ViewActions class.

When this function is called it will add a new task string to the array of tasks in a copy of the model properties, and then call the present() function with those updated props. The present method runs your validation logic and either saves the errors in the data or saves it to a server; either way it calls the render() method on the State object with the new data properties object.
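
A hypothetical sketch of that action, assuming the custom actions class keeps references to the current props and to the model (the TodoProps/TodoActions names and exact member names are made up; the real wiring in FRETS may differ):

class TodoActions extends ViewActions<TodoProps> {
  public createNewTask = (e: Event) => {
    // copy the current props, append the new task, and hand the copy to the model
    const updated = { ...this.currentProps, list: [...this.currentProps.list, "New task"] };
    this.model.present(updated); // present() runs validation, then the State re-renders
  };
}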

The render method checks for the existence of errors to display to the user, and it sets a couple of derived properties on that object that the model doesn't need to deal with, which will be used for changing what is finally rendered back out to the user. At the end of its calculation work it will call your view rendering functions. But in the case of Maquette it's actually just going to tell its own main Maquette Projector to schedule a re-render on the next animation frame of the browser. Using Maquette in this way allows you to move certain really performance-hindering calculations out to the view rendering functions if you want them to only ever be called once per render (every ~16ms), so we don't bog down the browser. But I wouldn't start doing this until you spot performance problems, because it adds another place to look where state logic is happening.

The view method doing the rendering of the list looks like this:

TodoList: (props: TodoListProps) => {
  // each item carries an id and a done flag
  return h("ul.all-todos", props.list.map((item) => {
    return h("input.todo", {
      type: "checkbox",
      value: item.id,
      checked: item.done,
      classes: { strikethrough: item.done },
      onchange: this.actions.changeTodoItem,
    });
  }));
}

Now, what about that h() function… it requires a lot of "magic strings", and if there's one thing I've been trained to hate, it's string literals in my code. So, what can we do about this? It's generating HTML, but using classname syntax like Emmet. And technically we could already know what those classnames are, because they've already been written once… over in a CSS file.

So, we fire up a little code generation utility called frets-styles-generator that writes a templated TypeScript class file based on the contents of the css file in your project.

> node_modules/.bin/frets-styles-generator src/main.css src/base-styles.ts

And since we are using base.css (or another atomic CSS library like Tachyons or Tailwind) we want to have access to those selectors in the TypeScript code, which means they need to be exported members on a TypeScript object somewhere so the IDE can pick up on them. Then at the top of the Views file we import them:

import { $, $$ } from "../base-styles";

Because the generator produces a convenient class containing a property for each CSS selector you might want, when you're creating your markup in the views you can replace the h() call and its magic string with a fluent API like:

h($$('button').btn.bgBlue.p2.mx1.$, {}, [])

But I thought that was a little verbose, and I can probably guess at the most common HTML tags I'll be writing, so I added a few more helpers that let you do things like:

h($.button.bgBlue.white.p2.mx1.$, {}, [])

But still: why do we have to use that ugly $ at the end to output the string selector that the h function is expecting? Let’s embed our h function directly so we can extend it with our fluent API like so:

$.button.bgBlue.white.p2.mx1.h({}, [])

Awesome, now we're talking! A terse fluent API that is generated directly from the selectors already available in our CSS file. That leaves just one more set of magic strings to deal with: the "classes" property, e.g.

return $.div.border.h({
  classes: {
    "bg-aqua": isActive,
    "blue": isActive,
    "bg-gray": !isActive,
    "red": !isValid,
  },
}, []);

Well, that's repetitive and ugly; there's logic in there, and some magic strings that don't even match the camelCased class names used previously, breaking my mental model. So, inside the BaseStyles class (proxied to $) we also have some functions for generating these CSS class name logic objects fluently. Note: you have to instantiate a new class each time because there's internal state to deal with the conditional logic switching. You flip conditions inside of your fluent selector chain using $.when(condition).selectorNames.otherwise().differentSelectors. otherwise() flips the previous boolean, and andWhen(condition) will start a whole new condition chain.

return $.button.circle.h({
  classes: new CC().when(model.viewing === view).bgAqua.blue
    .otherwise().bgSilver.red
    .toObj,
}, ["OK"]);
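
And, following the chaining rules just described, a sketch of adding a second, independent condition with andWhen() (the class names here are illustrative):

return $.div.h({
  classes: new CC()
    .when(isActive).bgAqua.blue
    .otherwise().bgGray
    .andWhen(!isValid).red
    .toObj,
}, ["Status"]);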

Cool, now our only magic strings are in the text that is actually displayed on the page, and the correct way to refactor those out is to use a localization library like i18n. Which is a pain… so we won't.

App Configuration and Setup

When it’s time to spin up your FRETS app you have to load in all the functions that you’ve prepared. I decided to call the App Constructor with a custom configuration object. It feels wrong to have an object play such a prominent role, but the object is full of functions… so I don’t think I strayed too far from home here.

const configuration: AppConfiguration<CustomProps, ViewComponents> = {
  action: (m: Model<CustomProps>): ViewActions<CustomProps> => new AppViewActions(m),
  model: {
    validate: (props: CustomProps): Error[] => {
      return []; // TODO
    },
  },
  state: {
    calculate: (props: CustomProps): CustomProps => {
      props.hasUserData = (props.gender === "MALE" || props.gender === "FEMALE") && props.weight > 0;
      props.hasDrinks = props.drinks && props.drinks.length > 0;
      return props;
    },
    views: (a: AppViewActions) => new ViewComponents(a),
  },
};

The App constructor takes that configuration object as well as an optional initial state object for hydrating your app state.

const app = new App<CalculationProps, ViewComponents>(container, {
  locationState: "CO",
  viewing: Views.HOME,
  weight: 150,
} as CalculationProps);

Once the app is ready we can call init() with an array to specify elements in the page we are replacing with our highest level, parent view function.

app.init([
  {
    comp: app.views.ViewLoader,
    el: document.getElementById("mainapp"),
  },
]);

You can replace and append dom elements anywhere on the page with different high-level functions and they will all get updated from the same shared state data.
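
For instance, a sketch extending the init() call above (the Header view name and element ids are made up):

app.init([
  { comp: app.views.Header, el: document.getElementById("siteheader") },
  { comp: app.views.ViewLoader, el: document.getElementById("mainapp") },
]);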

So what does all of that cost us in terms of file size?

Maquette is 3.3kb gzipped and the FRETS base class code is only 800 bytes (really!) Gzipped.

That styles module is 4.75kb Gzipped, which is kinda annoying since the minified CSS file itself is just 5kb Gzipped. But I think it’s a useful tool for now and I could probably figure out ways to optimize it better in the future.

Add it all together and you're looking at a minimum of 8.85kb of javascript code being downloaded before you add in any extra vendor dependencies like velocity, moment, or lodash.

Item                        Gzipped Size (KB)
Maquette                    3.3
FRETS                       0.8
FRAMEWORK                   4.1
Generated Selectors Class   4.75
TOTAL                       8.85
BaseCSS                     5

For an idea of how much code you will end up writing: it's really hard to estimate, but my custom application code for a medium-complexity application is coming in at around 3.2kb gzipped.

Lessons Learned

Doing this experiment has taught me a lot. I feel a lot more confident writing "functional" code now, and I am much more comfortable with the TypeScript type system and the power of generic types.

Now that I know it's possible to have intelligent code completion for every important part of the framework, I'm hesitant to go back to Vue or React, with their less robust TypeScript support.

I understand the fundamentals of these VDom frameworks a lot better. I understand the SAM architecture a lot better too, though I still suspect that there's some critical aspect of that architecture that I've bastardized here. I had to make decisions in favor of developer productivity while putting this all together.

I had the opportunity to think about JSX and its role in the React framework stack. I think it's the most flexible and approachable tool for the job (as long as you're shipping compiled code without a templating system), but I think there's value in building your HTML with a fluent API generated from the CSS classes provided by a good modular CSS framework. After all, JSX (and HTML for that matter) is essentially a gigantic string literal full of stateful context that the developer has to hold in their head.

I learned about the new CSS standard variable syntax and used it through a PostCSS compilation workflow to compile my own version of BaseCSS. I also wrote a custom PostCSS-based parsing tool to read that CSS, and I can say it was remarkably easy to use PostCSS; I don't know why I've been so afraid of it and holding on to SCSS so tightly.

I had to learn a whole lot more about webpack config files, and the proper ways to make production bundle sizes shrink down. This knowledge will be immediately useful on almost any other project. Up until this point the webpack config was a little bit of a mystery to me. I don’t think I’m alone on that. But it is a really powerful tool, and now that I know what’s going on I can improve performance across many of my projects.

I plan to work on a boilerplate project to get you up off the ground if you wanted to try using it too. Let me know if you like this idea or if this whole thing seems totally crazy on twitter @sirtimbly.

Subomniactating: Definition

Subomniactating is a useful word for discussing the process of throwing a person under the bus. Like at work, after a bad meeting, or when blame needs to be apportioned to someone other than yourself. Or when a new task or project is particularly unpleasant and likely to result in pain or sadness, you may want to throw a person under “the bus”. Subomniactating will occur.

Inspired by the amazing word defenestration (throwing a person out a window).


Conversational UI Design and Prototyping with Bottery

Remember last year when everyone was super excited about conversational UI and chatbots? Maybe you were like me and thought "ooh, that sounds interesting, but no one actually needs me to design one now… 😦" Well, guess what. 2017 is almost over and I finally had the chance to work on the UX for a chatbot. I mean, I had made fun silly chatbots which spouted random nonsense using Markov chains generated from the text of a long-running work chat channel. Which was fun. But it didn't exactly require any "design".

So this new chatbot is for a client that has a really good use case and idea. The bot is in the early stages of life, but it is available in the Slack directory now, and it's really buzzword friendly ("machine learning!", "gamification!", "computer vision!"). Anyway, it's actually cool, but I needed some way to analyze and improve the UX. There are a lot of different things to think about, so let's break down what I've done so far.

1. Use the thing

Install it, interact with it, understand why it’s cool, what’s ugly, what’s just half-baked. Also use other competing bots on your same platform so you can see how other people have solved particular problems. The Slack API offers some interesting extras like slash commands, buttons, and dialogs. Know how those work, skim the technical documentation so you understand the limits.

2. User journey maps

Think about the real-world context of a user who is going to interact with the bot’s primary function. Where are they? Are they on their phone? What’s the Job to Be Done? What is their emotional state? What external distractions and competing motivations will you have to deal with during the interaction? Graph it out. Ask questions through documentation.

I found it useful to break down this type of interaction into 4 phases.

  • Desire
  • Initiation
  • Follow-through
  • Satisfaction

[Image: user journey map]

This gives you the chance to think about the “happy path” and also see where exactly are the places the user might “exit” the journey too early.

3. Pixels and Vectors

I know this is where designers usually start, but you can now start drawing up wireframes and designs in Sketch (or Affinity Designer or Adobe XD) for specific interactions that bring the user out of the chat user interface for things like payment, upgrades, and account registration. I like to draw flow lines from buttons to next screens. Creating an InVision prototype isn't a bad idea, even though you can't really type anything into your fake Slack windows.

[Image: wireframe flow diagram]

Mocking up the Slack windows is important to really give your brain some context and to be aware of how visually jarring it might be for a user to have to transition out of the chat interface to a web browser and back.

4. Actual Interactive Programming!

This is where it all gets interesting. Go download Bottery from GitHub. Google provides it as an open-source project. It's a little technical, but basically you need to follow the instructions in the readme. They walk you through creating a simple "kitten" bot. The code is all in javascript, so I hope you're comfortable with that. Remember to provide all those commas, otherwise nothing works!!!

It's a lot like designing a text adventure, actually. Entrances, exits, storing variables. Technically you're defining a "finite state machine".
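
To illustrate the shape of the problem, here is a generic sketch of a conversational state machine in TypeScript. This is not Bottery's actual map syntax, just the underlying idea of states, prompts, and exits; the kitten theme echoes the readme's example bot.

// each state has a prompt and named exits that point at the next state
interface ChatState {
  prompt: string;
  exits: { [userInput: string]: string }; // matched input -> next state name
}

const kittenBot: { [name: string]: ChatState } = {
  start: { prompt: "A kitten appears. Pet it?", exits: { yes: "pet", no: "leave" } },
  pet: { prompt: "It purrs happily.", exits: { "*": "start" } },
  leave: { prompt: "The kitten wanders off.", exits: {} },
};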

Tip: One other thing the readme doesn't say is that you need to run the bottery app from a local webserver, which is super easy if you have node installed. You can open a terminal and type npm install -g http-server, then change to the directory where you downloaded or cloned bottery and run http-server -o.

I started to realize that it was actually useful to go through the practice of writing each prompt and state of the bot. Using it in a real interface makes you understand the tone and makes you think about grammar in a whole new way. It's personal and conversational, so you have to think about tone in a totally different way than we are accustomed to when writing web micro-content. I decided to make my bot a little funny and sarcastic, like K-2SO from Rogue One. This happens naturally as you start trying out all the different paths. You see the need for unique states that respond to the user's data in very specific ways. English grammar is hard, and writing English grammar that adapts to even a few variables is surprisingly difficult.

Understanding the different variables you are capturing from the user and writing out consistent vocabulary words for different random injections in the script is important to this phase of design.

We need to think about what we are saying to the user: is it getting stale and boring quickly? The randomness of having the bot pick different words for different ideas during each conversation offers a little bit of delight in each interaction.

[Image: chat form prototype in Bottery]


Design Systems Documentation in a Wiki

I'm reading the excellent book Design Systems by Alla Kholmatova, and it's been a great resource so far. I would consider it a higher-level companion to Brad Frost's Atomic Design book, which I read earlier this year. Neither of these is a very long book, and as a relatively new freelancer I love the freedom of getting to buy books to improve my skills as a business expense.

The idea behind this Design Systems book is to provide a lot of the answers to "why" design systems are important, and "how" to start with certain mental exercises before even creating a design system. The prep work in the foundations section of this book is really excellent and inspired me to look for another new way to implement a design system for a client.


[Image: Design Systems book]

As I work with different customers I think about the best way to provide deliverables to each team. I've created big PDFs full of visuals when that was the best option, and I've created styleguide web apps stored in a git repo… but I really like the power and flexibility of a wiki for detailed and in-depth documentation. Wikis are easy to share with a team and collaborate in, easy to search, and easy (easier) to maintain.

Recently I had the opportunity to use the excellent Confluence from Atlassian with a new client and it is a great tool for building a pattern library. I’m calling it the “UX Library” and it contains the following major categories:

  • Design Principles
  • Personas
  • Functional Patterns
  • Perceptual Patterns
  • Modules
  • Vocabulary and Tone

There are two great features of Confluence that I'm using (not sure if you can do this in MediaWiki or DokuWiki).

First, creating Page Templates for new patterns of certain types to make it clear what each pattern should contain.


[Image: current page template screenshot]

Second, I'm creating clickable user journey diagrams, where each diagram box links to the specific pattern or module the user is looking at.

[Image: pattern map diagram]

These give the internal audience a visual way to quickly find the overall flow and the specifics of each interaction.

All together, this makes a wiki a great tool. It becomes a living, maintainable UX spec for the entire product, or across the company when multiple products implement similar patterns.