Blog

Looking For Any Full-Time Work

where I humbly ask for any kind of work, even if it isn’t freelance

What I’m Good At

  • Programming: My top three languages are Python, Haskell, and JavaScript. I have also programmed a bit of Go and Scala
  • Writing: I’ve mostly done creative writing but I can also do technical writing, and I had a class in editing while at university
  • Attention to detail, but also zooming out to architectural concerns
  • Team player, but also able to work independently

What I’m Learning

  • Full-stack JavaScript, including React and Node
  • How to run a business (very slowly!)
  • How to write proposals

How to get in touch

You can email me: hey [at] samhatfield [dot] me

OR, you can comment down below.

Thanks!

My Learning Projects

A few months ago, I decided that I needed a project for learning React and GraphQL. Instead of following a normal tutorial, I decided to build a completely independent project that I would host online as part of my portfolio. The TL;DR here is that I never finished it, but I did learn what I needed on the GraphQL side, and I realized that I don’t yet have the skills to come up with a React component system all on my own.

The project lives in the Bytes Zone, and it is officially called the Lego Marvel Browser. The project pitch is this: a web app that allows you to browse Lego Marvel sets based on relational data found in the Marvel API. For example, you could find all the Lego sets that have Black Widow as a minifigure. The app wasn’t going to make any money (which is something you agree to when you get a Marvel API key); instead, it was meant to demonstrate my ability with the tech stack and help convince people to hire me.

Stage 1: GraphQL

I decided that I was going to start with the GraphQL side of things, since I’m much more comfortable with back-end web work: interfacing with APIs, HTTP status codes, and so on.

BTW, hat tip to Data is Plural for pointing me to both of these APIs!

Marvel API

First, I got my Marvel API token, saved it to a .env file, and cranked out this marvel.js file:

    const crypto = require('crypto');
    const axios = require('axios');
    const { buildSchema } = require('graphql');

    const BASE_URL = 'https://gateway.marvel.com/v1/public';

    const getParams = () => {
        // The Marvel API requires three query params on every request: the public
        // key, a timestamp, and an MD5 hash of (timestamp + private key + public key)
        const publicKey = process.env.MARVEL_API_PUBLIC_KEY;
        const privateKey = process.env.MARVEL_API_PRIVATE_KEY;
        const ts = Date.now().toString();
        const data = ts + privateKey + publicKey;
        const hash = crypto.createHash('md5').update(data).digest('hex');

        return {
            apikey: publicKey,
            ts: ts,
            hash: hash
        };
    };

    exports.schema = buildSchema(`
    type Query {
        characters: [Character]
        comics: [Comic]
        series: [Series]
    }

    type Character {
        id: Int!
        name: String!
        description: String!
        modified: String!
        comics: [Comic]!
        series: [Series]!
    }

    type Comic {
        id: Int!
        title: String!
        issueNumber: String!
        description: String!
        modified: String!
        characters: [Character]!
        series: Series!
    }

    type Series {
        id: Int!
        title: String!
        description: String!
        modified: String!
        comics: [Comic]!
        characters: [Character]!
    }
    `);

    const characters = () => {
        return axios.get(`${BASE_URL}/characters`, {params: getParams()})
            .then(res => {
                return res.data.data.results;
            })
            .catch(function(error) { console.log(error); });
    };

    const comics = () => {
        return axios.get(`${BASE_URL}/comics`, {params: getParams()})
            .then(res => {
                return res.data.data.results;
            })
            .catch(error => console.log(error));
    };

    const series = () => {
        return axios.get(`${BASE_URL}/series`, {params: getParams()})
            .then(res => {
                return res.data.data.results;
            })
            .catch(error => console.log(error));
    };


    exports.characters = characters;
    exports.comics = comics;
    exports.series = series;

Then, I have this server.js file.

    #!/usr/bin/env node
    require('dotenv').config()

    const express = require('express')
    const cors = require('cors')
    const graphqlHTTP = require('express-graphql')
    const graphql_schema = require('./src/schema.js')

    const app = express()
    app.use(cors())

    const graphql = graphqlHTTP({
        schema: graphql_schema.schema,
        rootValue: graphql_schema.rootValue,
        graphiql: true,
    });

    app.use('/graphql', graphql)
    app.listen(4000)
    console.log('Server is running on localhost:4000/graphql')

N.B.: The graphql_schema variable loads a file that doesn’t exist, so this code will fail if you try to run it. When I find out where the working code for this exists, and if I remember to, I’ll update the repo and this blog post. I’m a little embarrassed, but this is a minor thing, so I won’t worry too much about it.
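
In the meantime, here is a minimal sketch of what that src/schema.js probably looked like. This is a reconstruction rather than the original file, and it assumes that marvel.js lives in the same src/ directory and that the module simply re-exported the schema and wired the three fetch functions up as the rootValue:

    // src/schema.js -- a hypothetical reconstruction, not the original file
    const marvel = require('./marvel');

    // Re-export the GraphQL schema built in marvel.js
    exports.schema = marvel.schema;

    // Map each top-level Query field to the function that fetches it from the Marvel API
    exports.rootValue = {
        characters: () => marvel.characters(),
        comics: () => marvel.comics(),
        series: () => marvel.series(),
    };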

BrickLink API: a bust

It was at this point I decided to leave the Marvel API and work on the Lego side. What I didn’t realize at the time was that the API provided by BrickLink is for store operators, so it wasn’t actually available to me.

Then, I looked at the listing for the dataset in Data is Plural, and I found out that the whole dataset can be downloaded in bulk. If I ever pick up this project again, I’ll consider loading that data into a SQLite database and using that as a data source, but only if it includes pictures and the TOS allows it.

At the time, I decided that I was just going to wrap the Marvel API, and build a React app to display the info. However, I started to lose interest, since my original vision wasn’t possible.

Stage 2: React

With this stage, I honestly didn’t get that far. This is when I realized that I was going to need to do more work learning React and web design before I could start building out an app.

That said, I was able to put this App.js together:

    import React, { Component } from "react";
    import { request } from 'graphql-request';
    import { hot } from "react-hot-loader";
    import Character from "./Character";
    import "./App.css";

    class App extends Component {
        constructor(props) {
            super(props);
            this.state = { characters: [] };
        }
        componentDidMount() {
            // Ask the local GraphQL server for every character, including the
            // fields that the Character component displays
            const query = `query {
                characters {
                    name
                    description
                    comics {
                        description
                    }
                }
            }`;
            request('http://localhost:4000/graphql', query)
                .then(res => {
                    this.setState({ characters: res.characters });
                });
        }
        render() {
            return (
                <div className="App">
                    <h1>Hello, World!</h1>
                    <ul>
                        {this.state.characters.map(character =>
                            <Character
                                key={character.name}
                                name={character.name}
                                description={character.description}
                            />
                        )}
                    </ul>
                </div>
            );
        }
    }

    export default hot(module)(App);

As you can see, it’s not that complex; I’m simply iterating over the data and putting it into a component called Character. Well, here is Character.js:

    import React, { Component } from 'react';

    export default class Character extends Component{
        render() {
            return (
                    <ul>
                    <li>{this.props.name}</li>
                    <li>{this.props.description}</li>
                    </ul>
            );
        }
    }

A simple ul element. My CSS doesn’t even have any rules for that component, so it just renders as default HTML5. Nothing special.

My Personal End Point

It was at this point that I realized that I wasn’t going to be able to make something that actually looked good. Between that and the fact that I was only wrapping the Marvel API, meaning the project wasn’t really going to go anywhere interesting, I decided to stop working on it. I had learned what I needed to learn.

And what did I learn exactly?

  1. I learned how to build GraphQL schemas and how to populate those schemas with data. If I ever work on a large GraphQL project, I have the basics down and I can look at the documentation whenever I need to. I feel pretty good about my project in this regard.
  2. I learned about Axios, at least a little bit. I have a feeling that I’ll be using this library in any future JS projects.
  3. I learned a bit about Babel and Webpack. Not enough to be really confident, but I do know what they are now.
  4. I learned that I don’t know enough about React to make things look the way I want them to. I know the basic flow, but not well enough to come up with a non-trivial implementation without considerable effort, the way I was able to on the GraphQL side of things.
  5. I don’t know enough about web design to just wing it like I was planning to. I’m going to have to practice the design side of things a lot more before I can build apps from scratch.

So, even though I didn’t finish this project, I learned quite a bit, and really, that’s what these kinds of projects are for. I was too ambitious: I should have seen it for what it was, a learning project, not a portfolio one.

The future?

I think that I could make a GraphQL wrapper for the Marvel API, and delete the React part entirely. Also, I think that once I get more practice with React and web design, I might come up with a good portfolio project, just not this one.

Call for comments

Feel free to make comments below (unless you are a spammer, I check!). What do you think that I should try next?

I’ve been added to NUR!

So, in order for the title of this post to make any sense, I’ve got to explain a few things to folks who are not involved in the NixOS space.

To understand NUR, you have to understand NixOS and Nix.

What is Nix?

Nix is a functional package manager that operates differently from most other package managers, and one that I’ve talked about in the past. In brief, Nix is a language-agnostic package manager that relies on a functional programming language, also called Nix, to declare how packages are built within a fully isolated environment. Each package that is built is put into the Nix Store, which is essentially a hashmap of directories in a read-only filesystem, with the SHA256 of the directory contents as the hash function.

What is NixOS?

NixOS is a GNU/Linux OS that builds on top of Nix. In NixOS, all of your packages AND system configuration files live in the Nix Store and are symlinked into the mutable part of the filesystem. The system is configured with the Nix language, and a large collection of packages and configuration modules lives in Nixpkgs, a set of Nix language files that define how NixOS is configured by default and all the possible configurations that you can activate in the system. As of this writing, there are 8532 different options available to NixOS users.

For example, if you want to donate your idle compute cycles to BOINC on a normal GNU/Linux OS, you have to install BOINC with a package manager and then enter other commands in order to enable the service. With NixOS, it’s a single line of code to enable, and then you get 4 additional options to refine your service specification. Pretty neat, right?

What is NUR?

So, the standard Nixpkgs has all of those options, and that is great. However, one feature that Nixpkgs does not support is community packages, that is, third-party sources of packages that come from independent packagers. Community packages are great because they allow individuals to contribute to the overall ecosystem without interacting with the standard repository. I’m not saying that the standard package repository is bad! What I’m saying is that because the standard package repo is run by volunteers, they cannot serve every need that the larger community has. And not every volunteer is able to participate within the standard packaging organization.

Enter NUR, the Nix User Repository. As of right now, it is implemented as a package overlay over Nixpkgs. What that means is that Nixpkgs is extended and the NUR packages are added in. That way, the interface for adding a NUR package is exactly the same as for adding a standard package. So in this way, it is slightly simpler than adding AUR packages in Arch. Since the packages are put into their own namespace, the distinction between official and unofficial is somewhat obvious, but not as obvious as with AUR.

I’ve been added to NUR

So, in order to be a part of the community repository, I followed the instructions on the NUR homepage. My personal NUR is at https://github.com/sehqlr/nur-packages.

What’s really cool about NUR in particular is that I can now store my NixOS configurations in that same repo, and my home-manager config as well. Now, I can keep the majority of my configurations within my NUR, import them alongside any local config (and secret config), and done! All of the portable parts of my config rest within my NUR, and if anyone could benefit from them, they can import whichever pieces they need, as long as I keep them properly organized, of course.

One problem

So, now I have a problem: can I make any packages that are actually useful to anyone else besides me?

So far I’ve tried to package up Gridcoin-Research, tuxemon, and upwork-desktop to no avail.

What do you think I could package up? Leave a comment here or create an issue on my NUR!

It’s February, How Did I Do?

At the beginning of the year, I made a resolution to write more.

  1. Do the writing exercises in a writing prompt book, at least one a day, if not more! And write in the book this time!
  2. Post on this blog and social media more, weekly instead of occasionally.
  3. All your ideas belong WRITTEN DOWN SOMEWHERE. It doesn’t matter how weird it is!

So, how did I do?

Writing Exercises

On this front, I have actually done fairly well. The “300 Writing Prompts” book that I bought has been a pretty good investment, and although I haven’t done one every day, it has helped. Additionally, I borrowed “Steering the Craft” by Ursula K. Le Guin from my local library, and I have done the exercises from the first three chapters of that book in a sketchbook that I bought specifically for creative writing. (I prefer thick, unruled pages.)

Once I’m done with the Le Guin book, I will return it and then dip back into my personal library. I know that I have a copy of “Writing Down The Bones” that I could do exercises from again, and I also have at least one book on Creative Non-Fiction that comes with exercises.

Overall, I’d say that this goal was met.

Posting

I have managed to post here every week for the last 3 weeks, so hooray! However, I have not done any social media posts. So, I call this one mostly fulfilled.

Writing ideas down

Sadly, I haven’t done this. I almost started writing down the idea for a story about a lonely city in the depths of space, but then I started revising the idea in my head and never got around to getting it onto paper. So, I didn’t meet my goal for this one. But I will say that the writing prompts were a much more important goal, so I’m OK with this one getting away.

Overall

Good job, me! I am well on my way to getting back into writing in a big way.

Wikis as Multigraphs of Text

Where I explore designing a type definition of a Wiki, without code this time

I have a lot of wacky computer science ideas, more of which are going to be appearing as other blog posts in the future. This is the most recent one, and therefore, the least pickled.

What is a Multigraph, anyway?

This is a concept in Discrete Mathematics, and it is a more general form of a Graph. So, what is a graph?

I think that the best way to explain these kinds of graphs is to use a different term: network. Most people have heard of a network. A social network includes you and all of your friends and family, and all of their friends and family, and so on and so on. I’m going to borrow a picture from Wikipedia.

Imagine that each circle with a number is a person, and each line between them is a friendship. That means that 1 is friends with 2 and 5, while 6 is only friends with 4.

However, since graphs are more abstract than groups of friends, we can use them to represent many different things. And there are different kinds of graphs too! A multigraph is a graph that allows more than one edge between the same pair of vertices, which means the edges (those lines between the circles) can be of different kinds. In the picture below, they are different colors.

A multigraph with three kinds of edges: grey, red, and blue

What I am theorizing is that a multigraph can represent wikis themselves.

How are Wikis Multigraphs?

Graph of Hyperlinks

Wikis have hyperlinks between documents. Here is a hyperlink to the Wikipedia page on hyperlinks. We can represent this kind of link between one bit of text and another in a graph. Let’s take a look at that first picture again:

Now imagine that each of those circles represents a Wikipedia page, and that each line between them represents a hyperlink.

The Outline is a Directed Acyclic Graph

A Directed Acyclic Graph (DAG) is a type of graph where all the edges have a sense of direction (directed) and there are no loops (acyclic). A good, real-life example of this is a family tree (from Wikipedia again).

This DAG can’t have loops in it because people cannot be a parent and a child to themselves, nor can you be your own grandpa (biologically, anyway). The directed nature of this graph can go either way: you can say that the arrows point to parents, or to children. Typically, we want our DAGs to point to one particular thing, so we will say that the arrows point to parents.

Why am I bringing this up? Because I realized that the outline structure of a document is also a DAG. Each Heading has several blocks of text under it, like the children in the family tree. And each Heading may itself be a subheading of a bigger Heading. The concept of the “document” itself can be seen as the root of the tree, the Lucas Grey of our family tree example. Each document in a wiki is a DAG.

Therefore Multigraph?

So, you have a graph of text blocks that form DAGs of documents, but each of those text blocks can also connect up to other text blocks in other DAGs with hyperlinks. So, what if we had a graph with two kinds of edges, the Outline kind, and the Hyperlink kind? An entire wiki could be theoretically stored in a single multigraph of text blocks, with Outline DAGs encoding documents, and Hyperlink graphs representing the connections between all the documents.

Conclusion

I believe that the type definition of a Wiki would be a multigraph of text. I think that you could store an entire wiki inside of a graph database and have a really interesting architecture for a wiki engine. But that’s for next time 🙂

Source Code in a Wiki?

This is an add on to my previous post, Wikis as Multigraphs of Text.

As of right now, most source code is stored as text files in a file system, and this has worked out fine for the industry for decades. But there are other ways of storing programs. What if we stored source code in a wiki?

As a reminder, in my previous blog post I stated that a Wiki data type could be encoded as a multigraph, that is, a graph with at least two kinds of edges: Outline, which encodes text block order for output, and Hyperlink, which tracks hyperlink references between blocks.

Why should we put source code in wikis?

For the purposes of this blog post, I’d like to avoid the question of the usefulness of this paradigm, and just make this statement: I think this is a fun thought experiment, not a call for change.

If/when I do implement some kind of system based on these ideas, I will report that in another post.

I think that a wiki could be an excellent way for a group of programmers to maintain a codebase over time. Utility scripts could be encoded with documentation alongside them, with comments and examples included. If the wiki were version controlled, then all of that information would be as well, all alongside your code instead of in a different place. And if that wiki were a multigraph, then you could use graph theory to structure your code instead of a byte stream.

Here’s a made-up example: let’s say that there is a license/copyright comment block at the top of every one of the files in your code base. In real-world examples of this, there is often a script or some other automation that makes sure all of these comment blocks stay up to date. If source code were encoded in a graph, you could have a single vertex in the graph that holds that license text, and any number of references to it. If you have the system present the code as a file system (as a particular view into your graph data), then all the files would have that same comment block at the top.

How does this relate to Literate Programming?

This is an evolution of Literate Programming, and as such, the tangle and weave functions from that paradigm are required. The weave function creates a website that allows users to view the graph data as prose. The tangle function creates a filesystem view of the graph data, with the source code in plain text.

What types do we need for the multigraph?

So, the types needed to encode this change would have to expand to include different kinds of text blocks and a new kind of edge.

In the last post, the Wiki had only one type of block, which was just Text. However, to differentiate between kinds of content, we need at least two kinds of blocks: Prose and Code. If we wanted our wiki to be a polyglot environment, we could have the Code block record which language the code is in, either as Text or possibly as a sum type.

The Outline edge type can also be divided into two kinds, which I’m naming after the two Literate Programming functions: Tangle and Weave. Both the Tangle and Weave types could include a filepath as a property. So, during the tangle operation, the system would output the files as they should be for compilation, and the weave operation would generate the pages for the wiki for user viewing.
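
To make this concrete, here is a rough sketch, in plain JavaScript, of how these blocks and edges might be represented. Every property name here is a placeholder I made up for illustration, not part of any real schema:

    // Hypothetical constructors for the wiki multigraph; all names are illustrative
    const proseBlock = (id, text) => ({ id, kind: 'Prose', text });
    const codeBlock = (id, text, language) => ({ id, kind: 'Code', text, language });

    // Hyperlink edges connect blocks across documents; Tangle and Weave are the
    // two flavors of Outline edge, and each carries a filepath property
    const hyperlink = (from, to) => ({ kind: 'Hyperlink', from, to });
    const tangle = (from, to, filepath) => ({ kind: 'Tangle', from, to, filepath });
    const weave = (from, to, filepath) => ({ kind: 'Weave', from, to, filepath });

    // A wiki is then just a collection of blocks plus a collection of edges
    const wiki = {
        blocks: [
            proseBlock(1, 'This script prints a greeting and then exits.'),
            codeBlock(2, 'console.log("hello");', 'javascript'),
            codeBlock(3, 'process.exit(0);', 'javascript'),
        ],
        edges: [
            // Weave edges give the block order for the rendered wiki page
            weave(1, 2, 'docs/hello.html'),
            weave(2, 3, 'docs/hello.html'),
            // Tangle edges give the code block order for the generated source file
            tangle(2, 3, 'src/hello.js'),
        ],
    };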

The End

This is a fairly new idea for me, and I don’t know a lot of details as of yet. I’m going to read up on Tinkerpop and Goblin and see if I can’t test out some of these ideas on my own.

Thanks for reading!

Multiple Choice Language Feature Programming Languages

where I describe an a-la-carte language feature compiler toolchain

Semantics in general

When I was first learning programming, I thought that the syntax of a language (keywords, spacing, how to write functions, &c) was the only thing that I needed to learn. Then I started learning about something called programming language semantics. Semantics in general refers to the meaning behind words and symbols, both in linguistics and in programming language theory.

For an extended example, I took CS50, the Introduction to Computer Science course that Harvard offers via edx.org. In the 2014 version of the course, they taught C as the main language, so I learned about memory management with malloc and friends. Then, I taught myself Python after the course, and I learned about the “garbage collector,” and that was my first lesson in semantics. Even though the languages don’t make a specific syntactic declaration about how memory is managed, the semantics of the two runtimes are quite different.

Semantic Differences Matter

Something I’ve noticed in the many discussions I’ve had about programming languages at meetups and other places is that although syntax is something many programmers care deeply about, one thing that really sets languages apart is their semantics. Specifically, the language features that are built behind the scenes, like garbage collection, or a module system, and so on. This is also a source of language envy and innovation. Object orientation is a language feature that adds specific semantics to a computer language; ditto for functional semantics.

In theory, you could have the same syntax but many, many different semantic components, and you’d describe a whole slew of languages and their associated run-times.

What if we made that a feature? What if there were a language that allowed you to choose between multiple semantic components for each program you write? It would be similar to Racket’s #lang syntax, but maybe declared in a build description rather than in each file separately.

Semantics a-la-carte?

Imagine that there was a programming language system where all major language features were multiple choice.

Choice 1: Type Checker

What if we could choose which type checker we wanted to use within the same toolchain? Turn it off completely for simple scripts, turn it all the way up for complex applications.

  1. Unitype Checker: only report runtime errors, be completely dynamic
  2. Gradual Type Checker with type inference (with compilation errors?)
  3. Full Powered Type System with all the compilation errors

Choice 2: Memory Allocation

I’ve heard more than one programmer say that they miss C because of the simplicity of the system. I don’t know if that includes memory management, but I do know that in some cases, garbage collection is not appropriate. So, what if the garbage collector were optional?

  1. Manual Memory Management, like C
  2. Garbage Collection, like Python and Haskell

In fact, there could be multiple garbage collectors built into a toolchain, each with different properties. So, you could declare, at the start of a project, what kind of memory semantics you want, without changing the syntax, and with the type system also being a-la-carte.

Choice 3: Concurrency System

One thing about Python that hurts performance but makes the semantics easier for a beginner to understand is the GIL, which means only one thread can execute Python bytecode at a time. Well, in CPython at least. But what if Python were made in this a-la-carte fashion, and projects could decide whether they want the GIL or not? Or go even further, to a system like Erlang’s?

  1. Single-threaded only
  2. Simple Message Passing System
  3. Something like OTP/BEAM in Erlang
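
To give a flavor of what this could look like in practice, here is a purely hypothetical build description for such a toolchain, written as a JavaScript config object. No such toolchain exists, and every option name here is made up for illustration:

    // hypothetical-build.config.js -- an imagined a-la-carte build description.
    // None of this tooling exists; the option names are invented for this sketch.
    module.exports = {
        project: 'example-project',

        semantics: {
            // Choice 1: Type Checker -- 'unitype' | 'gradual' | 'full'
            typeChecker: 'gradual',

            // Choice 2: Memory Allocation -- 'manual' | 'gc'
            memory: 'gc',

            // Choice 3: Concurrency System -- 'single-threaded' | 'message-passing' | 'actors'
            concurrency: 'message-passing',
        },
    };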

How many “languages” is this?

What I’ve described here could be counted as 18 different languages (3 type checkers × 2 memory strategies × 3 concurrency systems). Or, if built right, it could be one language with 18 different operating modes.

This system would require something like 8 different components in the compilation toolchain, in addition to the usual ones, like the lexer/parser, assembler/linker, and so on. So, there would be a lot of work. But the upside is that more people could use the language to fit their own needs. Imagine a language where a component could be written as a deep research topic, and users could decide whether or not they want to use that component.

Specifically, look at GHC, the de facto compiler for Haskell. It has so many different language extensions, and often they come in groups. What if, instead of having to specify a collection of extensions for type-level programming, there were a single “Type-Level Programming” option? Then, when you wanted to level up your Haskell, you could turn that feature on, and that component could have its own focused documentation on how to use it properly.

Possible Problem: too much work

Would the additional work be worth the benefit? I have no idea.

Is this too weird of an idea?

Is there any merit to this proposed approach? I’ll openly admit it’s a thought experiment.

Comment below to let me know!

Harder Than Getting On A Bike

In my last post, I set out a goal for myself: to write more. Specifically, I set out these three activities:

  1. Write for 15 minutes every day. This includes diary entries, but is mostly meant for creative free-writing
  2. Post on this blog and on my social media accounts more often, say, weekly instead of occasionally
  3. Any new ideas about stories or other topics I think about, write them down! (There are so many things that I don’t write down but should; it is criminal)

As it turns out, it’s really hard to write completely new material every day.

I have not done much creative writing in the last 5 years, and it turns out I cannot just get back on the bike and have the exact same productivity that I had 7 or 10 years ago. Sure, the act of writing things down isn’t that different, nor is doing free association. Typing is easy.

However, I found that the stuff I typed out was not great. Here’s an entry:

The solubriousness of that word was just incredible, said Nelly not knowing what the word even meant. It’s almost as if words have no meaning!

Sorry, I’m a bit drunk and writing things down as I think them. We don’t ususally go to fancy dinner parties, especially not the Sandersons’. It’s just that “multitudediousness” is not a “solubriuos” word.

It’s a twenty dollar word

She said, oh, well excuse me for trying to be fancy! And you should drink less.

me (stone cold sober), 2020-01-06

My note to myself at that point was “listen to people again.”

I think my note to myself now is “think about what you used to do.”

So, I thought about what I used to do, and one thing I remember from college is that I actually started most of my work with handwritten notes that I then typed up and turned into final work. I also remember doing quite a lot of writing prompts at the time, to keep my skills sharp.

So, what I did was buy a writing prompts notebook (I’ll talk about the specific product later if I like it), and I’m going to handwrite my responses to the prompts right on its pages, like you are supposed to.

(I have this odd thing about not wanting to “ruin” notebooks; hopefully this will help me work on that.)

I also noticed some changes I wanted to make in the other ones, so I’ll update them now. I am changing my list to this:

  1. Do the writing exercises in a writing prompt book, at least one a day, if not more! And write in the book this time!
  2. Post on this blog and social media more, weekly instead of occasionally.
  3. All your ideas belong WRITTEN DOWN SOMEWHERE. It doesn’t matter how weird it is!

That’s all for now! Check back in 15 days, when I’ll evaluate how I’ve done.

2020 Resolution: Write More

Just a brief blog post about my New Year’s Resolution for 2020, which is to write more!

This includes the following:

  1. Write for 15 minutes every day. This includes diary entries, but is mostly meant for creative free-writing
  2. Post on this blog and on my social media accounts more often, say, weekly instead of occasionally
  3. Any new ideas about stories or other topics I think about, write them down! (There are so many things that I don’t write down but should; it is criminal)

I’m going to revisit this resolution at the end of January and see how I did.