This is an entry from a list of projects I hope to some day finish. Return to the backburnered projects index.

What is it? A tiny ML variant, designed for embedding.

It should of course have closures and anonymous functions and so forth. It should have algebraic data types—named sums, tuples, records for product types—and it should also be garbage-collected. It should have powerful pattern-matching constructs. It shouldn't be pure, in the same way that something like SML isn't pure: mutation would still need to use explicit ref types, but side-effectful functions should be allowed everywhere. It should have a basic exception system. It probably doesn't need a full module system, but some kind of namespacing would be nice.

I'm pretty neutral with respect to syntax, but I was imagining maybe borrowing some ideas from ReasonML. Over time I've become a fan of explicit delimiters for scope, but I also fully admit that syntax is probably the least interesting thing here.
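To make that concrete, here is a purely hypothetical scrap of PicoML. Every detail of the syntax is invented for illustration (leaning ReasonML-wards, as mentioned), but it shows the algebraic data types, pattern matching, and explicit refs described above:

type shape =
  | Circle(float)
  | Rect(float, float);

let area = (s) => {
  switch (s) {
  | Circle(r) => 3.14159 *. r *. r
  | Rect(w, h) => w *. h
  };
};

let count = ref(0);
count := count^ + 1;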

Most importantly, it should be easily embeddable in any language, exposing a C API that makes it easy to initialize the runtime, call into it, and expose callbacks to it. PicoML wouldn't be for writing programs on its own: it would be for writing scripts in larger programs, which means making it accessible to larger programs should be as simple as possible.
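To sketch what I mean, here is roughly the shape of C API I had in mind. Every name here (picoml.h, pml_init, and so on) is hypothetical, since none of this was ever pinned down:

/* all of these names are hypothetical: a sketch of the intended shape */
#include <stdio.h>
#include "picoml.h"

/* a host callback we want scripts to be able to invoke */
static pml_value host_log(pml_state *st, pml_value msg) {
    printf("script says: %s\n", pml_get_string(st, msg));
    return pml_unit(st);
}

int main(void) {
    pml_state *st = pml_init();            /* initialize the runtime */
    pml_register(st, "log", host_log);     /* expose a callback to scripts */
    pml_load_file(st, "script.pml");       /* load a script */
    pml_call(st, "main", pml_unit(st));    /* call into it */
    pml_free(st);
    return 0;
}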

Why write it? There are a number of languages which are explicitly designed for embedding in a larger program: Lua is one of the most prominent, but there are certainly many others, my personal favorites being Wren and Io.

These embedded languages rarely have types, and the handful that do (like Pike or Mun) often don't embrace functional programming in the way that I wanted. Entertainingly, though, the closest language to what I wanted PicoML to be is Gluon, and my biggest complaint there is that it's too Haskell-ey: it's got monads and GADTs, which feels to me like it goes too far in the other direction. There's a middle ground—an impure functional language with strong SML influence—and that's where I want PicoML to sit.

Why the name? Not that surprising: I wanted to get across the idea that this would be “an extra-tiny ML variant”.

#backburner #software #language

This is an entry from a list of projects I hope to some day finish. Return to the backburnered projects index.

What is it? A “web framework”, but over SSH instead of HTTP.

What are web frameworks about, on a basic level? You're taking a protocol meant originally for serving simple documents in a hierarchical file system, and instead of serving those documents you're doing various computations and responding with dynamic data. Your web framework needn't interpret a GET /foo/bar as actually looking up a file bar in a directory foo: you could have code attached that does whatever you want!

Cube Cotillion follows the same basic idea except with a different protocol: SSH. Usually, when you establish an SSH connection, you get dropped to a shell and the stuff you type gets interpreted as command-line invocations. Cube Cotillion implements an SSH server, but instead of passing strings to a shell, it lets you write arbitrary computations keyed on the commands you've written. If someone writes ssh my-cc-server foo bar, then instead of trying to run the command foo with the argument bar, you can do, well, whatever you want!

Cube Cotillion is implemented as a Haskell library that does pattern-matching over the command passed in over SSH and can invoke callbacks as necessary. Here's a simple example: this server understands the command greet, with or without a parameter, and will respond with a greeting in kind, using the bs command to respond with a ByteString:

main :: IO ()
main = do
  key <- loadKey "server-keys"
  cubeCotillion 8080 key $ do
    cmd "greet" $ do
      bs "Hello, world!\n"
    cmd "greet :name" $ do
      name <- param "name"
      bs (mconcat ["Hello, ", name, "!\n"])

Like a web framework, we can now do whatever we want in response to these commands: there's no underlying shell we're passing commands to, and greet isn't dispatching to another executable, it's just a single process figuring out what to do with the strings and responding in kind. We could build applications on top of this that use structured data (e.g. responding with JSON blobs instead of simple lines) or keep various pieces of internal state. Why not?
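As a quick sketch of the JSON idea: assuming bs will accept the lazy ByteString that aeson's encode produces (an assumption, since I never pinned down the types), a structured command could look like this, with the Cube Cotillion imports elided as in the example above:

{-# LANGUAGE OverloadedStrings #-}
import Data.Aeson (encode, object, (.=))

main :: IO ()
main = do
  key <- loadKey "server-keys"
  cubeCotillion 8080 key $ do
    cmd "status" $ do
      -- respond with a structured JSON blob instead of a plain line
      bs (encode (object ["status" .= ("ok" :: String), "uptime" .= (12345 :: Int)]))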

Why write it? This is far more on the experimental side: I have no idea if this would be useful for anything at all!

The thing that I like about using SSH as a transport mechanism is that it necessarily has a notion of secure client identity. For certain applications, having a built-in mechanism for verifying client identity would be wonderful, since that's something that needs to be built on top of HTTP applications again and again.

That said, I have no idea whether applications built on top of this system would be useful in practice or not. That was sort of the point: I wanted to experiment with using it as a tool for personal server control, but would it be useful for anything else? Who knows!

I also never got around to designing the right API for doing anything with the client identity, or the ability to do interactive sessions at all. There's a whole design space there that might be fun to explore.

Why the name? I wrote this in Haskell, and there are multiple web frameworks in Haskell named after Star Trek characters, including Spock and Scotty. Given that this was an analogous but wrong project, I decided to choose a name from Neil Cicierega's guide to the alien races in the Star Trek universe.

#backburner #software #experimental

This is an entry from a list of projects I hope to some day finish. Return to the backburnered projects index.

What is it? A reverse proxy and web server interface.

Aloysius was originally inspired by the old academic servers where a person could host static pages under their username. What I wondered was, how could I design a system like that for dynamic sites, where a given user account could run their own services via a reverse proxy but not conflict with any other service on the machine?

So the approach was based on directories and symlinks. The top-level configuration is a directory that contains zero or more subdirectories, each of which uses different files to describe both filters and forwarding logic. Filters are based on matching on either path (i.e. the HTTP request path) or domain (i.e. the server) or both, and then they can forward in one of three ways—maybe more over time, too, but these were the first three I experimented with—as specified by a mode file: either http (which forwards the request to host host on port port), or redir (which uses HTTP response code resp and redirects to host host), or aloysius, where you specify another directory by the symlink conf, and then recursively check configuration there.

That last one is important, because it allows you to symlink to any readable directory. One idea here is that different subdomains can map to different users on the same machine. For example, let's say you're keeping your Aloysius configuration in $ALOYSIUSDIR, and you've got a user on the machine yan who doesn't have root access but wants to be able to run some dynamic sites. You can set up something like the following:

# create a dir to forward yan's config
$ mkdir -p $ALOYSIUSDIR/00-yan
# match against yan.example.com
$ echo yan.example.com >$ALOYSIUSDIR/00-yan/domain
# use a recursive aloysius config
$ echo aloysius >$ALOYSIUSDIR/00-yan/mode
# and forward it to yan's home directory
$ ln -s /home/yan/aloysius $ALOYSIUSDIR/00-yan/conf
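On yan's side, the delegated directory can then contain entries of its own. Here's a sketch using the http mode described above (the entry name, host, and port are just for illustration):

# (on yan's side) one of yan's own services
$ mkdir -p /home/yan/aloysius/00-blog
# forward matching requests over HTTP...
$ echo http >/home/yan/aloysius/00-blog/mode
# ...to a service listening locally on port 8080
$ echo localhost >/home/yan/aloysius/00-blog/host
$ echo 8080 >/home/yan/aloysius/00-blog/port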

Now yan can write their own configuration for whatever dynamic web services they want—at least, to the extent that they're able to run user-level services—and it doesn't require giving them root access to change the other configuration for the reverse proxy. It falls directly out of Unix symlinks and permissions!

Why write it? Honestly, it's not terribly useful anymore: I run my own servers using mostly nginx and am pretty comfortable with configuration thereof. But I still think it's a cool idea, and I clearly think this directory-and-symlink-based system of configuration has legs (which I know I've written about before).

I still think it'd be convenient! After all, I have for a long time used a directory of files (along with a wildcard) to configure Nginx: this simply makes that the first-class way of doing configuration! But also, there's a lot of work that goes into writing a good web server, and this is also solving a problem which the tech world seems to no longer have: namely, how to shunt around requests to different web services all living on the same machine, since instead we abstract those away into their own containers and treat them like separate machines.

Why the name? The name Aloysius was chosen pretty much at random. I originally called it Melvil after Melvil Dewey due to a loose metaphor between the tiny scraps of information used in configuration and the card catalogues that Melvil Dewey had helped popularize, but only because at the time I didn't realize what an absolutely awful human being Melvil Dewey was.

#backburner #software #tool #web

This is an entry from a list of projects I hope to some day finish. Return to the backburnered projects index.

What is it? A protocol for writing RSS readers, heavily inspired by the maildir format.

The core idea of Lektor is that you separate out the typical operation of an RSS reader into two programs which can work concurrently over a specific directory. One program is called a fetcher, and its purpose is to read remote feeds (of any format) and write them into the lektordir format. The other program is called a viewer, and its purpose is to take the files inside the lektordir and view them.

I've got the full format specified elsewhere but the core idea is that you've got a specified directory called your lektordir, which in turn contains four directories: src, tmp, new, and cur. The src directory contains representations of feeds, new contains unread posts, and cur contains read posts. (The tmp directory is an explicit staging place: fetchers will create posts in there and then atomically move them to new, which means that it's impossible for a viewer to ever observe an incomplete post. Even if a fetcher crashes, it will only be able to leave garbage in tmp, and not in new.)

The representations of feeds and posts are themselves also directories, with files and file contents standing in for a key-value structure. That means a given feed is represented by a directory which contains at least an id (its URI) and a name (its human-readable name), while a post is a directory which contains at least an id (its URI), a title (its human-readable name), a content file (which is its content represented as HTML), and a feed (a symlink to the feed that produced it.) In both cases, there are other optional files, and also ways of representing fetcher- or viewer-specific metadata.
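Put together, a small lektordir might look something like this (the specific feed and post names are invented for illustration):

lektordir/
  src/
    example-feed/
      id                # the feed's URI
      name              # its human-readable name
  tmp/                  # staging area for in-progress posts
  new/
    some-post/
      id                # the post's URI
      title             # its human-readable title
      content           # its content, as HTML
      feed -> ../../src/example-feed
  cur/                  # read posts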

The use of directories means that fetchers and viewers don't even need a specific data serialization or deserialization format: unlike maildir, you don't even need to “parse” the format of individual entries. This means fetchers need only a bare minimum of functionality in order to write feeds—I had test programs which used basic shell scripts that wouldn't have been able to safely serialize JSON, but were able to write this format trivially.
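As a sketch, a complete (if hypothetical) fetcher really can be that small. Note the write-to-tmp-then-rename dance, which is what gives viewers their atomicity:

#!/bin/sh
# hypothetical weather fetcher; assumes $LEKTORDIR points at the lektordir
post="weather-$(date +%s)"
mkdir "$LEKTORDIR/tmp/$post"
echo "example.com/weather/$post" >"$LEKTORDIR/tmp/$post/id"
echo "Weather update" >"$LEKTORDIR/tmp/$post/title"
echo "<p>Sunny and mild.</p>" >"$LEKTORDIR/tmp/$post/content"
ln -s ../../src/weather "$LEKTORDIR/tmp/$post/feed"
# the atomic move means a viewer can never observe a half-written post
mv "$LEKTORDIR/tmp/$post" "$LEKTORDIR/new/$post"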

My intention is not just to design the format—which has already been pretty thoroughly specified, albeit in beta form and without having been subjected to much testing—but also to write the tools needed to build an RSS and maybe ActivityPub reader on top of it.

Why write it? Well, I wanted to write an RSS reader, because none of the existing ones are exactly what I want, and this seemed like a good way of splitting up the implementation into bite-sized chunks.

There are some interesting features you get out of separating out the fetcher and viewer operation. For one, fetchers become ridiculously simple: they don't even need to be full services; they can just be cron jobs which hit an endpoint, grab some data, and toss it into the lektordir. Instead of a reader which needs to support multiple simultaneous versions (e.g. Atom, various versions of RSS, newer ActivityStreams-based formats) you can have a separate fetcher for each. For that matter, you can have fetchers which don't strictly correspond to a real “feed”: for example, you can have one which uses a weather API to include “posts” that are just sporadic weather updates, or one which chunks pieces of your system log and puts one in your feed every day to give you status updates, or whatnot.

Separating out viewers means that you can now have multiple views on the same data as well. Maybe one viewer only cares about webcomics, so you can pull it up and read webcomic pages but let it omit blog posts. Another gives you a digest of headlines and emails them to you, leaving them unread (but of course omits anything you happen to have read during the day.) Another does nothing but pop up a message if you let unread posts pile up too long. Hell, you could have one that takes your combined posts and… puts them together into an RSS feed. Why not?

The cool thing about lektordir to me is that it lowers the barrier to entry to writing RSS-reader-like applications. It takes the component parts and splits them apart behind a well-defined interface. But there's a lot of cool stuff that you get from trying to apply a one-thing-well philosophy to problems like this!

Why the name? It's for reading, and lektor means “reader” in several languages. I'll probably change the name at some point, though, because there's also a plain-text CMS called Lektor.

#backburner #software #web #tool

This is an entry from a list of projects I hope to some day finish. Return to the backburnered projects index.

What is it? A microtonal music tracker.

A tracker is a piece of software for writing music in an almost spreadsheet-like way: trackers give you tables of notes with other pieces of metadata next to them, like volume or other effects. Writing in a tracker can often involve writing small pieces of repeating music that get assembled together with relatively few tracks using relatively few instruments. It's no surprise that trackers are often used for writing chiptunes!

Microtonal music is music that uses a non-twelve-note scale. The specifics of why and how are complicated (and I've written about them elsewhere) but the short handwavey explanation is this: the reason we use twelve notes per octave on most of the instruments you find in the Western world is that, when we split the octave into twelve equal or near-equal parts, we get notes which often sound good together. But that isn't the only way of carving up the space of possible sounds into notes that sound good together. We settled on twelve for lots of historical reasons, but there are plenty of other approaches that we could have taken that still sound good, although different!

So my goal for Hypsibius was to write a tracker that didn't hard-code the assumption that every note was a note on our traditional 12-note scale. Instead, I wanted the user to be able to create (and export or import) scales which describe which notes were available to a given composition by specifying them in terms of cents, allowing a composition to have access to either more or fewer possible notes. For that matter, I wanted users to be able to specify how those scales repeat: some systems of tuning don't repeat every octave, but use larger intervals, so a user might want to use the Bohlen-Pierce scale which repeats not every octave but every tritave.
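I never settled on a concrete file format, but the idea was that a scale description would boil down to very little data. A hypothetical sketch of an equal-tempered Bohlen-Pierce scale, for instance:

name:   Bohlen-Pierce (equal-tempered)
repeat: 1902   # repeats at the tritave (3:1), not the 1200-cent octave
steps:  13     # thirteen equal steps of roughly 146.3 cents each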

Once they had specified a scale, the user could then use that scale to write music, which could be played and exported. My plan was, at least at first, to stick to a relatively small number of waveforms—sine and square and saw waves—with the intention that most of the music created by Hypsibius would be chiptunes-like. Eventually I might add the ability to play other samples or soundfonts, but to begin with Hypsibius was going to be a pretty barebones affair: the interesting part isn't the instruments, but rather the ability to choose your palette before composing music.

Why write it? I've written before about microtonal tuning because it's a perennial fascination of mine. I love microtonal compositions, but on the other hand, I've found it's also somewhat hard for a casual and relatively inexperienced musician like myself to experiment with. Microtonal instruments are rare and software that supports microtonal music tends to be fiddly and rather hard to get familiar with: you often need to use explicit note-bending, since the MIDI format that underlies much of digital music hard-codes assumptions about 12-note scales.

On the other hand, one kind of music software that's straightforward, usually inexpensive, and requires no special equipment is the music tracker: there are plenty of open-source trackers out there, and some simple ones are even embedded in barebones software packages like the Pico-8 fantasy console. These are often ridiculously easy to get started with, with the major hurdle being their barebones, number-heavy interfaces, but those interfaces also make them even easier to create in the first place!

So that's why I figured I'd put them both together: trackers, being simple to make and use, could be a way of making microtonal music much more accessible. In particular, the idea of putting together a tracker which can be parameterized by arbitrary tunings simply by giving it a list of frequencies in cents would allow a musician to experiment with wildly unusual tunings in a trivial way.

Why the name? Hypsibius is a genus of tardigrades, also known as “water bears”. Tardigrades are microscopic eight-legged creatures found all around the world, but are also famously resilient to wildly extreme conditions, and have been found in a stunning range of temperatures and environments. Many people find them alien and fascinating, and I also find microtonal music to be alien and fascinating, hence the association. They'd make a great logo or mascot, too.

(I was actually going to call them “extremophiles” in this description, but I discovered while writing this post that that's incorrect: tardigrades can survive in extreme conditions but, unlike true extremophiles, they do not seek them out or thrive in them.)

#backburner #software #music

This is an entry from a list of projects I hope to some day finish. Return to the backburnered projects index.

What is it? A programming language designed as a tiny experiment: what if a dynamic language were built to emulate the affordances of a static language with typeclasses or traits?

In practice, many dynamic languages are designed around storing data in a handful of basic data structures—usually lists and associative arrays, sometimes adding a handful of others like sets—with more sophisticated data modeling done using some kind of object system. There are exceptions, of course—Erlang, for example, omits the object system entirely and sticks to simple data modeling plus processes—but it's definitely true of most of the commonly-used dynamic languages like Python and Ruby and JavaScript and Lua.

The experiment in Petri is that it uses a dynamic version of algebraic data types—which it calls tags, borrowing loosely from the Tulip language—and then provides a feature which allows functions to be implicitly looked up based on the tag. For example, the following program—using a syntax where keywords are marked explicitly with @ [1]—creates a tag called Person, then defines how to print it to the screen using the Repr class:

@tag Person {name, age, pets}
@impl Repr(Person) {
  @fn repr(p) {
    "Person({p.name}, {p.age}, {p.pets})"
  }
}

We can now write a function which creates a Person and prints a representation of that person to the screen:

@fn main() {
  person := Person {name: "Ari", age: 33, pets: []};
  print(Repr::repr(person))
}

Underneath the hood, Repr is a map from tags to bundles of functions, so when we call Repr::repr, it'll look up the tag of its argument and find the relevant implementations of the functions, and then call that with the relevant parameter. In practice, we can also “open” any module-like thing—i.e. anything where we'd use :: to access a member—so we can also @open Repr in the root scope and then call repr instead of Repr::repr. (In practice, the prelude will pre-open several commonly-used classes, and also provide syntactic sugar for classes like Eq and Add.)
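Under that scheme, the earlier example could also be written as follows (still a sketch, since Petri's syntax was never finalized):

@open Repr

@fn main() {
  person := Person {name: "Ari", age: 33, pets: []};
  print(repr(person))
}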

There's a little bit more practical sophistication to the typeclass mechanism, as well. For example, Eq, defined below, is a typeclass with two “type” parameters, and it also provides a default implementation, one which does run-time introspection to find out whether the class has an implementation for two specific values! Notice that, in the signature of Eq::eq, the parameters are capital letters, which reference the stand-in tag names that are parameters to the class Eq: this is how we indicate which argument tags to use to look up instances. It's possible to have methods that take extra parameters which are ignored for the purposes of instance lookup, in which case those parameters can be written as _.

@class Eq(A, B) {
  @fn eq(A, B);
  @default eq(a, b) {
    @if implemented(Eq(b, a)) {
      return Eq::eq(b, a)
    }
    False
  }
}

This also doesn't go over some other useful features for writing actual programs (e.g. a basic namespace mechanism, how file imports work, what other parts of the stdlib look like) but it gets at the core of the idea: a dynamic language that tries to replicate the affordances provided by features like Haskell's typeclasses or Rust's traits.

Why write it? To be totally clear, Petri is more of a UI experiment than any kind of programming language innovation! After all, there's nothing new semantically in this idea: it's just multimethods. In terms of what you can express, there's almost nothing you get that's not already present in CLOS. The thing I'm interested in trying with Petri is more about affordances, figuring out what programs look like when these features are expressed in this way. After all, Python and Ruby and (modern) JavaScript are all languages with very similar basic features—dynamic types, object systems which provide sugar over hashtables of values and functions, some simple built-in data structures, and so forth—but programs in all three can look radically different because of the specific way those low-level building-blocks are combined. What if we took a tack that, while not unique, was still several steps outside of the Python/Ruby/JavaScript world?

I've got a background in both static and dynamic languages: I used to program in a lot of Python and Scheme, but my first two jobs used primarily Haskell and eventually smatterings of Rust, and more recently I've been working on Ruby-related infrastructure at Stripe, which means I spend most of my work time these days in a dynamic language with a static type system over top of it. I'm always interested in things that can bring affordances from one into the other, whether that means bringing static features to dynamic languages (as here) or dynamic features to static languages.

Admittedly, I doubt I'd actually use this language for much, but I still love tiny experimental programming languages. It'd be fun to see how far it could be pushed in practice, to see what at least medium-sized example programs look like in something like Petri.

Why the name? It's a miniature experiment, and miniature experiments are often found in Petri dishes.

#backburner #software #language #experimental


  1. This is also a feature borrowed from Tulip, and one that I think is very cool in general but especially for languages which are still growing. Modern languages often have a need to expand their vocabulary of keywords, which means they sometimes reserve a set of keywords in case they're ever needed later, but even then languages might need to either use context-sensitive keywords to introduce keywords without breaking existing programs or break backwards-compatibility to introduce new keywords which weren't previously reserved. None of these are inherently bad choices, to be clear, but I like the Tulip approach: if all keywords are in a particular namespace, then there's never any ambiguity or chance that a future revision will break your program since your variable is now reserved. Admittedly, it also works best when there's a syntax with a relatively low number of keywords, like the Haskell style. If you used this same approach for something like SQL, your program would be incredibly noisy with the sheer number of @ characters!

This is an entry from a list of projects I hope to some day finish. Return to the backburnered projects index.

What is it? An authoring tool for simplistic shape grammars intended for generative art.

A shape grammar is a tool for building two-dimensional objects. Like all grammars, you can think of a shape grammar as either a recognizer of a language or as a generator of a language, and for our purposes it makes more sense to think of them in the latter sense. Also like other grammars, it consists of a set of rules, each of which has a left-hand side and a right-hand side: generating a language from a grammar is a process of finding rules whose left-hand side matches some part of an intermediate state, and then replacing that part with whatever is specified by the right-hand side. Similarly, some components are considered 'non-terminals', which are symbols that always appear on the left-hand side of rules and are meant to be replaced, while others are 'terminals' which form the final output: once a state contains no non-terminals, we can say it has been fully generated by the grammar.

The biggest distinction between a shape grammar and a traditional grammar, then, is that a shape grammar operates over, well, shapes. In the case of Palladio, the intention was for it to operate over sprites on a square grid. As an example, consider a grammar defined by these three rules:

A simple three-rule shape grammar

These rules are very simple: when we see a symbol on the left of an arrow, it means we try to find that symbol and replace it with whatever is on the right-hand side. That means if we see a black circle, we replace it with the symbols on the right: in this case, a specific configuration of a black square and two red circles. Similarly, when we see a red circle, we can choose one of the two possible rules that applies here: either we can replace it with a black square, or we can replace it with a light blue square.

This grammar is non-deterministic, but generates a very small number of possible outputs: rule A will produce one terminal and two non-terminals, and we can non-deterministically apply rules B or C to the resulting non-terminals. In fact, here we can show all four possible outputs along with the rules we apply to get them:

The four possible outputs of the above shape grammar

That said, it's not hard to write grammars which produce an infinite number of possible outputs. The following grammar is not terribly interesting, but it does exactly that: in particular, it produces an infinite number of vertical stacks of black squares.

A simple two-rule shape grammar that includes a rule which can apply recursively

We can't enumerate every possible output here, but you get the idea:

The first handful of outputs of the above shape grammar

My goal with Palladio was to write an authoring environment for grammars like these, with the specific intention of using them to produce tile-based pixel art pieces. This means not just providing the ability to create and edit rules like these, but also things like live outputs—possibly with a fixed random seed, for stability, and possibly with a large set of seeds, to see wide varieties—and maybe analysis of the output, alongside the ability to export specific results as well as the data used to create them.

Why write it? For fun and experimentation! I don't just love making art, I love making tools that make art, and my favorite way to do so is using grammars. Grammars act like tiny constrained programs over some value domain, and you can still bring a very human touch to writing a grammar that produces a specific kind of output.

Building a tool to support this that's both graphical and constrained to grids is a good way to make it easy to get started with. My plan was even to build tools to spin up Twitter bots as quickly as possible, so someone could design a grammar and then with minimal fuss start showing off the output of that grammar on the internet.

My original version (which I admittedly didn't get terribly far with) was written as a desktop app (in Rust and GTK) but I suspect that for the purposes of making it easy to use I'd probably reimplement it to have a web-based version as well, making it easy to open on any device. It'd be nice to also build simple-to-use libraries that can ingest the grammars described, as well, so they can be embedded in other applications like video games or art projects.

Why the name? After the architect Andrea Palladio, but specifically because one of the famous early papers about shape grammars in architecture was Stiny & Mitchell's 1978 paper The Palladian Grammar. Palladio was famously schematic in terms of how he designed and constructed houses, so Stiny & Mitchell were able to take those rules and turn them into mechanized rules which can express the set of possible villas that Palladio might have designed. For a more recent and accessible treatment of the subject, you might look at Possible Palladian Villas (Plus a Few Instructively Impossible Ones) by Hersey & Freedman.

#backburner #software #tool #procedural

This is an entry from a list of projects I hope to some day finish. Return to the backburnered projects index.

What is it? A modern, debuggable, interactive PostScript implementation.

The first part is boring, and in fact doesn't really fit my criteria for these posts, because “just implement PostScript well” is not a super interesting project idea. But it's really worth calling out that many of the existing implementations—not surprisingly—really aren't designed for human beings. For example, if I accidentally pop too many elements in GhostScript, I get an informative but not terribly well-designed error like so:

GS>pop
Error: /stackunderflow in --pop--
Operand stack:

Execution stack:
   %interp_exit   .runexec2   --nostringval--   % removed for space
Dictionary stack:
   --dict:729/1123(ro)(G)--   --dict:0/20(G)--   --dict:75/200(L)--
Current allocation mode is local

It's not hard to improve on the ergonomics here, since something like GhostScript isn't really designed for human ergonomics anyway!

But there's more to Endsay than that: I also wanted to design an interactive debugger for PostScript. To begin with, I want step-by-step execution and breakpoints in code, so I can walk through the process of watching my document being drawn on the screen. But I think we can do even better: for one, supporting some amount of rewinding, so we can rewind the state of the world and undraw things that we have drawn. Even further, I suspect we can set breakpoints based on graphical content as well: for example, selecting an area of the page to be rendered and then requesting that we break whenever that area gets drawn, or choosing a color and requesting that we break when that color is used. Something that interests me about this project is figuring out the right way to build and expose those primitives!

Why write it? The PostScript language is a weird beast, and in some ways it's the complete opposite of the languages I've pitched in these backburner posts: rather than a non-Turing-complete language for a task conventionally understood as the purview of executable logic, like in Parley, it's a Turing-complete language for a format conventionally understood to be declarative. PostScript is a dynamically-typed stack-based language, roughly like Forth, that was originally created for printers: rather than rendering expensive raster versions of your document on your computer, you could produce a simple, terse program which would produce your document, send that program to the printer, and let it handle the rendering.

There was a very very narrow slice of history where this actually made sense—where printers were good enough to need bigger raster files than computers could comfortably generate and send, for example—and at no point was PostScript really intended for human writing. PostScript lives on in more restricted forms: for example, the EPS format is PostScript under the hood, while the PDF format was effectively designed as pre-executed PostScript.

That said, you absolutely can, as a human being, write PostScript by hand. I happen to like writing PostScript a lot. I wouldn't be interested in writing a PostScript implementation for practical rendering of images—those exist already, and are fine—but I'd love to write an implementation that's more user-friendly, even if the user is just me!
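To give a flavor of it, here is a complete hand-written PostScript program, using nothing but standard operators, which draws a gray square centered on a US Letter page:

% a 200x200 point gray square, centered on a 612x792 point page
0.5 setgray
newpath
206 296 moveto
200 0 rlineto
0 200 rlineto
-200 0 rlineto
closepath
fill
showpage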

Plus, building debugging tools for creating 2D graphics sounds like a fascinating design question. I'd love to see what turns out to be useful and how best to expose these features!

Why the name? English is one of many languages to have had strong “linguistic purist” movements: that is to say, efforts to remove foreign influences by removing borrowed words and replacing them with native words or compounds. Thankfully, these efforts have mostly disappeared, despite the best efforts of tedious linguistic reactionaries like George Orwell.

While I by and large think that linguistic purism is an abysmally stupid endeavour, I nonetheless do appreciate the creative and artistic applications of linguistic purism. Consequently, I do enjoy reading about some of the alternative words once proposed as “pure” equivalents of foreign borrowings, just because they sound so whimsical and entertaining: for example, using starlore instead of “astronomy”, bendsome instead of “flexible”, and, of course, endsay instead of “postscript”.

#backburner #software #language

This is an entry from a list of projects I hope to some day finish. Return to the backburnered projects index.

What is it? A language for expressing recipes in graph form.

The usual presentation of a recipe is as a snippet of prose, doing things step-by-step. However, actual recipes aren't strictly linear: there are often parallel preparations that need to be done, like making a roux while the sauce simmers elsewhere. Prose is certainly capable of capturing this—prose is, after all, a default way of representing information for a good reason—but I'm also fond of notation and alternative ways of representing information. What if we could make the various branches of a recipe more explicit?

Apicius is a way of writing recipes that encodes explicit parallel paths. It looks more like a programming language:

eggplant rougail {
  [2] eggplants
    -> scoop flesh, discard skin
    -> mash
    -> $combine
    -> mix & [4tbsp] oil
    -> DONE;
  [2] white onions or shallots -> mince -> $combine;
  [2] hot peppers -> mince -> $combine;
}

How do we read this? Well, the bit before the curly braces is the name of the recipe: eggplant rougail. Inside the curly braces, we have a set of paths, each of which ends with a semicolon. Every path starts with one or more raw ingredients, with the amount set apart with square brackets. Once you're in a path, there are two things you can write: steps, which look like plain text, or join points, which start with a dollar sign.

What is a join point? It's a place where you combine multiple previously-separate preparations. In the above example, you have three different basic ingredients: eggplants, peppers, and onions or shallots. Each is prepared separately, and then later on mixed together: the join point indicates where the ingredients are combined.

There's also a fourth ingredient at one point above: at any given step or join point, you can add extra ingredients with &. The intention is that these are staples which require no initial preparation or processing: it's where you'd specify salt or pepper or oil in most recipes, ingredients which sometimes might not even make it onto the initial ingredients list because they're so simple and obvious.

Finally, there's a special step called DONE, which is always the last step.

In the above example, there's a relatively simple pattern happening: three ingredients are prepared and then joined. But with different uses of join points, you can express recipes that have multiple separate preparations that come together in a specific way, and that parallelism is now apparent in the writing. Here is a different recipe:

egg curry {
  [8] eggs -> boil 10m -> $a;
  [2] onions -> mince -> brown &oil -> stir -> $b;
  [2] garlic cloves + [1] piece ginger + [1] hot pepper + salt + pepper -> grind -> $b;
  [4] ripe tomatoes -> chop -> $c;
  $b -> stir &thyme -> $c;
  $c -> simmer 5m &saffron -> $a;
  $a -> stir 10m -> simmer 5m &water -> halve -> DONE;
}

This is rather more complicated! In particular, we now have three different join points, and a larger number of basic ingredients (combined with +). In parallel, we can boil the eggs, mince and brown the onions, grind seasonings together, and chop the tomatoes. The first two disparate sequences to join are actually the browned onions and the garlic/ginger/pepper/&c mixture: those are combined with some thyme, and afterwards the tomatoes are mixed in; after simmering with saffron, the boiled eggs are added, then all paths are joined.

I don't think this is terribly easy to follow in this format, especially when using non-descriptive join point names like $a. However, an advantage of this format is that it's machine-readable: this format can be ingested and turned into a graphic, like this:

The above recipe for egg curry, represented as a graph with certain steps leading to others

or a table, like this:

The above recipe for egg curry, represented as a table with cumulative preparations combined

or possibly even into a prose description, using natural language generation techniques to turn this into a more typical recipe. Marking the amounts in square brackets might also allow for automatic resizing of the recipe (although how to do this correctly for baking recipes—where linearly scaling the ingredients can result in an incorrect and non-working recipe—is a can of worms I haven't even considered opening yet.)

Why write it? Well, I love the presentation of recipes as graphs. I've actually got a whole web site of cocktail recipes in this form, which you can find at the domain cocktail.graphics, and I would love to get to the point that I could semi-automatically generate these diagrams from a snippet of Apicius:

A chart-based recipe for the Last Word cocktail

But I think—as I mentioned when I described the language for Parley—that there are many cool things you can do if you have generic data that can be re-interpreted and revisualized. As mentioned above, there are lots of ways you can take this format and use it to represent various structures. What I'd like to do, once I have the tooling to do so, is take a bunch of my favorite recipes (like sundubu-jjigae or ševid polow or jalapeño cream sauce) and convert them into Apicius, and then have some kind of front-end which can be used to view the same recipes using many different rendered formats.

Why the name? One of the few Latin-language cookbooks we have remaining copies of is usually called Apicius, although it's also referred to as De re coquinaria, which boringly translates to “On the topic of cooking.” It's not clear who Apicius was or if the Apicius referenced by the book was a single person who really existed: it may have been written by an otherwise-unattested person named Caelius Apicius, or it may have been named in honor of the 1st-century gourmet Marcus Gavius Apicius, or perhaps it was even authored by a group of people. (It doesn't help that the remaining copies we have are from much later and were probably copied down in the 5th century.)

#backburner #software #tool

This is an entry from a list of projects I hope to some day finish. Return to the backburnered projects index.

What is it? The digital tabletop I want for the tabletop games I play, scripted the way I want to script it. My original notes on the topic called this system Beholder, but more recently I've taken to calling it Parley.

The information display part is maybe a little bit less interesting, but fundamentally this is because of how I play tabletop games: what I want is first and foremost something wiki-like, and secondarily a mapping system. Many of the games I play don't care about fine-grained maps, but instead are concerned with characters and factions and whatnot: I want to bring that information front-and-center. I want a digital tabletop to function as a shared dossier, including the ability for players to add their own pieces of information (e.g. descriptions of NPCs or items) that are stored tagged with the player/character who wrote them. Maps should still be present, but they're not always the star of the show.

The scripting component is the more interesting part: I want the ability to express game rules using a total, decidable language with reusable components. I know that “this should use a decidable language” is a thing I've talked about before in these project posts—both Virgil and Van de Graaff include such a language, and they won't be the last—but I really am interested in systems that do decidable computation in a domain-specific way. The intention here is that I can support a new tabletop game or system by chaining together an appropriately-designed set of computational primitives.

Say I want to describe the rules of the game Apocalypse World. In particular, let's look at the Apocalypse World rule for Do Something Under Fire, which is described like this:

When you do something under fire, or dig in to endure fire, roll+cool. On a 10+, you do it. On a 7–9, you flinch, hesitate, or stall: the MC can offer you a worse outcome, a hard bargain, or an ugly choice. On a miss, be prepared for the worst.

I might begin (using handwavey, ML-inspired syntax here—I hadn't made any syntax choices before) by writing a macro which can be used to describe the way all dice rolls in Apocalypse World have three fixed outcomes:

let aw_roll(modifier:, success:, partial:, failure:) =
  match 2d6 + modifier {
    | n when n >= 10 => success
    | n when n <= 6  => failure
    | otherwise      => partial
  }

I can then express a specific move by instantiating this macro, using constructs to express that the modifier will be filled in with the character's relevant stat and the various outcomes will result in specific pieces of informative text.

let do_something_under_fire = aw_roll
  ( modifier: @char.cool,
    success:
      say("You do it."),
    partial:
      say("You flinch, hesitate, or stall: the MC can offer
           you a worse outcome, hard bargain, or ugly choice."),
    failure:
      say("Be prepared for the worst."),
  )

Finally, this and other macros will be assembled into a single set of rules which can be exposed to the player in a clean way, which gives the player a palette of the abilities at their disposal. Importantly, despite looking like an imperative program, the above code would actually be purely declarative: the result of do_something_under_fire is an abstract tree of possibilities, so the Parley system would be able to understand not just how to “run” it with a particular random roll, but also how to, for example, express that rule in prose, since it understands expressions like 2d6 + modifier symbolically and abstractly.

Why write it? Well, for one, basically every digital tabletop service I've used (like Roll20) assumes that you're playing Dungeons and Dragons or at least a game significantly like it. That means they build first and foremost around maps, and usually grid-based maps of the kind usable for D&D combat.

While I'm not against D&D, it's also not my favorite kind of tabletop game. As I mentioned before, I mostly prefer games that aren't focused on tactical grid-based combat. My usual games of Blades in the Dark or Apocalypse World are far more about the history and interactions of NPCs and factions, and maps tend to be sketchy and collaborative instead of the rigorous structuring principle of the whole game. I've mostly been using Roll20 for my games, but a bunch of the UI around shared notes feels like an afterthought, and I think designing for shared notes up-front (maybe borrowing some ideas from Notion, a tool which I love) would yield some great benefits.

I also am fascinated by the language design necessary to describe tabletop games, and I think that's the most appealing part of this project to me. You can certainly express the rules for a new tabletop game using Roll20's system… but that system is just, “Program it with JavaScript.” I don't think that's a bad design, to be clear! But I think there's a lot you can do if you sat down and designed a language specifically for the task of describing and implementing tabletop game rules, especially from the point of view of being able to statically analyze the game structure.

For example, I can imagine using the same underlying “programming language” to build out not just a web interface for Parley but also a kind of rigorous rule-book: the same declarative description could just as easily be analyzed and serialized as it could be run. That sort of interface could even allow for a game designer to start doing an abstract analysis of the patterns and possibilities inherent in the rules. There's also some really fascinating ideas to be borrowed from Chris Martens' linear-logic-based Ceptre language which I haven't even scratched the surface of.

Why the name? The original name, Beholder, was because it was fundamentally an information-display application, but also referenced the famous D&D monster of the same name. The newer name, Parley, is because it's also a chat-like application featuring a feed of dice rolls and results, and because Parley is a Dungeon World move used to talk to people.

#backburner #software #web