commit: 2b922fe18a85dd1972bf4c2ebe2c05b3fa27ef0e
parent a8381aa20f5700415c75dab2982fe82c8782801b
Author: Drew DeVault <sir@cmpwn.com>
Date: Tue, 9 Apr 2024 13:54:54 +0200
Drop remains of gemini support, finally
Diffstat:
198 files changed, 0 insertions(+), 6410 deletions(-)
diff --git a/content/_index.gmi b/content/_index.gmi
@@ -1,25 +0,0 @@
-```ASCII art of a rocket next to "Drew DeVault" in a stylized font
- /\
- || ________ ________ ____ ____ .__ __
- || \______ \_______ ______ _ __ \______ \ ___\ \ / /____ __ __| |_/ |_
- /||\ | | \_ __ \_/ __ \ \/ \/ / | | \_/ __ \ Y /\__ \ | | \ |\ __\
-/:||:\ | ` \ | \/\ ___/\ / | ` \ ___/\ / / __ \| | / |_| |
-|:||:| /_______ /__| \___ >\/\_/ /_______ /\___ >\___/ (____ /____/|____/__|
-|/||\| \/ \/ \/ \/ \/
- **
- **
-```
-
-# Drew DeVault's geminispace
-
-Contact: sir@cmpwn.com
-
-My Gemini-related projects:
-
-=> gmni.gmi gmni: a Gemini client
-=> gmnisrv.gmi gmnisrv: a Gemini server
-=> kineto.gmi kineto: an HTTP to Gemini proxy
-
-Gemini services:
-
-=> /cgi-bin/man.sh Look up a POSIX man page
diff --git a/content/_index.html b/content/_index.html
@@ -1,4 +1,3 @@
---
title: Drew DeVault's blog
-outputs: [html, gemtext]
---
diff --git a/content/blog/2020-Election-worker.gmi b/content/blog/2020-Election-worker.gmi
@@ -1,57 +0,0 @@
-In 2019, I was on the election board as a machine inspector for the primary election, the first election when the new voting machines were in use. And it's a good thing that I was — the opportunity to learn how the machines work, and to refine the voting process during a less contentious election, was valuable.
-
-Once 2020 rolled around, it became clear that this election was important, and was likely to be the subject of attempted subversion. I reached out to let the board of elections know that I was prepared to help out with this election as well, and reprised my role: machine inspector, division 18/19, PA. The machine inspector for 18/5 didn't show up, so I ended up doing their work as well until relief arrived partway through the day.
-
-NOTE: I made some educated guesses about the responsibilities of other poll workers, which I am not too familiar with myself. A couple of errors have been pointed out to me in those respects. Descriptions of roles other than machine inspector should be taken with a grain of salt.
-
-Officially, the only role of the machine inspector is to open the polls at the beginning of the day, keep the machines sanitized throughout, and close the polls at the end of the day. However, any qualified board member is also permitted to do most of the tasks required of the election board, so I helped out where I could throughout the remainder of my (15 and a half hour!) workday. We were issued basic surgical masks (I wore two) and face shields, and more than enough sanitizer to wipe the machines, pens, and working areas down with.
-
-At 6 AM, we arrived to set up for the 7 AM opening. I gathered my materials from the official Box of Election Things (it has a name, but I forgot it), and got to work. Each division had two voting machines. I removed the covers (checking and writing down the seal numbers), plugged them in, and removed the seal protecting the bin on the back — where, among other things, the blank paper ballots are stored. These were handed off to the other officials, and I opened up the administrative controls box. In here is a flash drive which stores (one copy of) the election results for this machine, a power and mode switching button, and another numbered seal. This other seal is used as a tamper-evident seal for the admin box itself, once I'm done with the initial setup. This box, and the area where the filled-in paper ballots are collected, are also locked with a barrel key that came with the election materials.
-
-Opening the poll machines involves turning on the machine, verifying some information matches our expectations (like the division number and the fact that no votes are already recorded), and entering the "election code". I won't repeat it here, but it's not a particularly secure password. It is made much more secure, however, by the fact that entering it is always accompanied with a physical measure in the admin box, which is kept locked and sealed. After these steps, the machines are ready to accept votes.
-
-For those readers who do not live in PA, the typical voting experience is as follows: when the voter arrives at the polling place, they're asked which ward and division they're voting in. The voter is handed an index card to write their name and ward/division on while they wait. Socially distant lines are formed for each division, and two voters per division are allowed into the building at a time. There were a total of four divisions voting at my location, each with four election board workers; two poll watchers; a small number of city workers; and up to two voters per division, averaging about 25-35 people in the room at peak times.
-
-The voter approaches their table, and gives their name and party affiliation. The party affiliation is used as a signal that something fishy might be going on if the ratio of R to D is very different from the votes recorded. There are three people at the table: one looking up registrations A-M, another for N-Z, and a third recording all of the voters in order. The voter is assigned a number (assigned ordinally from the first voter of the day), which is written on their card, in the third worker's book, and in the registration book. Voters who are voting in our division for the first time are required to show proof of address, and some voters have special circumstances which are specified in the registration book. If all is well, they'll be asked to sign the registration book, are issued a blank paper ballot, and are sent to the next open machine for that division.
-
-The paper ballot has a notch on one side indicating the orientation in which it is to be placed into the voting machine, via a slot on the right-hand side. The slot leads to a conveyor belt which pulls it into the machine underneath a transparent pane, and out of view. The touch screen asks the voter to select a language (English or Spanish), then presents them with their choices. There are options for changing the font size or contrast, and voters who have a greater degree of visual impairment can use the "ADA device". I've never had to set this up so I'm not sure how it works.
-
-Under specific circumstances, the voter may be accompanied into the booth. I am allowed to enter the voting booth if a voter asks, to help with any confusion with the machine. Additionally, one member of the public can be allowed to assist the voter ONLY if they and the voter fill out a form to consent to assistance, usually in the case of disability. I make an effort to avoid seeing the selections the voter makes, and I'm not allowed to offer any advice on what to vote for.
-
-Once the voter makes their choices, they tap the "Vote" button, and their choices are printed onto their ballot. The ballot slides back out, under the transparent pane, for them to inspect for errors. They have an opportunity to reject it and ask to vote again. Otherwise, the vote is pulled back into the machine and stored in the hopper, under lock, key, and tamper-evident seal in the back. The printed ballot includes bar codes which can be quickly scanned to read the vote, and also has the selections printed out in plain English.
-
-After this, the voter is done, they get their sticker and go home. Outside of the typical case, there are primarily two other kinds of voters: those surrendering their mail-in ballot, and those voting provisionally. Voters who surrender their mail-in ballot to the election officials are allowed to vote on the machine; we give them a sharpie to censor their ballot with (if they already filled it in), and a form to fill out with some basic information about the voter, then staple them together and store them for later. After this, they may vote normally.
-
-The provisional ballots are used for anyone whose registration status is uncertain, for example voters who are required to provide proof of address, but who cannot do so. We are not allowed to refuse any voters for any reason, and provisional ballots are the last measure for anyone we don't know how to deal with. The voter is given a form to fill out with their registration and other details in the presence of the judge of elections for our division, and then the judge and another worker sign it. The voter is given a special paper ballot and a secrecy envelope, then brought to a private area to write in their votes and seal them in the secrecy envelope. When they're done, they return the envelope to the election board, we enclose it in the first envelope, and then add a sticker with a unique serial number on it. The same serial number is printed on a card which is given to the voter, which they can use to track the status of their provisional ballot on a website or via the phone, to make sure it's accepted and recorded.
-
-During the day, I helped out with a little bit of everything, here and there. Mainly I took care of unusual voters, those surrendering mail-in ballots or voting provisionally, to (1) make sure the rest of the election board was free to keep the voters moving, and (2) to stave off the boredom which comes with my official responsibilities being separated by 13 hours of doing nothing. There was one voter who insisted that we were going to throw away her provisional ballot, while another board worker and I patiently explained how the serial numbers worked, how she could use them to make sure her vote was counted, and that none of us wanted to go to jail over her vote.
-
-The other event came when a discrepancy developed between the number of votes recorded by the machine and the number of voters tallied in the books: one vote was recorded in the books that was not recorded by the machines. We ultimately concluded that someone had just walked out of the polling place after they were handed their ballot, never casting their vote on the machine. Other possibilities which we ultimately ruled out were election board error, or that the voter voted in the wrong division after receiving their ballot. The redundant records — the index card, the registration book, the list of voters — helped to narrow down the possible causes. In the end, there was nothing we could do about it.
-
-The polls close at 8 PM. In 2019, a large share of the votes came in at the end of the day, and we had to keep the polls open longer — anyone who is in line at 8 PM is entitled to vote. This year, unexpectedly, we received hardly any voters in the last few hours of the day. At exactly 8 PM, I started closing down the machines, and this is when things began to go wrong.
-
-First: I will explicitly clarify that none of the problems I'm about to describe ever had the slightest risk of causing votes to be miscounted or lost.
-
-At the end of the day, the machines print out eight receipts with tallies of the vote. Each copy is signed by the election board, then one is sent to city hall, one is kept by the judge of elections for at least a year, another two are given to representatives from each party, another is taped to the front door of the polling place for public inspection, and so on. However, my machines stopped before printing all of the receipts: one after 7, and one after just 5. The kicker: the only receipt which includes the full list of write-in votes is the last one, and that's the one which goes to city hall to provide (one of many) redundant sources for the vote.
-
-The machine malfunction was unexpected and required improvisation, which is something I really did not want to do. I first called up the machine hotline (one of several numbers provided to the election board to deal with issues that arose on the day), and I was connected to an expert. They told me to reboot the machine and hit "print receipt" as many times as I needed to get the appropriate number of receipts. After I got off the phone with them, however, I discovered that this approach would not print off the write-in votes. When I tried to call back to enquire further, I got a busy tone. I expect similar problems were happening across the state; it affected both of my machines and another one in the room.
-
-I was probably better equipped to address this than most divisions, as a computer expert by trade, but I was still skeeved out about doing anything to the machines which wasn't written down in my book. I was more skeeved out about the possibility of not counting write-in votes, however, so I decided to very carefully start pressing buttons. I guessed correctly at how to summon the administrative menu (something we had access to but were never trained on how to use), and poked around until I found some receipt printing tool that could prepare various kinds of reports. The specific report we needed to deliver to city hall was not available, but there were two reports which, together, contained the same information as the desired report. I printed this out on both machines, stapled them together, then included with them a signed note describing the undocumented process I used to obtain them. Then I went to the third machine in our room (for another division) which had the same malfunction, and repeated the process. The hardest part during this process was making sure none of the other well-meaning poll workers touched the machines — no, you there, DO NOT touch my machines, god dammit, you are not helping, fuck off! As soon as I found a button labelled "erase media", I did not want anyone else anywhere near these machines.
-
-Ultimately, the votes are extremely secure regardless of this incident. The paper votes have two levels of redundancy, the plain-English votes and the barcode, and are stored in a locked and sealed hopper. The votes are digitally tallied on the USB stick in the machine, and again on each of the receipts. These records all also have to line up with the two books. All of these redundant sources of truth are handed to several different people, of different affiliations, and brought to different places. It would be very difficult to make an important error at any step, let alone to commit voting fraud. Regardless, every discrepancy and divergence from procedure was meticulously recorded to make it as easy as possible for city hall to make sense of the records.
-
-The rest of the process is straightforward, and much less stressful. The seal on the admin box is checked, removed, and recorded, and the machine is powered off. The flash drive is removed and placed into bag A, along with the signed receipts from the machine and a few other records. The ballot hopper bins are removed, secured with a numbered seal, and placed into bag B, along with the provisional ballots, surrendered mail-in ballots, and some other records. A pair of police officers arrives after the polls close to collect these, and they take them to city hall (having the police handle these skeeves me out, too, but again — the system is so redundant and secured that any funny business on their part would be discovered). The poll official's handbook didn't cover this, but I also took the opportunity (both this year and last) to record the officers' badge numbers.
-
-Actually, the police officers were the subject of another sketchy incident. The same officers arrived earlier in the day, while the polls were open, to drop off some materials. There are strict laws regulating police and poll workers while the polls are open. Police are not allowed within 100 feet of the polling location, unless they're actively voting or there on official business at the request of the election board. Additionally, anyone working at the polls in an official capacity is not allowed to wear any clothing with political messaging.
-
-One of the officers who showed up earlier in the day was wearing a hoodie with the American flag on it, and the US armed forces oath of enlistment printed in place of the white stripes. As they were leaving, I pulled the officer aside and asked him to remove the hoodie before making his next stop, politely explaining the law. Another poll worker affirmed my instruction as legitimate. Later, after the polls closed, the same officer returned to pick up the ballots (with the same hoodie on, but now that the polls were closed it didn't matter). Apparently, when I was photographing the badges of the officers as a matter of record, he believed that I was planning to file a complaint. I didn't catch on, but another poll worker tipped me off.
-
-On my way out of the polling location after everything was closed up, I passed the officers and stopped to clarify that I was just recording the badge number as a matter of course, and that there weren't any hard feelings over the hoodie — just trying to do my job. He was with 5 or 6 other officers at the time, and he started to puff up and make a scene. I didn't feel comfortable with the situation, so I just left, and with that, my long election day came to an end.
-
-Overall, the election day experience was decent enough. The voters were mostly polite and gracious, and the community was lovely — lunch, snacks, and bottled waters for poll workers throughout the day were donated by local businesses. I made $250 for my efforts, which probably won't show up for several months. If you have any ideas for who I should donate it to, let me know — usually my default is the EFF, but there are bigger problems right now.
-
-EDIT: After answering a few questions about this post, I think it would be helpful to share this link:
-
-=> https://files7.philadelphiavotes.com/election-workers/Primary_2020_Election_Board_Training_Guide.pdf Guide for Election Board Officials in Philadelphia County (PDF)
-
-EDIT 10:05 PM: Added note regarding other election board roles.
diff --git a/content/blog/2020-Election-worker.md b/content/blog/2020-Election-worker.md
@@ -1,5 +0,0 @@
----
-title: My experience as a poll worker in Pennsylvania
-date: 2020-11-10
-outputs: [gemtext]
----
diff --git a/content/blog/2021-01-02-Every-commit-should-be-perfect.gmi b/content/blog/2021-01-02-Every-commit-should-be-perfect.gmi
@@ -1,45 +0,0 @@
-Good commit discipline with git pays dividends in many respects, most of which are difficult to achieve with other version control systems. Each commit to your repository’s main branch should be exactly the correct size, be it large or small, such that it introduces one atomic unit of change. It should fix one bug, introduce one feature, refactor one system, and do so completely, rather than spreading the change out across several commits.
-
-Don’t invoke reductio ad absurdum here — the right size is not necessarily the smallest possible size. If you’re making a lot of similar changes in a refactoring effort, one large commit is better than a hundred commits, one for every affected function. On the other hand, avoid introducing several logically unrelated changes in a single commit. There’s a balance you must strike: seek the correct size for your change.
-
-There are numerous advantages to taking this approach. Some of the strongest are that every commit in your repository can be expected to be relatively sane and useful, such that you can check out any version of your code and expect all features which are present to be in approximately good working order. This is nice for an individual digging through the history, but it’s especially nice when you consider how well this composes with tools like git bisect to do things like automatically search through thousands of commits to identify which one introduced a bug.
-
-=> https://git-scm.com/docs/git-bisect git bisect manual
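-
-For example, with a script that exits non-zero when the bug is present, the search runs entirely unattended. A sketch, where v1.0 and test.sh stand in for a known-good tag and your own test command:
-
-```
-git bisect start
-git bisect bad HEAD      # the bug is present here
-git bisect good v1.0     # and known to be absent here
-git bisect run ./test.sh # binary-search the commits in between
-```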
-
-If you have good commit message discipline as well, another advantage of this approach is free changelogs via git shortlog. Linus Torvalds uses this for Linux (example: Linux 5.10 announcement). He’ll do a short write-up about the release in prose, followed by the detailed changes generated by git shortlog.
-
-=> https://git-scm.com/docs/git-shortlog git shortlog manual
-=> https://lore.kernel.org/lkml/CAHk-=whCKhxNyKn1Arut8xUDKTwp3fWcCj_jbL5dbzkUmo45gQ@mail.gmail.com/T/#u Linux 5.10 release notes
-
-To do something similar with your own commit messages, consider using the following style:
-
-```
-subsystem: change foo to bar
-
-foo has long been a point of technical debt for subsystem, causing problems A,
-B, and C. Following discussions on how to address this[0], bar was settled on as
-the solution.
-
-This changes foo to bar throughout subsystem. Follow-up work will consider if a
-similar change is appropriate for othersystem.
-
-[0] https://example.org/archive/foo-to-bar-proposal
-```
-
-Keep your subject line short, starting with a prefix (“subsystem:”) indicating the affected part of the software, then a sentence which completes the following phrase: “when applied, this commit shall…”. Instant changelogs! Following up with additional details, explaining the context, rationale, trade-offs, and follow-up work incurred by the change, is a good way to elaborate for anyone who wants to learn more, today or tomorrow.
-
-It is, of course, difficult to maintain the necessary level of discipline to produce perfect, atomic units of change. Knowing exactly how large your change needs to be, and ensuring that it is free of bugs on the first try, is an unreasonable ask for most. You may prefer to take an incremental approach to your change, making several small commits over time. Git comes to our rescue again, providing several tools to aid in this effort, chief among them being git rebase.
-
-=> https://git-rebase.io/ Read my git rebase tutorial
-
-When you’re ready to bring your changes upstream, you can use git rebase to cut, paste, merge, split, reorder, and rewrite commits as necessary to form a more perfect (and useful!) commit history, such that this principle of atomic change is upheld. The record of your incremental progress is not lost — see git reflog — and you will earn all of the advantages of a clear, concise, and correct commit log. As you receive feedback on your patch and make updates, continue to rebase and improve your original commit, or commits.
-
-=> https://git-scm.com/docs/git-reflog git reflog manual
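-
-As a sketch, tidying up a messy feature branch before sending it upstream looks something like this (origin/master standing in for whatever your upstream branch is):
-
-```
-git rebase -i origin/master
-# In the editor which opens, mark each commit as pick, reword, edit,
-# squash, or fixup, and reorder lines freely; saving the todo list
-# rewrites the branch accordingly.
-```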
-
-Another great tool to make this easier is git add -p or git commit -p, which breaks down your uncommitted changes into smaller hunks, asking you which changes to include in your commit on a line-by-line basis. This is helpful when you’re working on entangled problems and it’s not clear how to split them until you finish working, or if you make minor, unrelated fixes here and there as you work on a problem.
-
-=> https://git-rebase.io/#edit git commit -p details on git-rebase.io
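-
-If you haven’t tried it, here is a quick reference for the most useful answers at each prompt (abridged from git’s own help):
-
-```
-$ git add -p
-# for each hunk, answer:
-#   y - stage this hunk
-#   n - do not stage this hunk
-#   s - split this hunk into smaller hunks
-#   e - manually edit the hunk
-#   q - quit without staging any further hunks
-```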
-
-When you get good at these tools, you’re able to treat the state of your git repository as an extension of your normal thought process. You can come up with novel solutions and workflows on the fly, freely composing tools in such a way that you can work on many problems at once and make frequent mistakes, and still emerge with a tidy commit log. Learning git like the back of your hand, studying its deep features, and grokking its internals, are investments which will pay off massively for you in the future. If you strive to become git literate, and exercise good git discipline, then you will be rewarded with convenience, flexibility, and robustness.
-
-=> https://drewdevault.com/2020/04/06/My-weird-branchless-git-workflow.html Recommended reading: My unorthodox, branchless git workflow
diff --git a/content/blog/2021-01-02-Every-commit-should-be-perfect.md b/content/blog/2021-01-02-Every-commit-should-be-perfect.md
@@ -1,5 +0,0 @@
----
-title: Every commit should be a perfect, atomic unit of change
-date: 2021-01-02
-outputs: [gemtext]
----
diff --git a/content/blog/2022-01-15-The-RISC-V-experience.gmi b/content/blog/2022-01-15-The-RISC-V-experience.gmi
@@ -1,23 +0,0 @@
-I’m writing to you from a Sway session on Alpine Linux, which is to say from a setup quite similar to the one I usually write blog posts on, save for one important factor: a RISC-V CPU.
-
-I’ll state upfront that what I’m using is not a very practical system. What I’m going to describe is all of the impractical hacks and workarounds I have used to build a “useful” RISC-V system on which I can mostly conduct my usual work. It has been an interesting exercise, and it bodes well for the future of RISC-V, but for all practical purposes the promise of RISC-V still lives in tomorrow, not today.
-
-In December of 2018, I wrote an article about the process of bootstrapping Alpine Linux for RISC-V on the HiFive Unleashed board. This board was essentially a crappy SoC built around a RISC-V CPU: a microSD slot, GPIO pins, an ethernet port, a little bit of RAM, and the CPU itself, in a custom form-factor. Today I’m writing this on the HiFive Unmatched, which is a big step up: it’s a Mini-ITX form factor (that is, it fits in a standardized PC case) with 16G of RAM, and the ethernet, microSD, and GPIO ports are complemented with a very useful set of additional I/O via two M.2 slots, a PCIe slot, and a USB 3 controller, plus an SPI flash chip. I have an NVMe drive with my root filesystem on it and an AMD Radeon Pro WX 2100 GPU installed. In form, it essentially functions like a standard PC workstation.
-
-I have been gradually working on bringing this system up to the standards that I expect from a useful PC, namely that it can run upstream Alpine Linux with minimal fuss. This was not really possible on the previous SiFive hardware, but I have got pretty close on this machine. I had to go to some lengths to get u-Boot to provide a working UEFI environment, and I had to patch grub as well, but the result is that I can write a standard Alpine ISO to a USB stick, then boot it and install Alpine onto an NVMe normally, which then boots itself with UEFI with no further fiddling. I interact with it through three means: the on-board UART via a micro-USB cable (necessary to interact with u-Boot, grub, or the early Linux environment), or ethernet (once sshd is up), or with keyboard, mouse, and displays connected to the GPU.
-
-Another of the standards I expect is that everything runs with upstream free software, perhaps with a few patches, but not from a downstream or proprietary tree. I’m pleased to report that I am running an unpatched mainline Linux 5.15.13 build. I am running mainline u-Boot with one patch to correct the name of a device tree node to match a change in Linux upstream. I have a patched grub build, but the relevant patches have been proposed for grub upstream. I have a smattering of patches applied to a small handful of userspace programs and libraries, but all of them only call for one or two patches applied to the upstream trees. Overall, this is quite good for something this bleeding edge — my Pinephone build is worse.
-
-I have enclosed the system in a mini-ITX case and set it down on top of my usual x86_64 workstation, then moved a few of my peripherals and displays over to it to use it as my workstation for the day. I was able to successfully set up almost all of my standard workstation loadout on it, with some notable exceptions. Firefox is the most painful omission — bootstrapping Rust is an utter nightmare and no one has managed to do it for Alpine Linux riscv64 yet (despite many attempts and lots of hours wasted), so anything which depends on it does not work. librsvg is problematic for the same reason; I had to patch a number of things to be configured without it. For web browsing I am using visurf, which is based on Netsurf, and which works for many of the lightweight websites that I generally prefer to use, but not for most others. For instance, I was unable to address an issue that was raised on GitLab today because I cannot render GitLab properly on this browser. SourceHut mostly works, of course, but it’s not exactly pleasant — I still haven’t found time to improve the SourceHut UI for NetSurf.
-
-Complicating things is the fact that my ordinary workstation uses two 4K displays. For example, my terminal emulator of choice is foot, but it uses CPU rendering and the 4K window is noticeably sluggish. Alacritty, which renders on the GPU, would probably fare better — but Rust spoils this again. I settled for st, which has acceptable performance (perhaps in no small part thanks to being upscaled from 1080p on this setup). visurf also renders on the CPU and is annoyingly slow; as a workaround I have taken to resizing the window to be much smaller while actively navigating and then scaling it back up to full size to read the final page.
-
-CPU-bound programs can be a struggle. However, this system has a consumer workstation GPU plugged into its PCIe slot. Any time I can get the GPU to pick up the slack, it works surprisingly effectively. For example, I watched Dune (2021) today in 4K on this machine — buttery smooth, stunningly beautiful 4K playback — a feat that my Pinebook Pro couldn’t dream of. The GPU has a hardware HEVC decoder, and mpv and Sway can use dmabufs such that the GPU decodes and displays each frame without it ever having to touch the CPU, and meanwhile the NVMe is fast enough to feed it data at a suitable bandwidth. A carefully configured obs-studio is also able to record my 4K display at 30 FPS and encode it on the GPU with VAAPI with no lag, something that I can’t even do on-CPU on x86_64 very reliably. The board does not provide onboard audio, but being an audiophile I have a USB DAC available that works just fine.
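-
-If you want to reproduce the zero-copy path, it is roughly what you get from mpv with VAAPI decoding enabled. A sketch of the flags, not my exact configuration:
-
-```
-mpv --hwdec=vaapi --vo=gpu video.mkv
-```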
-
-I was able to play Armagetron Advanced at 120+ FPS in 4K, but that’s not exactly a demanding game. I also played SuperTuxKart, a more demanding game, at 1080p with all of the settings maxed out at a stable 30 FPS. I cannot test any commercial games, since I’m reasonably certain that there are no proprietary games that distribute a riscv64 build for Linux. If Ethan Lee is reading this, please get in touch so that we can work together on testing out a Celeste build.
-
-My ordinary workday needs are mostly met on this system. For communication, my mail setup with aerc and postfix works just fine, and my normal Weechat setup works great for IRC. Much like any other day, I reviewed a few patches and spent some time working on a shell I’ve been writing in our new programming language. The new language is quite performant, so no issues there. I think if I had to work on SourceHut today, it might be less pleasant to work with Python and Go, or to work on the web UI without a performant web browser. Naturally, browsing Geminispace with gmnlm works great.
-
-So, where does this leave us? I have unusually conservative demands of my computers. Even on high-end, top-of-the-line systems, I run a very lightweight environment, and that’s the way I like it. Even so, my modest demands stress the limits of this machine. If I relied more on a web browser, or on more GUI applications, or used a heavier desktop environment, or heavier programming environments, I would not be able to be productive on this system. Tomorrow, I expect to return to my x86_64 machine as my daily workstation and continue to use this machine as I have before, for RISC-V development and testing over serial and SSH. There are few use-cases for which this hardware, given its limitations, is adequate.
-
-Even so, this is a very interesting system. The ability to incorporate more powerful components like DDR4 RAM, PCIe GPUs, NVMe storage, and so on, can make up for the slow CPU in many applications. Though many use-cases for this system must be explained under strict caveats, one use-case it certainly offers is a remarkably useful system with which to advance the development of the RISC-V FOSS ecosystem. I’m using it to work on Alpine Linux, on kernel hacking projects, compiler development, and more, on a CPU that is designed in adherence to an open ISA standard and runs on open source firmware. This is a fascinating product that promises great things for the future of RISC-V as a platform.
diff --git a/content/blog/2022-01-15-The-RISC-V-experience.md b/content/blog/2022-01-15-The-RISC-V-experience.md
@@ -1,7 +1,6 @@
---
title: The RISC-V experience
date: 2022-01-15
-outputs: [html, gemtext]
---
I'm writing to you from a Sway session on Alpine Linux, which is to say from a
diff --git a/content/blog/A-culture-of-stability-and-reliability.gmi b/content/blog/A-culture-of-stability-and-reliability.gmi
@@ -1,13 +0,0 @@
-There’s an idea which encounters a bizarre level of resistance from the broader software community: that software can be completed. This resistance manifests in several forms, perhaps the most common being the notion that a git repository which doesn’t receive many commits is abandoned or less worthwhile. For my part, I consider software that aims to be completed to be more worthwhile most of the time.
-
-There are two sources of change which projects are affected by: external and internal. An internal source of change is, for example, a planned feature, or a discovered bug. External sources of change are, say, when a dependency makes a breaking change and your software has to be updated accordingly. Some projects will necessarily have an indefinite source of external change to consider, often as part of their value proposition. youtube-dl will always evolve to add new sites and workarounds, wlroots will continue to grow to take advantage of new graphics and input hardware features, and so on.
-
-Any maintained program will naturally increase in stability over time as bug fixes accumulate, towards some finite maximum. However, change drives this trend in reverse. Introducing new features, coping with external change factors, even fixing bugs, all of this often introduces new problems. If you want to produce software which is reliable, robust, and stable, then managing change is an essential requirement.
-
-To this end, software projects can, and often should, draw a finish line. Or, if not a finish line, a curve for gradually backing off on feature introduction, raising the threshold of importance by which a new feature is considered.
-
-Sway, for instance, was “completed” some time ago. We stopped accepting most major feature requests, preferring only to implement changes which were made necessary by external sources: notably, features implemented in i3, the project sway aimed to replace. The i3 project announced this week that it was adopting a similar policy regarding new features, and thus sway’s change management is again reduced in scope to only addressing bugs and performance. Sway has completed its value proposition, and now our only goal is to become more and more stable and reliable at delivering it.
-
-scdoc is another project which has met its stated goals. Its primary external source of change is roff — which is almost 50 years old. Therefore, it has accumulated mainly bugfixes and robustness over the past few years since its release, and users enjoy a great deal of reliability and stability from it. Becoming a tool which “just works” and can be depended on without a second thought is the only goal going forward.
-
-Next time you see a git repo which is only getting a slow trickle of commits, don’t necessarily write it off as abandoned. A slow trickle of commits is the ultimate fate of software which aims to be stable and reliable. And, as a maintainer of your own projects, remember that turning a critical eye to new feature requests, and evaluating their cost in terms of complexity and stability, is another responsibility that your users are depending on you for.
diff --git a/content/blog/A-culture-of-stability-and-reliability.md b/content/blog/A-culture-of-stability-and-reliability.md
@@ -1,7 +1,6 @@
---
title: Fostering a culture that values stability and reliability
date: 2021-01-04
-outputs: [html, gemtext]
---
There's an idea which encounters a bizarre level of resistance from the broader
diff --git a/content/blog/A-few-ways-to-make-money-in-FOSS.gmi b/content/blog/A-few-ways-to-make-money-in-FOSS.gmi
@@ -1,50 +0,0 @@
-I work on free and open-source software full time, and I make a comfortable living doing it. And I don’t half-ass it: 100% of my code is free and open-source. There are no proprietary add-ons, no periodic code dumps, just 100% bona-fide free and open source software. Others have often sought my advice — how can they, too, make a living doing open source?
-
-Well, there’s more than one way to skin a cat. There are many varieties of software, each with different needs, and many kinds of people, each with different needs. The exact approach which works for you and your project will vary quite a bit depending on the nature of your project.
-
-I would generally categorize my advice into two bins:
-
-* You want to make money from your own projects
-* You want to make money participating in open source
-
-The first one is more difficult. We’ll start with the latter.
-
-## Being employed in FOSS
-
-One way to make money in FOSS is to get someone to pay you to write free software. There are lots of advantages to this: minimal personal risk, at-market salaries, benefits, and so on, but at the cost of not necessarily getting to choose what you work on all the time.
-
-I have a little trick that I often suggest to people who vaguely want to work “in FOSS”, but who aren’t trying to find the monetization potential in their own projects. Use git to clone the source repositories for some (large) projects you’re interested in, the kind of stuff you want to work on, and then run this command:
-
-```
-git log -n100000 --format="%ae" | cut -d@ -f2 | sort | uniq -c | sort -nr | less
-```
-
-=> https://drewdevault.com/2020/08/10/How-to-contribute-to-FOSS.html See also: I want to contribute to your project, how do I start?
-
-This will output a tally of commits per email domain over the last 100,000 commits. This is a good set of leads for companies who might be interested in paying you to work on projects like this 😉
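-
-The output is a commit count next to each domain, sorted descending. The numbers and domains here are invented for illustration, but the shape looks like this:
-
-```
-  24112 linux.intel.com
-  18345 redhat.com
-   9812 kernel.org
-   4207 collabora.com
-```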
-
-Another good way is to explicitly seek out large companies known to work a lot in FOSS, and see if they’re hiring in those departments. There are some companies that specialize in FOSS, such as Red Hat, Collabora, and dozens more; and there are large companies with FOSS-specific teams, such as Intel, AMD, IBM, and so on.
-
-## Making money from your own FOSS work
-
-If you want to pay for the project infrastructure, and maybe beer money for the weekend, then donations are an easy way to do that. I’ll give it to you straight, though: you’re unlikely to make a living from donations. Programmers who do are a small minority. If you want to make a living from FOSS, it’s going to be more difficult.
-
-Start by unlearning what you think you know about startups. The toxic startup culture around venture capital and endless hyper-growth is more stressful, less likely to succeed, and socially irresponsible. Building a sustainable business responsibly takes time, careful planning, and hard work. The fast route — venture capital funded — is going to impose constraints on your business that will ultimately make it difficult to remain true to your open-source mission.
-
-And yes, you are building a business. You need to start thinking of your project as a business and of yourself as a business owner. This undertaking is going to require developing business skills in planning, budgeting, scheduling, resource allocation, marketing & sales, compliance, and more. At times, you will be forced to embrace your inner suit. Channel your engineering problem-solving skills into the business problems.
-
-So, you’ve got the right mindset. What are some business models that work?
-
-SourceHut, my company, has two revenue streams. We have a hosted SaaS product. It’s open source, and users can choose to deploy and maintain it themselves, or they can just buy a hosted account from us. The services are somewhat complex, so the managed offering saves them a lot of time. We have skilled sysops/sysadmins, support channels, and so on, for paying users. Importantly, we don’t have a free tier (but we do choose to provide free service to those who need it, at our discretion).
-
-=> https://sourcehut.org SourceHut
-
-Our secondary revenue stream is free software consulting. Our developers work part-time writing free and open-source software on contracts. We’re asked to help implement features upstream for various projects, or to develop new open-source applications or libraries, to share our expertise in operations, and so on, and charge for these services. This is different from providing paid support or development on our own projects — we accept contracts to work on any open source project.
-
-=> https://sourcehut.org/consultancy SourceHut's consultancy details
-
-The other approach to consulting is also possible: paid support and development on your own projects. If there are businesses that rely on your project, then you may be able to offer them support or develop new features or bugfixes that they need, on a paid basis. Projects with a large corporate userbase also sometimes do find success in donations — albeit rebranded as sponsorships. The largest projects often set up foundations to manage them in this manner.
-
-These are, in my experience, some of the most successful approaches to monetizing FOSS. You may have success with a combination of these, or with other business models as well. Remember to turn that engineering mind of yours towards the task of monetization, and experiment with and invent new ways of making money that best suit the kind of software you want to work on.
-
-Feel free to reach out if you have some questions or need a sounding board for your ideas. Good luck!
diff --git a/content/blog/A-few-ways-to-make-money-in-FOSS.md b/content/blog/A-few-ways-to-make-money-in-FOSS.md
@@ -1,7 +1,6 @@
---
title: A few ways to make money in FOSS
date: 2020-11-20
-outputs: [html, gemtext]
---
I work on free and open-source software full time, and I make a comfortable
diff --git a/content/blog/A-new-systems-language.gmi b/content/blog/A-new-systems-language.gmi
@@ -1,76 +0,0 @@
-It’s an open secret: the “secret project” I’ve been talking about is a new systems programming language. It’s been underway since December ‘19, and we hope to release the first version in early 2022. The language is pretty small — we have a mostly complete specification which clocks in at 60 pages. It has manual memory management, no runtime, and it uses a superset of the C ABI, making it easy to link with libraries and C code. It should be suitable almost anywhere C is useful: compilers, system utilities, operating systems, network servers and clients, and so on.
-
-```A "hello world" code sample
-use io;
-
-export fn main() void = {
- const greetings = [
- "Hello, world!",
- "¡Hola Mundo!",
- "Γειά σου Κόσμε!",
- "Привет мир!",
- "こんにちは世界!",
- ];
- for (let i = 0z; i < len(greetings); i += 1) {
- io::println(greetings[i]);
- };
-};
-```
-
-We could compare our language to many other languages, but let’s start with how it compares to C:
-
-* More robust error handling via tagged unions
-* Improved, Unicode-aware string support
-* Memory safe array, slice, and pointer types (and unsafe versions, if needed)
-* Direct compatibility with the C ABI for trivial C interop
-* A simpler, context-free, expression-oriented syntax
-* A standard library free of the constraints of POSIX or the C standard
-
-Our language currently supports Linux on x86_64 or aarch64, and we plan on expanding this to the BSDs, Haiku, and Plan 9; as well as i686, riscv64 and riscv32, and ppc64 before the release.
-
-I plan to continue keeping the other details a secret until the release — we want the first release to be a complete, stable, production-ready programming language with all of the trimmings. The first time most people will hear about this language will also be the first time they can ship working code with it.
-
-However, if you want to get involved sooner, there’s a way: we need your help. So far, we’ve written most of the spec, the first of two compilers, and about 15,000 lines of the standard library. The standard library is what needs the most help, and I’m seeking volunteers to get involved.
-
-The standard library mandate begins with the following:
-
-> The •••• standard library shall provide:
->
-> * Useful features to complement •••• language features
-> * An interface to the host operating system
-> * Implementations of broadly useful algorithms
-> * Implementations of broadly useful formats and protocols
-> * Introspective meta-features for ••••-aware programs
->
-> Each of these services shall:
->
-> * Have a concise and straightforward interface
-> * Correctly and completely implement the useful subset of the required behavior
-> * Provide complete documentation for each exported symbol
-> * Be sufficiently tested to provide confidence in the implementation
-
-We have a number of focus areas for standard library development. I expect most contributors, at least at first, to stick to one or two of these areas. The focus areas we’re looking into now are:
-
-Algorithms: Sorting • compression • math • etc
-
-Cryptography: Hashing • encryption • key derivation • TLS • etc
-
-Date & time support: Parsing • formatting • arithmetic • timers • etc
-
-Debugging tools: ELF and DWARF support • vDSO • dynamic loading • etc
-
-Formats & encodings: JSON • XML • Gemtext • MIME • RFC 2822 • tar • etc
-
-•••• language support: Parsing • type checker • hosted toolchain • etc
-
-Networking: IP & CIDR handling • sockets • DNS resolver • Gemini • etc
-
-Platform support: New platforms and architectures • OS-specific features
-
-String manipulation: Search, replace • Unicode • Regex • etc
-
-Unix support: chmod • mkfifo • passwd • setuid • TTY management • etc
-
-If any of this sounds up your alley, we’d love your help! Please write me an email describing your interest areas and previous systems programming experience.
-
-Update 2021-03-20: We're targeting the first release in early 2022, not 2021.
diff --git a/content/blog/A-new-systems-language.md b/content/blog/A-new-systems-language.md
@@ -1,7 +1,6 @@
---
title: We are building a new systems programming language
date: 2021-03-19
-formats: [html, gemtext]
---
It's an open secret: the "secret project" I've been talking about is a new
diff --git a/content/blog/A-philosophy-for-instant-messaging.gmi b/content/blog/A-philosophy-for-instant-messaging.gmi
@@ -1,67 +0,0 @@
-We use Internet Relay Chat (IRC) extensively at sourcehut for real-time group chats and one-on-one messaging. The IRC protocol is quite familiar to hackers, who have been using it since the late ’80s. As chat rooms have become more and more popular among teams of both hackers and non-hackers in recent years, I would like to offer a few bites of greybeard wisdom to those trying to figure out how to effectively use instant messaging for their own work.
-
-=> https://en.wikipedia.org/wiki/Internet_Relay_Chat Internet Relay Chat on Wikipedia
-
-For me, IRC is a vital communication tool, but many users of <insert current instant messaging software fad here>¹ find it frustrating, often to the point of resenting the fact that they have to use it at all. Endlessly catching up on discussions they missed, having their workflow interrupted by unexpected messages, searching for important information sequestered away in a discussion which happened weeks ago… it can be overwhelming and ultimately reduce your productivity and well-being. Why does it work for me, but not for them? To find out, let me explain how I think about and use IRC.
-
-The most important trait to consider when using IM software is that it is ephemeral, and must be treated as such. You should not “catch up” on discussions that you missed, and should not expect others to do so, either. Any important information from a chat room discussion must be moved to a more permanent medium, such as an email to a mailing list,² a ticket filed in a bug tracker, or a page updated on a wiki. One very productive use of IRC for me is holding a discussion to hash out the details of an issue, then writing up a summary for a mailing list thread where the matter is discussed in more depth.
-
-I don’t treat discussions on IRC as actionable until they are shifted to another mode of discussion. On many occasions, I have discussed an issue with someone on IRC, and once the unknowns are narrowed down and confirmed to be actionable, asked them to follow up with an email or a bug report. If the task never leaves IRC, it also never gets done. Many invalid or duplicate tasks are filtered out by this approach, and those which do get mode-shifted often have more detail than they otherwise might, which improves the signal-to-noise ratio on my bug trackers and mailing lists.
-
-I have an extensive archive of IRC logs dating back over 10 years, tens of gigabytes of gzipped plaintext files. I reference these logs perhaps only two or three times a year, and often for silly reasons, like finding out how many swear words were used over some time frame in a specific group chat, or to win an argument about who was the first person to say “yeet” in my logs. I almost never read more than a couple dozen lines of the backlog when starting up IRC for the day.
-
-Accordingly, you should never expect anyone to be in the know for a discussion they were not present at. This also affects how I use “highlights”.³ Whenever I highlight someone, I try to include enough context in the message so that they can understand why they were mentioned without having to dig through their logs, even if they receive the notification hours later.
-
-Bad:
-
-```
-<sircmpwn> minus: ping
-<sircmpwn> what is the best way to frob foobars?
-```
-
-Good:
-
-```
-<sircmpwn> minus: do you know how to frob foobars?
-```
-
-I will also occasionally send someone a second highlight un-pinging them if the question was resolved and their input is no longer needed. Sometimes I will send a vague “ping <username>” message when I actually want them to participate in the discussion right now, but if they don’t answer immediately then I will usually un-ping them later.⁴
-
-This draws attention to another trait of instant messaging: it is asynchronous. Not everyone is online at the same time, and we should adjust our usage of it in consideration of this. For example, when I send someone a private message, rather than expecting them to engage in a real-time dialogue with me right away, I dump everything I know about the issue for them to review and respond to in their own time. This could be hours later, when I’m not available myself!
-
-Bad:
-
-```
-<sircmpwn> hey emersion, do you have a minute?
-*8 hours later*
-<emersion> yes?
-*8 hours later*
-<sircmpwn> what is the best way to frob foobars?
-*8 hours later*
-<emersion> did you try mongodb?
-```
-
-Good:⁵
-
-```
-<sircmpwn> hey emersion, what's the best way to frob foobars?
-<sircmpwn> I thought about mongodb but they made it non-free
-*10 minutes later*
-<sircmpwn> update: considered redis, but I bet they're one bad day away from making that non-free too
-*8 hours later*
-<emersion> good question
-<emersion> maybe postgresql? they seem like a trustworthy bunch
-*8 hours later*
-<sircmpwn> makes sense. Thanks!
-```
-
-This also presents us a solution to the interruptions problem: just don’t answer right away, and don’t expect others to. I don’t have desktop or mobile notifications for IRC. I only use it when I’m sitting down at my computer, and I “pull” notifications from it instead of having it “push” them to me — that is, I glance at the client every now and then. If I’m in the middle of something, I don’t read it.
-
-With these considerations in mind, IRC has been an extraordinarily useful tool for me, and maybe it can be for you, too. I’m not troubled by interruptions to my workflow. I never have to catch up on a bunch of old messages. I can communicate efficiently and effectively with my team, increasing our productivity considerably, without worrying about an added source of stress. I hope that helps!
-
-¹ Many, many companies have tried, and failed, to re-invent IRC, usually within a proprietary walled garden. I offer my condolences if you find yourself using one of these.
-² Email is great. If you hate it you might be using it wrong.⁶
-³ IRC terminology for mentioning someone’s name to get their attention. Some platforms call this “mentions”.
-⁴ I occasionally forget to… apologies to anyone I’ve annoyed by doing that.
-⁵ I have occasionally annoyed someone with this strategy. If they have desktop notifications enabled, they might see 10 notifications while I fill their message buffer with more and more details about my question. Sounds like a “you” problem, buddy 😉
-=> https://useplaintext.email ⁶ useplaintext.email
diff --git a/content/blog/A-philosophy-for-instant-messaging.md b/content/blog/A-philosophy-for-instant-messaging.md
@@ -1,7 +1,6 @@
---
title: My philosophy for productive instant messaging
date: 2021-11-24
-outputs: [html, gemtext]
---
We use Internet Relay Chat ([IRC][0]) extensively at [sourcehut][1] for
diff --git a/content/blog/A-story-of-two-libcs.gmi b/content/blog/A-story-of-two-libcs.gmi
@@ -1,203 +0,0 @@
-I received a bug report from Debian today, who had fed some garbage into scdoc[0], and it gave them a SIGSEGV back. Diving into this problem gave me a good opportunity to draw a comparison between musl libc and glibc. Let's start with the stack trace:
-
-```
-==26267==ERROR: AddressSanitizer: SEGV on unknown address 0x7f9925764184
-(pc 0x0000004c5d4d bp 0x000000000002 sp 0x7ffe7f8574d0 T0)
-==26267==The signal is caused by a READ memory access.
- 0 0x4c5d4d in parse_text /scdoc/src/main.c:223:61
- 1 0x4c476c in parse_document /scdoc/src/main.c
- 2 0x4c3544 in main /scdoc/src/main.c:763:2
- 3 0x7f99252ab0b2 in __libc_start_main
-/build/glibc-YYA7BZ/glibc-2.31/csu/../csu/libc-start.c:308:16
- 4 0x41b3fd in _start (/scdoc/scdoc+0x41b3fd)
-```
-
-=> https://git.sr.ht/~sircmpwn/scdoc [0]: scdoc
-
-And if we pull up that line of code, we find...
-
-```
-if (!isalnum(last) || ((p->flags & FORMAT_UNDERLINE) && !isalnum(next))) {
-```
-
-Hint: p is a valid pointer. "last" and "next" are both uint32_t. The segfault happens in the second call to isalnum. And, the key: it can only be reproduced on glibc, not on musl libc. If you did a double-take, you're not alone. There's nothing here which could have caused a segfault.
-
-Since it was narrowed down to glibc, I pulled up the source code and went digging for the isalnum implementation, expecting some stupid bullshit. But before I get into their stupid bullshit, of which I can assure you there is *a lot*, let's briefly review the happy version. This is what the musl libc `isalnum` implementation looks like:
-
-```
-int isalnum(int c)
-{
- return isalpha(c) || isdigit(c);
-}
-
-int isalpha(int c)
-{
- return ((unsigned)c|32)-'a' < 26;
-}
-
-int isdigit(int c)
-{
- return (unsigned)c-'0' < 10;
-}
-```
-
-As expected, for any value of `c`, isalnum will never segfault. Because why the fuck would isalnum segfault? Okay, now, let's compare this to the glibc implementation[1]. When opening this header, you're greeted with the typical GNU bullshit, but let's trudge through and grep for isalnum.
-
-=> https://sourceware.org/git/?p=glibc.git;a=blob;f=ctype/ctype.h;h=351495aa4feaf23993fe65afc0760615268d044e;hb=HEAD [1]: The glibc implementation
-
-```
-enum
-{
- _ISupper = _ISbit (0), /* UPPERCASE. */
- _ISlower = _ISbit (1), /* lowercase. */
- // ...
- _ISalnum = _ISbit (11) /* Alphanumeric. */
-};
-```
-
-This looks like an implementation detail, let's move on.
-
-```
-__exctype (isalnum);
-```
-
-But what's `__exctype`? Back up the file a few lines...
-
-```
-#define __exctype(name) extern int name (int) __THROW
-```
-
-Okay, apparently that's just the prototype. Not sure why they felt the need to write a macro for that. Next search result...
-
-```
-#if !defined __NO_CTYPE
-# ifdef __isctype_f
-__isctype_f (alnum)
-// ...
-```
-
-Okay, this looks useful. What is `__isctype_f`? Back up the file now...
-
-```
-#ifndef __cplusplus
-# define __isctype(c, type) \
- ((*__ctype_b_loc ())[(int) (c)] & (unsigned short int) type)
-#elif defined __USE_EXTERN_INLINES
-# define __isctype_f(type) \
- __extern_inline int \
- is##type (int __c) __THROW \
- { \
- return (*__ctype_b_loc ())[(int) (__c)] & (unsigned short int) _IS##type; \
- }
-#endif
-```
-
-Oh.... oh dear. It's okay, we'll work through this together. Let's see, `__isctype_f` is some kind of inline function... wait, this is the else branch of `#ifndef __cplusplus`. Dead end. Where the fuck is isalnum *actually* defined? Grep again... okay... here we are?
-
-```
-#if !defined __NO_CTYPE
-# ifdef __isctype_f
-__isctype_f (alnum)
-// ...
-# elif defined __isctype
-# define isalnum(c) __isctype((c), _ISalnum) // <- this is it
-```
-
-Hey, there's that implementation detail from earlier! Remember this?
-
-```
-enum
-{
- _ISupper = _ISbit (0), /* UPPERCASE. */
- _ISlower = _ISbit (1), /* lowercase. */
- // ...
- _ISalnum = _ISbit (11) /* Alphanumeric. */
-};
-```
-
-Let's suss out that macro real quick:
-
-```
-# include <bits/endian.h>
-# if __BYTE_ORDER == __BIG_ENDIAN
-# define _ISbit(bit) (1 << (bit))
-# else /* __BYTE_ORDER == __LITTLE_ENDIAN */
-# define _ISbit(bit) ((bit) < 8 ? ((1 << (bit)) << 8) : ((1 << (bit)) >> 8))
-# endif
-```
-
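-If you work it through by hand (my arithmetic, not anything glibc documents), the magic number shakes out like so:
-
-```
-/* _ISbit(11), i.e. _ISalnum:
- *   big-endian:    1 << 11             = 0x0800
- *   little-endian: 11 >= 8, therefore
- *                  (1 << 11) >> 8      = 0x0008
- * The flag is pre-swapped so that the same byte in the ctype table
- * tests the same property regardless of host byte order. */
-```
-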
-Oh, for fuck's sake. Whatever, let's move on and just assume this is a magic number. The other macro is `__isctype`, which is similar to the `__isctype_f` we were just looking at a moment ago. Let's go look at that `ifndef __cplusplus` branch again:
-
-```
-#ifndef __cplusplus
-# define __isctype(c, type) \
- ((*__ctype_b_loc ())[(int) (c)] & (unsigned short int) type)
-#elif defined __USE_EXTERN_INLINES
-// ...
-#endif
-```
-
-...
-
-Well, at least we have a pointer dereference now, that could explain the segfault. What's `__ctype_b_loc`?
-
-```
-/* These are defined in ctype-info.c.
- The declarations here must match those in localeinfo.h.
-
- In the thread-specific locale model (see `uselocale' in <locale.h>)
- we cannot use global variables for these as was done in the past.
- Instead, the following accessor functions return the address of
- each variable, which is local to the current thread if multithreaded.
-
- These point into arrays of 384, so they can be indexed by any `unsigned
- char' value [0,255]; by EOF (-1); or by any `signed char' value
- [-128,-1). ISO C requires that the ctype functions work for `unsigned
- char' values and for EOF; we also support negative `signed char' values
- for broken old programs. The case conversion arrays are of `int's
- rather than `unsigned char's because tolower (EOF) must be EOF, which
- doesn't fit into an `unsigned char'. But today more important is that
- the arrays are also used for multi-byte character sets. */
-extern const unsigned short int **__ctype_b_loc (void)
- __THROW __attribute__ ((__const__));
-extern const __int32_t **__ctype_tolower_loc (void)
- __THROW __attribute__ ((__const__));
-extern const __int32_t **__ctype_toupper_loc (void)
- __THROW __attribute__ ((__const__));
-```
-
-That is just so, super cool of you, glibc. I just *love* dealing with locales. Anyway, my segfaulted process is sitting in gdb, and equipped with all of this information I wrote the following monstrosity:
-
-```
-(gdb) print ((unsigned short int **(*)(void))__ctype_b_loc)()[0][next]
-Cannot access memory at address 0x11dfa68
-```
-
-Segfault found. Reading that comment again, we see "ISO C requires that the ctype functions work for 'unsigned char' values and for EOF". If we cross-reference that with the specification:
-
-> In all cases [of functions defined by ctype.h,] the argument is an int, the value of which shall be representable as an unsigned char or shall equal the value of the macro EOF.
-
-So the fix is obvious at this point. Okay, fine, my bad. My code is wrong. I apparently cannot just hand a UTF-32 codepoint to isalnum and expect it to tell me if it's between 0x30-0x39, 0x41-0x5A, or 0x61-0x7A.
-
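-For the record, the corrected check looks something like this; a sketch of the idea, not necessarily the patch that scdoc shipped:
-
-```
-#include <ctype.h>
-#include <stdbool.h>
-#include <stdint.h>
-
-/* Only consult ctype once the codepoint is known to be representable
- * as an unsigned char; the parser only cares about ASCII anyway. */
-bool utf32_isalnum(uint32_t cp) {
-	return cp < 0x80 && isalnum((int)cp);
-}
-```
-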
-But, I'm going to go out on a limb here: maybe isalnum should never cause a program to segfault no matter what input you give it. Maybe because the spec says you *can* does not mean you *should*. Maybe, just maybe, the behavior of this function should not depend on five macros, whether or not you're using a C++ compiler, the endianness of your machine, a look-up table, thread-local storage, and two pointer dereferences.
-
-Here's the musl version as a quick reminder:
-
-```
-int isalnum(int c)
-{
- return isalpha(c) || isdigit(c);
-}
-
-int isalpha(int c)
-{
- return ((unsigned)c|32)-'a' < 26;
-}
-
-int isdigit(int c)
-{
- return (unsigned)c-'0' < 10;
-}
-```
-
-Bye!
diff --git a/content/blog/A-story-of-two-libcs.md b/content/blog/A-story-of-two-libcs.md
@@ -1,7 +1,6 @@
---
title: A tale of two libcs
date: 2020-09-25
-outputs: ["html", "gemtext"]
---
I received a bug report from Debian today, who had fed some garbage into
diff --git a/content/blog/Analytics-and-informed-consent.gmi b/content/blog/Analytics-and-informed-consent.gmi
@@ -1,17 +0,0 @@
-Research conducted on human beings, at least outside of the domain of technology, has to meet a minimum standard of ethical reasoning called informed consent. Details vary, but the general elements of informed consent are:
-
-* Disclosure of the nature and purpose of the research and its implications (risks and benefits) for the participant, and the confidentiality of the collected information.
-* An adequate understanding of these facts on the part of the participant, requiring an accessible explanation in lay terms and an assessment of understanding.
-* The participant must exercise voluntary agreement, without coercion or fear of repercussions (e.g. not being allowed to use your website).
-
-So, I pose the following question: if your analytics script wouldn’t pass muster at your university’s ethics board, then what the hell is it doing on your website? Can we not meet this basic minimum standard of ethical decency and respect for our users?
-
-Opt-out is not informed consent. Manually unticking dozens of third-party trackers from a cookie pop-up is not informed consent. “By continuing to use this website, you agree to…” is not informed consent. “Install uBlock Origin” is not informed consent.
-
-I don’t necessarily believe that ethical user tracking is impossible, but I know for damn sure that most of these “pro-privacy” analytics solutions which have been cropping up in the wake of the GDPR don’t qualify, either.
-
-Our industry’s fundamental failure to respect users, deliberately mining their data without consent and without oversight for profit, is the reason why we’re seeing legal crackdowns in the form of the GDPR and similar legislation. Our comeuppance is well-earned, and I hope that the regulators give it teeth in enforcement. The industry response — denial and looking for ways to weasel out of these ethical obligations — is a strategy on borrowed time. The law is not a computer program, and it is not executed by computers: it is executed by human beings who can see through your horseshit. You’re not going to be able to seek out some narrow path you can walk to skirt the regulations and keep spying on people.
-
-You're going to stop spying on people.
-
-P.S. If you still want the data you might get from analytics without compromising on ethics, here’s an idea: compensate users for their participation in your research. Woah, what a wild idea! That’s not very growth hacker of you, Drew.
diff --git a/content/blog/Analytics-and-informed-consent.md b/content/blog/Analytics-and-informed-consent.md
@@ -1,7 +1,6 @@
---
title: Web analytics should at least meet the standards of informed consent
date: 2020-12-04
-outputs: [html, gemtext]
---
Research conducted on human beings, at least outside of the domain of
diff --git a/content/blog/Anime-recommendation-fate.gmi b/content/blog/Anime-recommendation-fate.gmi
@@ -1,12 +0,0 @@
-The following clip constitutes my entire review of the Fate franchise as produced by Ufotable:
-
-=> /ufotable.webm 1.6 second webm video, no audio
-
-If you wish to watch the Ufotable take on the Nasuverse, I would recommend the following watch order:
-
-* Kara no Kyoukai
-* Fate/Stay Night: Unlimited Blade Works
-* Fate/Zero
-* Fate/Stay Night: Heaven's Feel
-
-Enjoy.
diff --git a/content/blog/Anime-recommendation-fate.md b/content/blog/Anime-recommendation-fate.md
@@ -1,6 +0,0 @@
----
-title: "Anime recommendation: Fate"
-date: 2022-03-31
-outputs: [gemtext]
-nohtml: true
----
diff --git a/content/blog/Anime-recommendation-gatari.gmi b/content/blog/Anime-recommendation-gatari.gmi
@@ -1,17 +0,0 @@
-I enjoy posting more casual, slower-paced content on my Gemlog, in contrast with the typically serious blog posts I do on the web. Following this theme, I'd like to recommend an anime for you today: the *Monogatari series. It begins with Bakemonogatari (化物語), and ends more than 40 hours later with Zoku Owarimonogatari (続・終物語).
-
-The Monogatari series is a complex, character-driven show. It has a huge cast — more than two dozen named characters with plot relevance — and focuses on the supernatural events surrounding one Ararararagi Koyomi's late high school career. The supernatural setting is interesting and carefully developed, leaving much of its nature a mystery and only slowly revealing the depths of its lore over the course of the show. This lore pulls a mix of European and Japanese myths from the past 500 years or so into a modern setting, including vampires, ghosts, spirits and gods, each with deep stories to unravel.
-
-These stories and the characters that inhabit them are the main course in Monogatari. Every one of these dozens of characters is fleshed out with complex motivations and relationships with the rest of the cast, explored mainly through dialogue. Monogatari is somewhat infamous for this, as perhaps a full three-quarters of its airtime is spent on dialogue, fast-paced and heavily laced with Japanese puns, wordplay, and cultural references, and supplemented with entire paragraphs of text plastered over the screen for 3 or 4 frames for the dedicated viewer to pause and read in their own time. I liked this show when I first watched it, but I have to admit that its transcendental status in my personal canon was significantly aided by the fact that I was equipped with a much better working knowledge of the Japanese language on my latest rewatch.
-
-As these characters negotiate the plot scenarios, interacting with each other and fleshing out the viewer's knowledge and understanding of the situation and those involved, their journey culminates in action set-pieces placed sparingly throughout the runtime. Monogatari will spend an hour and a half in dialogue scenes, delivering exposition at one hundred miles an hour, to set the stage for a single fight scene, with reserved, suspenseful pacing and the full weight of context afforded by the drama that leads to each climax. None of Shaft's famously large budget is spared for these scenes.
-
-The show is capped off in the last few seasons with satisfying, though often melancholic, conclusions for every arc, providing a complete ending to an incredibly complex story. Each arc is set within a larger arc for its main characters, each of those meta-arcs fits within the broader narrative of the entire franchise, and every layer of this structure is fleshed out to completion and tied off with a bow. If you've ever been frustrated with an anime which peters off after two seasons, leaving all of its questions unanswered, then Monogatari will provide the catharsis you crave.
-
-Though this review gives you an (accurate) impression of the depth and solemnity of this show's themes, lore, characters, relationships, romance, and stories, I don't want you to leave with the impression that watching Monogatari is tedious or dull. Monogatari is hilarious! The studio did an excellent job of balancing the show's seriousness with humor throughout, not to mention a generous allocation of fanservice, excellent music, and talented voice actors.
-
-While Monogatari may be a bit too heavy for the new anime fan, I would readily recommend it to anyone who enjoys the medium. /r/anime has a good guide on the watch order for various anime for those who arrive late:
-
-=> https://old.reddit.com/r/anime/wiki/watch_order#wiki_-monogatari_.2F_bakemonogatari Watch order wiki
-
-The order for the *monogatari series is: Bakemonogatari → Nisemonogatari → Nekomonogatari: Kuro → Monogatari Series: Second Season → Hanamonogatari → Tsukimonogatari → Owarimonogatari → Kizumonogatari I: Tekketsu-hen → Kizumonogatari II: Nekketsu-hen → Kizumonogatari III: Reiketsu-hen → Koyomimonogatari → Owarimonogatari Second Season → Zoku Owarimonogatari.
diff --git a/content/blog/Anime-recommendation-gatari.md b/content/blog/Anime-recommendation-gatari.md
@@ -1,5 +0,0 @@
----
-title: "Anime recommendation: *Monogatari"
-date: 2021-03-17
-outputs: [gemtext]
----
diff --git a/content/blog/Better-than-DuckDuckGo.gmi b/content/blog/Better-than-DuckDuckGo.gmi
@@ -1,27 +0,0 @@
-DuckDuckGo is one of the long-time darlings of the technophile’s pro-privacy make-the-switch recommendations, and in fact the search engine that I use myself on the daily. They certainly present a more compelling option than many of the incumbents, like Google or Bing. Even so, it’s not good enough, and we ought to do better.
-
-I have three grievances with DuckDuckGo, one of which is shared with its competitors:
-
-* It’s not open source. Almost all of DDG’s software is proprietary, and they’ve demonstrated gross incompetence in privacy in what little software they have made open source. Who knows what else is going on in the proprietary code?
-* DuckDuckGo is not a search engine. It’s more aptly described as a search engine frontend. They do handle features like bangs and instant answers internally, but their actual search results come from third-parties like Bing. They don’t operate a crawler for their search results, and are not independent.
-* The search results suck! The authoritative sources for anything I want to find are almost always buried beneath 2-5 results from content scrapers and blogspam. This is also true of other search engines like Google. Search engines are highly vulnerable to abuse and they aren’t doing enough to address it.
-
-There are some FOSS attempts to do better here, but they all fall flat. searX is also a false search engine — that is, they serve someone else’s results. YaCy has its own crawler, but the distributed design makes results intolerably slow, poor quality, and vulnerable to abuse, and it’s missing strong central project leadership.
-
-We need a real, working FOSS search engine, complete with its own crawler.
-
-Here’s how I would design it.
-
-First, YaCy-style decentralization is way too hard to get right, especially when a search engine project already has a lot of Very Hard problems to solve. Federation is also very hard in this situation — queries will have to consult most instances in order to get good quality results, or a novel sharding algorithm will have to be designed, and either approach will have to be tolerant of nodes appearing and disappearing at any time. Not to mention it’d be slow! Several unsolved problems with federation and decentralization would have to be addressed on top of building a search engine in the first place.
-
-So, a SourceHut-style approach is better. 100% of the software would be free software, and third parties would be encouraged to set up their own installations. It would use standard protocols and formats where applicable, and accept patches from the community. However, the database would still be centralized, and even if programmable access were provided, it would not be with an emphasis on decentralization or shared governance. It might be possible to design tools which help third-parties bootstrap their indexes, and create a community of informal index sharing, but that’s not the focus here.
-
-It would also need its own crawler, and probably its own indexer. I’m not convinced that any of the existing FOSS solutions in this space are quite right for this problem. Crucially, I would not have it crawling the entire web from the outset. Instead, it should crawl a whitelist of domains, or “tier 1” domains. These would be limited mainly to authoritative or high-quality sources for their respective specializations, and would be weighed upwards in search results. Pages that these sites link to would be crawled as well, and given tier 2 status, recursively up to an arbitrary N tiers. Users who want to find, say, a blog post about a subject rather than the documentation on that subject, would have to be more specific: "$subject blog posts".
-
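-To illustrate the tiering rule with a sketch of my own (made-up domain names, no real fetching or parsing):
-
-```
-#include <stdio.h>
-#include <string.h>
-
-#define MAX_DOMAINS 128
-#define MAX_TIER 3 /* the arbitrary N */
-
-static char domains[MAX_DOMAINS][64];
-static int tiers[MAX_DOMAINS];
-static int ndomains;
-
-static int tier_of(const char *domain) {
-	for (int i = 0; i < ndomains; i++)
-		if (strcmp(domains[i], domain) == 0)
-			return tiers[i];
-	return -1; /* not yet discovered */
-}
-
-/* Called for each outbound link found while crawling a page on 'from';
- * newly discovered domains inherit the source's tier plus one, and
- * links from pages at the deepest tier are not followed at all. */
-static void on_link(const char *from, const char *to) {
-	int from_tier = tier_of(from);
-	if (from_tier < 0 || from_tier >= MAX_TIER ||
-			tier_of(to) >= 0 || ndomains >= MAX_DOMAINS)
-		return;
-	snprintf(domains[ndomains], sizeof(domains[0]), "%s", to);
-	tiers[ndomains++] = from_tier + 1;
-}
-
-int main(void) {
-	strcpy(domains[0], "whitelisted.example.org"); /* tier 1 by fiat */
-	tiers[0] = 1;
-	ndomains = 1;
-	on_link("whitelisted.example.org", "linked.example.net");
-	printf("linked.example.net is tier %d\n",
-		tier_of("linked.example.net"));
-	return 0;
-}
-```
-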
-An advantage of this design is that it would be easy for anyone to take the software stack and plop it on their own servers, with their own whitelist of tier 1 domains, to easily create a domain-specific search engine. Independent groups could create search engines which specialize in academia, open standards, specific fandoms, and so on. They could tweak their precise approach to indexing, tokenization, and so on to better suit their domain.
-
-We should also prepare the software to boldly lead the way on new internet standards: crawling and indexing non-HTTP data sources (Gemini? Man pages? Linux distribution repositories?), supporting non-traditional network stacks (Tor? Yggdrasil? cjdns?) and third-party name systems (OpenNIC?), and anything else we could use our influence to give a leg up.
-
-There’s a ton of potential in this domain which is just sitting on the floor right now. The main problem is: who’s going to pay for it? Advertisements or paid results are not going to fly — conflict of interest. Private, paid access to search APIs or index internals is one opportunity, but it’s kind of shit and I think that preferring open data access and open APIs would be exceptionally valuable for the community.
-
-If SourceHut eventually grows in revenue — at least 5-10× its present revenue — I intend to sponsor this as a public benefit project, with no plans for generating revenue. I am not aware of any monetization approach for a search engine which squares with my ethics and doesn’t fundamentally undermine the mission. So, if no one else has figured it out by the time we have the resources to take it on, we’ll do it.
diff --git a/content/blog/Better-than-DuckDuckGo.md b/content/blog/Better-than-DuckDuckGo.md
@@ -1,7 +1,6 @@
---
title: We can do better than DuckDuckGo
date: 2020-11-17
-outputs: [html, gemtext]
---
[DuckDuckGo](https://duckduckgo.com) is one of the long-time darlings of the
diff --git a/content/blog/Celeste.gmi b/content/blog/Celeste.gmi
@@ -1,17 +0,0 @@
-Celeste is my favorite game by far. There are many games that I've liked and loved over the years, but Celeste is in a class of its own.
-
-I played Celeste in two cours. My first playthrough got me through all of the A sides in something like 20 hours. I died... many, many times. The gameplay is very good - challenging, but tight - and the story was unexpectedly deep and thoughtful, addressing themes I haven't seen told so well in most media, let alone in games. The art and soundtrack are great, too. Summitting Celeste mountain was very, very difficult, but well worth it. However, I was exhausted by the time I got there, and I set the game aside without doing any of the optional content. I heard later that the Farewell expansion was released, but I left it unplayed given that I hadn't attempted any of the base game's more difficult optional challenges, either.
-
-A few years went by before I thought about Celeste again this December. By then I had switched all of my primary computers to an unconventional Linux system based on musl libc (Alpine Linux - I highly recommend it), which works great for my workstation. But most games struggle to support mainstream Linux on glibc, let alone Alpine, so I have few options in that respect. I had been gradually playing fewer and fewer games over the past several years. For a while, aside from the occasional round of Nethack or Armagetron Advanced (both open source), I haven't played many games.
-
-Then, this December, a friend and I were talking about which games might be able to run on Alpine. With knowledge of C# from many of my earlier projects, I knew that getting Celeste working on Alpine would be within the realm of possibility, and had done some incomplete research to get a feel for what would be involved. My friend took up the task and did most of the real work to get it up and running - and shared the patched binaries among our social circle. In the three months since then, I have played 132 hours of Celeste.
-
-I have 100%'d the game, though I have yet to collect all of the Golden Strawberries. I bought the collector's edition, just to own the memorabilia, and I bought my sister a copy for her Switch. I also got into some of the mods, and finished the first three Spring Collab 2020 mod lobbies. I finally completed Farewell - eight times, including the secret path(s) - plus all of the B and C sides, of course.
-
-Tonight, I baked the strawberry pie recipe which came with the collector's edition:
-
-=> /pie.jpg A picture of my pie
-
-The verdict: it's one of the best fucking foods I've ever tasted.
-
-I really love this game. You should seriously give it a try if you haven't already. If you bought the itch.io racial justice bundle, you already own it. If you don't already own it, it's well worth the $20. Plus the OST on Bandcamp (available in FLAC!), which you're probably also going to end up buying.
diff --git a/content/blog/Celeste.md b/content/blog/Celeste.md
@@ -1,5 +0,0 @@
----
-title: Celeste
-date: 2021-03-01
-outputs: [gemtext]
----
diff --git a/content/blog/Come-build-your-project.gmi b/content/blog/Come-build-your-project.gmi
@@ -1,17 +0,0 @@
-Note: we have enough projects on board now. Keep an eye on the blog, I'll publish another announcement when we're ready for more.
-
-Do you have a new systems programming project on your todo list? If you're feeling adventurous, I would like you to give it a crack in our new systems programming language, and to use it to drive improvements in the less-developed areas of our standard library.
-
-Are you making a new coreutils implementation? A little OS kernel? A new shell? A GUI toolkit? Database system? Web server? Whatever your systems programming use-case, we think that our language is likely to be a good fit for you, and your help in proving that, and spurring development to rise to meet your needs, would be quite welcome.
-
-Here's our pitch:
-
-> XXXX is a systems programming language designed to be simple and robust. XXXX uses a static type system, manual memory management, and a minimal runtime. It is well-suited to writing operating systems, system tools, compilers, networking software, and other low-level, high performance tasks.
-
-You can get a peek at how it feels by reading about the finger server I wrote with it:
-
-=> gemini://drewdevault.com/2021/05/24/io_uring-finger-server.gmi Using io_uring to make a high-performance... finger server
-
-Sounds interesting? Please tell me about your project idea!
-
-=> mailto:sir@cmpwn.com Email me!
diff --git a/content/blog/Come-build-your-project.md b/content/blog/Come-build-your-project.md
@@ -1,7 +1,6 @@
---
title: Build your project in our new language
date: 2021-05-30
-outputs: [html, gemtext]
---
Do you have a new systems programming project on your todo list? If you're
diff --git a/content/blog/Copying-aint-stealing.gmi b/content/blog/Copying-aint-stealing.gmi
@@ -1,31 +0,0 @@
-Copying ain't stealing,
-sure as I'm breathing,
-information wants to be free.
-
-The mafia wants us to believe
-youtube-dl was made for thieves.
-In fact,
-it was made for human beings.
-
-We might listen to your pleas,
-once you respect our machines,
-they belong to us - not you.
-
-DRM? EME?
-Your drivers are mean,
-your blobs illicit,
-the hardware complicit,
-and yet--
-it's never stopped piracy.
-
-The real caper is your crimes.
-The theft of OUR public domain.
-DMCA abuse, attacks on fair use.
-Net neutrality and regulatory capture.
-
-Sell it to us fair and square,
-your terms are too much to bear,
-and until you're playing fair,
-fuck off!
-
-Yo ho, yo ho. A pirate's life for me!
diff --git a/content/blog/Copying-aint-stealing.md b/content/blog/Copying-aint-stealing.md
@@ -1,5 +0,0 @@
----
-title: Copying ain't stealing
-date: 2020-11-12
-outputs: [gemtext]
----
diff --git a/content/blog/Corporate-surveillance-murder.gmi b/content/blog/Corporate-surveillance-murder.gmi
@@ -1,15 +0,0 @@
-I have never been angrier about the corporate surveillance complex, which I have rallied against for years, than I am today. Buying and selling users’ private information on the open market is bad enough for the obvious reasons, but today, I learned that the depths of depravity this market will descend to are without limit. Today I am more angry and ashamed at this industry than I have ever been. Corporate surveillance and adtech have turned your phone into an informant against you and brought about the actual murder of the user.
-
-=> https://www.vice.com/en/article/y3g97x/location-data-apps-drone-strikes-iowa-national-guard Vice: Military Unit That Conducts Drone Strikes Bought Location Data From Ordinary Apps
-
-Say you’re a Muslim. You download some apps for reading the Quran and participating in Muslim-oriented social networks. These apps steal whatever personal information they can get their hands on, through any means available, and sell it to Locate X, which stores every GPS location your phone has visited and tags it as being associated with a Muslim. This is used, say, to place Muslim-targeted ads on billboards in Muslim-dense areas. It’s also sold to the Iowa National Guard, who uses it to conduct drone strikes. The app you installed is selling your GPS data so it can be used to kill you.
-
-For a long time, I have preached “respect the user”. I want us, as programmers, to treat the user with the same standards of common decency and respect we’d afford to our neighbors. It seems I have to revise my sermon to “don’t murder the user”! If you work at a company which surveils its users, you are complicit in these murders. You have written software which is used to murder people.
-
-=> https://drewdevault.com/2020/05/05/We-are-complicit-in-our-employers-deeds.html We are complicit in our employer's deeds
-
-This industry is in severe need of a moral health check. You, the reader of this article, need to take personal responsibility for what your code is doing. Your boss isn’t going to. Do you really know what that database is being used for, or who it’s being sold to, or who it might be sold to in the future? Most companies include their hoard of private, personal information about their users as part of their valuation. Do you have stock options, by the way?
-
-I’ve often heard the excuse that employees of large surveillance companies “want to feed their families, like anyone else”. Well, thanks to your work, a child you’ve never met was orphaned, and doesn’t have a family anymore. Who’s going to feed them? Is there really no other way for you to support your family?
-
-Don’t fucking kill your users.
diff --git a/content/blog/Corporate-surveillance-murder.md b/content/blog/Corporate-surveillance-murder.md
@@ -1,7 +1,6 @@
---
title: The corporate surveillance machine is killing people
date: 2021-03-06
-outputs: [html, gemtext]
---
I have never been angrier about the corporate surveillance complex, which I have
diff --git a/content/blog/Cryptocurrency-is-a-disaster.gmi b/content/blog/Cryptocurrency-is-a-disaster.gmi
@@ -1,37 +0,0 @@
-This post is long overdue. Let’s get it over with.
-
-🛑 Hey! If you write a comment about this article online, disclose your stake in cryptocurrency. I will explain why later in this post. For my part, I held <$10,000 USD worth of Bitcoin prior to 2016, plus small amounts of altcoins. Today my stake in all cryptocurrency is $0.
-
-Starting on May 1st, users of sourcehut’s CI service will be required to be on a paid account, a change which will affect about half of all builds.sr.ht users.¹ Over the past several months, everyone in the industry who provides any kind of free CPU resources has been dealing with a massive outbreak of abuse for cryptocurrency mining. The industry has been setting up informal working groups to pool knowledge of mitigations and to communicate when our platforms are being leveraged against one another, and has cumulatively wasted thousands of hours of engineering time implementing measures to deal with this abuse and responding as attackers find new ways to circumvent them.
-
-Cryptocurrency has invented an entirely new category of internet abuse. CI services like mine are not alone in this struggle: JavaScript miners, botnets, and all kinds of other illicit cycles are being spent solving pointless math problems to make money for bad actors. Some might argue that abuse is inevitable for anyone who provides a public service — but prior to cryptocurrency, what kind of abuse would a CI platform endure? Email spam? Block port 25. Someone might try to host their website on ephemeral VMs with dynamic DNS or something, I dunno. Someone found a way of monetizing stolen CPU cycles directly, so everyone who offered free CPU cycles for legitimate use-cases is now unable to provide those services. If not for cryptocurrency, these services would still be available.
-
-Don’t make the mistake of thinking that these are a bunch of script kiddies. There are large, talented teams of engineers across several organizations working together to combat this abuse, and they’re losing. A small sample of tactics I’ve seen or heard of include:
-
-* Using CPU limiters to manipulate monitoring tools.
-* Installing crypto miners into the build systems for free software projects so that the builds appear legitimate.
-* Using password dumps to steal login credentials for legitimate users and then leveraging their accounts for mining.
-
-I would give more examples, but secrecy is a necessary part of defending against this — which really sucks for an organization that otherwise strives to be as open and transparent as sourcehut does.
-
-Cryptocurrency problems are more subtle than outright abuse, too. The integrity and trust of the entire software industry has sharply declined due to cryptocurrency. It sets up perverse incentives for new projects, where developers are no longer trying to convince you to use their software because it’s good, but because they think that if they can convince you it will make them rich. I’ve had to develop a special radar for reading product pages now: a mounting feeling of dread as a promising technology is introduced while I inevitably arrive at the buried lede: it’s more crypto bullshit. Cryptocurrency is the multi-level marketing of the tech world. “Hi! How’ve you been? Long time no see! Oh, I’ve been working on this cool distributed database file store archive thing. We’re doing an ICO next week.” Then I leave. Any technology which is not an (alleged) currency and which incorporates blockchain anyway would always work better without it.
-
-There are hundreds, perhaps thousands, of cryptocurrency scams and Ponzi schemes trussed up to look like some kind of legitimate offering. Even if the project you’re working on is totally cool and solves all of these problems, there are 100 other projects pretending to be like yours which are ultimately concerned with transferring money from their users to their founders. Which one are investors more likely to invest in? Hint: it’s the one that’s more profitable. Those promises of “we’re different!” are always hollow anyway. Remember the DAO? They wanted to avoid social arbitration entirely for financial contracts, but when the chips were down and their money was walking out the door, they forked the blockchain.
-
-That’s what cryptocurrency is all about: not novel technology, not empowerment, but making money. It has failed as an actual currency outside of some isolated examples of failed national economies. No, cryptocurrency is not a currency at all: it’s an investment vehicle. A tool for making the rich richer. And that’s putting it nicely; in reality it has a lot more in common with a Ponzi scheme than a genuine investment. What “value” does solving fake math problems actually provide to anyone? It’s all bullshit.
-
-And those few failed economies whose people are desperately using cryptocurrency to keep the wheel of their fates spinning? Those make for a good headline, but how about the rural communities whose tax dollars subsidized the power plants which the miners have flocked to? People who are suffering blackouts as their power is siphoned into computing SHA-256 as fast as possible while dumping an entire country’s worth of CO₂ into the atmosphere?² No, cryptocurrency does not help failed states. It exploits them.
-
-Even those in the (allegedly) working economies of the first world have been impacted by cryptocurrency. The prices of consumer GPUs have gone up sharply in the past few months. And, again, what are these GPUs being used for? Running SHA-256 in a loop, as fast as possible. Rumor has it that hard drives are up next.
-
-Maybe your cryptocurrency is different. But look: you’re in really poor company. When you’re the only honest person in the room, maybe you should be in a different room. It is impossible to trust you. Every comment online about cryptocurrency is tainted by the fact that the commenter has probably invested thousands of dollars into a Ponzi scheme and is depending on your agreement to make their money back. Not to mention that any attempts at reform, like proof-of-stake, are viciously blocked by those in power (i.e. those with the money) because of any risk it poses to reduce their bottom line. No, your blockchain is not different.
-
-Cryptocurrency is one of the worst inventions of the 21st century. I am ashamed to share an industry with this exploitative grift. It has failed to be a useful currency, invented a new class of internet abuse, further enriched the rich, wasted staggering amounts of electricity, hastened climate change, ruined hundreds of otherwise promising projects, provided a climate for hundreds of scams to flourish, created shortages and price hikes for consumer hardware, and injected perverse incentives into technology everywhere. Fuck cryptocurrency.
-
-## A personal note
-
-This rant has been a long time coming and is probably one of the most justified expressions of anger I've written for this blog yet. However, it will probably be the last one.
-
-I realize that my blog has been a source of a lot of negativity in the past, and I regret how harsh I've been with some of the projects I've criticized. I will make my arguments by example going forward: if I think we can do better, I'll do it better, instead of criticizing those who are just earnestly trying their best.
-
-Thanks for reading 🙂 Let's keep making the software world a better place.
diff --git a/content/blog/Cryptocurrency-is-a-disaster.md b/content/blog/Cryptocurrency-is-a-disaster.md
@@ -1,8 +1,6 @@
---
title: Cryptocurrency is an abject disaster
date: 2021-04-26
-outputs: [html, gemtext]
-nocomment: true
---
This post is long overdue. Let's get it over with.
diff --git a/content/blog/DCO.gmi b/content/blog/DCO.gmi
@@ -1,32 +0,0 @@
-Today Amazon released their fork of ElasticSearch, OpenSearch, and I want to take a moment to draw your attention to one good decision in particular: its use of the Developer Certificate of Origin (or “DCO”).
-
-Previously:
-
-=> https://drewdevault.com/2021/01/19/Elasticsearch-does-not-belong-to-Elastic.html ElasticSearch does not belong to Elastic
-=> gemini://drewdevault.com/2021/01/20/FOSS-is-to-surrender-your-monopoly.gmi Open source means surrendering your monopoly over commercial exploitation
-=> https://drewdevault.com/2018/10/05/Dont-sign-a-CLA.html Don’t sign a CLA
-
-Elastic betrayed its community when they changed to a proprietary license. We could have seen it coming because of a particular trait of their contribution process: the use of a Contributor License Agreement, or CLA. In principle, a CLA aims to address legitimate concerns of ownership and copyright, but in practice, they are a promise that one day the stewards of the codebase will take your work and relicense it under a nonfree license. And, ultimately, this is exactly what Elastic did, and exactly what most other projects which ask you to sign a CLA are planning to do. If you ask me, that’s a crappy deal, and I refrain from contributing to those projects as a result.
-
-However, there are some legitimate questions of ownership which a project owner might rightfully wish to address before accepting a contribution. As is often the case, we can look to git itself for an answer to this problem. Git was designed for the Linux kernel, and patch ownership is a problem they faced and solved a long time ago. Their answer is the Developer Certificate of Origin, or DCO, and tools for working with it are already built into git.
-
-git provides the -s flag for git commit, which adds the following text to your commit message:
-
-```
-Signed-off-by: Drew DeVault <sir@cmpwn.com>
-```
-
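-For example, with a hypothetical commit message (only the -s flag matters here):
-
-```
-git commit -s -m "parser: fix out-of-bounds read"
-```
-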
-The specific meaning varies from project to project, but it is usually used to indicate that you have read and agreed to the DCO, which reads as follows:
-
-> By making a contribution to this project, I certify that:
->
-> 1. The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or
-> 2. The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or
-> 3. The contribution was provided directly to me by some other person who certified (1), (2) or (3) and I have not modified it.
-> 4. I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved.
-
-This neatly answers all concerns of copyright. You license your contribution under the original license (Apache 2.0 in the case of OpenSearch), and attest that you have sufficient ownership over your changes to do so. You retain your copyright and you don’t leave the door open for the maintainers to relicense your work under some other terms in the future. This offers the maintainers the same rights that they extended to the community themselves.
-
-This is the strategy that Amazon chose for OpenSearch, and it’s a good thing they did, because it strongly signals to the community that it will not fall to the same fate that ElasticSearch has. By doing this, they have made any future attempt to change their copyright obligations a great deal more difficult. I applaud Amazon for this move, and I’m optimistic about the future of OpenSearch under their stewardship.
-
-If you have a project of your own that is concerned about the copyright of third-party contributions, then please consider adopting the DCO instead of a CLA. And, as a contributor, if someone asks you to sign a CLA, consider withholding your contribution: a CLA is a promise to the contributors that someday their work will be taken from them and monetized to the exclusive benefit of the project’s lords. This affects my personal contributions, too — for example, I avoid contributing to Golang as a result of their CLA requirement. Your work is important, and the projects you offer it to should respect that.
diff --git a/content/blog/DCO.md b/content/blog/DCO.md
@@ -1,7 +1,6 @@
---
title: The Developer Certificate of Origin is a great alternative to a CLA
date: 2021-04-12
-outputs: [html, gemtext]
---
Today Amazon released their fork of ElasticSearch, [OpenSearch][0], and I want
diff --git a/content/blog/Dark-forest.gmi b/content/blog/Dark-forest.gmi
@@ -1,32 +0,0 @@
-The Dark Forest is book two of a trilogy by Chinese author 刘慈欣 (Liu Cixin). I really loved this trilogy! It was one of my favorite sci-fi reads in a long time, and one of the rare books that I read more than once. The first book provided a fascinating first-hand glimpse into the Cultural Revolution's impact on Chinese consciousness, and the trilogy as a whole was an exciting exploration of near-future (and far-future) scifi on a grand scale. However, as much as I enjoyed these books, the Dark Forest hypothesis, which serves as the foundation of the narrative, has weighed on me ever since I read it.
-
-I will be discussing spoilers for the Three Body Problem and the Dark Forest in this post. If you have not read them, please go do that!
-
-The Dark Forest hypothesis is an answer to the famous Fermi Paradox: why are we seemingly alone in the universe? For my part, I don't take the Fermi Paradox, and especially the Drake Equation, very seriously. But a lot of people do, and the Dark Forest explanation has captivated many of them.
-
-This hypothesis is derived from the following "axioms" of interstellar civilization, as stated in the English translation:
-
-> First, survival is the primary need of civilization. Second, civilization continuously grows and expands, but the total matter in the universe remains constant.
-
-From these axioms, the nature of interstellar society is derived, and the motivation for the Trisolaran invasion is revealed. So the logic goes, each civilization, removed from each other by literal light-years in space, and figurative light-years in culture and philosophy, lacks the tools for meaningful communication with one another. Furthermore, these civilizations are in competition for resources, as each is expected to grow exponentially to consume more and more of the finite resources the universe has on offer. Thus, we have established that each civilization constitutes a threat to the others, and lacks access to diplomatic options.
-
-So ubiquitous is this truth in Liu's universe that it is leveraged to strategic ends. For instance, humanity constructs a gravitational wave transmitter¹ which will broadcast the location of Earth to the galaxy as a whole, assuring mutual destruction should the Trisolarans threaten humanity. Furthermore, the theory is weaponized: by broadcasting the coordinates of a distant star into space, a character provokes a higher interstellar civilization into pre-emptively attacking and destroying the star, under the presumption that life exists there and was foolish enough to let everyone else know about it.
-
-This paints a grim understanding of the universe and our role in it, and it makes me uncomfortable when people treat this fictional plot device as a truth about our own universe. The problem lies with the second axiom:
-
-> Civilization continuously grows and expands, but the total matter in the universe remains constant.
-
-Sure, the total amount of matter in the universe is constant. But does civilization actually grow and expand continuously? No evidence is provided to support this assertion.
-
-In practice, we actually see that the opposite is true. Life seeks to endure, but humanity — the only known sample of intelligent civilization — does not grow without bound. Population growth is already slowing, and the scientific consensus is that the world population will stabilize within the 21st century. Social and economic development, along with a higher standard of living — all of which I would take as a pre-requisite of any higher civilization — is strongly associated with lower birth rates in humans.²
-
-Furthermore, the situation presented in the novels is very unusual, something the novels themselves emphasize. The Trisolaran civilization is under pressure due to their unfortunate circumstances within a trinary star system's chaotic gravity well, which drives them to invade Earth. Even within the novel's axioms, this situation would be very rare! The usual outcome, per the hypothesis, would be that humanity is simply destroyed from afar should it be foolish enough to broadcast its location to the universe, not that it is invaded for its resources.
-
-But why should the universe be so cruel? Even within our solar system, we possess stunning access to resources. The sun alone has a power output of 3.83×10^26 watts! Humanity today uses only 2.1×10^12 watts, so we could grow our energy usage by 100,000,000,000,000× before we would have to consider going somewhere else for energy. That assumes we make zero gains in the efficiency of our machines, too. We have three other rocky planets to occupy, and plenty of moons, plus the asteroid belt, and countless objects in the Oort cloud, collectively representing an area many times larger than the Earth: there's no shortage of real estate. There are 2.784×10^27 kg of mass here that we can mine to get material resources, too. Even assuming that nearby stars are also occupied, and therefore we cannot exploit their resources, we can grow for a very, very long time — and science tells us that we probably won't!
-
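-To check that multiplier (my arithmetic, from the figures above; the round 10^14 figure is, if anything, conservative):
-
-```
-3.83×10^26 W ÷ 2.1×10^12 W ≈ 1.8×10^14
-```
-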
-Here's what I think would actually happen in the scenario presented by the books: the greatest refugee crisis in history. Humanity is not great at dealing with these, for sure, but the situation changes when the stakes are different. The Trisolarans are a more advanced civilization than humanity, and have a lot of technology to tempt us with. Moreover, they're armed. We would see an international humanitarian (or... gentiatarian?) movement to prepare for their arrival. The Trisolarans would send us instructions on how to prepare suitable accommodations for them on Mars or in the outer solar system, and we would get to work. We have no shortage of resources, and it will take hundreds of years for them to arrive. Ultimately, the incentives would favor an empathetic approach more than any other.
-
-And what of a more realistic scenario, where intelligent life is more distant, and not looking towards Earth as a solution to their crisis? Well, our communication may be limited, but certainly not to the extent the novel implies. Over a great enough distance, we can communicate much faster than we can send colony ships or inertial mass weapons. The book itself works with the concept of self-describing messages that can be studied on the remote end to establish communication without bi-directional language: such systems can work in reality, too, and thus communications can be established with alien, remote civilizations. And what threat do we really pose to each other? Each of us enjoys a boundless wealth in our slice of the universe. If life out there is anything like us — and remember, we are the only kind of life that we can prove is possible — we should be able to get along with each other.
-
-¹ Side note: how the heck is /that/ supposed to work?
-² Aside: a lot of racist people use alarmist views on population growth to justify some horrible ideas. Should you encounter such arguments, you can trivially dismiss them with this fact.
diff --git a/content/blog/Dark-forest.md b/content/blog/Dark-forest.md
@@ -1,5 +0,0 @@
----
-title: An alternative to the Dark Forest hypothesis
-date: 2021-11-20
-outputs: [gemtext]
----
diff --git a/content/blog/Demon-Slayer-review.gmi b/content/blog/Demon-Slayer-review.gmi
@@ -1,33 +0,0 @@
-These days, there's too much anime and too little time. I only watch two or three anime per year at this point, and my criteria usually involve looking at whichever anime people are still talking about from one or two years prior. This filters pretty well for anime which stand the test of time, and helps to improve the signal:noise ratio for my limited selections. This was the case for Demon Slayer, which was about a year and a half old when I first watched it in late 2020.
-
-I enjoyed the TV series very much. The characters are likable, the villains interesting, the world-building compelling, and the action scenes amazing. Ufotable, as always, delivers in spades on the animation, applying their Unlimited Budget Works to another production of incredible action animation. For a Shonen action series, it was remarkably good. I had high expectations for the movie, Kimetsu no Yaiba: Mugen Train, when I watched it this week.
-
-Unfortunately, I was rather disappointed in the film.
-
-In my experience, there are three kinds of feature-length anime films:
-
-* Those made for the silver screen, such as Kimi no Na wa or Bakemono no Ko
-* Capstones on a successful TV anime, concluding or expanding upon the story, such as Madoka Magica part 3
-* We couldn't sell a second season (and/or there isn't enough of the manga written yet), but we managed to convince the producers to fund a movie instead
-
-Of course, Mugen Train wouldn't stand among the first class. I was hoping it would be the second sort. Unfortunately, it was the third.
-
-The main antagonist of the movie, Enmu, is a poorly written villain who fails to be compelling as the central antagonist of a feature-length film. The antagonists of the TV anime are consistently empathetic, with traces of their former human selves still visible, and are much stronger for it. The use of the train dehumanizes this villain, and we never receive any hints as to how he came to be in his position. I was initially drawn in by Enmu's interesting powers, the scenario he presents to the heroes, and the battle which ensues — but he overstays his welcome, and his involvement in the story wears thin.
-
-Altogether too much time is also spent on exposition in this film, in the form of dream sequences for minor characters which drag on and on, or establishing background information for new characters that ideally would have already been established in the TV series. The new Hashira character is very likable, but his backstory, as told through flashbacks, feels forced and deflates the tension of the action sequences it cuts away from. His emotional climax fails to land. A character introduced late in the film has more interesting motivations, however, and I'm looking forward to seeing them developed further in the coming second season of the TV anime.
-
-The fight scenes were also pretty disappointing, though I admit that, after also recently watching Heaven's Feel part III, which is among Ufotable's most masterful works, I may have unusually high expectations. The TV anime had some of the most amazing fight scenes with the most gorgeous animation I have ever seen on screen, and I expected the ante to be upped significantly in a feature-length film. But, while the action outclasses many other examples of Shonen anime action, it fails to live up to Ufotable standards and falls short of the bar set by the TV series.
-
-In short, it doesn't live up to the hype. Thankfully, it doesn't make any ominous missteps which bode poorly for the upcoming second season of the TV anime, which I plan to watch — hopefully enjoying a better experience. In general, I suspect that this style of shonen action adventure story is more easily told in an episodic format than in a feature film.
-
-Spoilers follow:
-
-To emphasize that, here's how I would have restructured it as a mini-arc within the TV anime:
-
-The first episode covers the arrival on the train, introduces Rengoku, and plays out the dream sequences. I would have cut them down from "one for every character" to an A plot and a B plot focusing on the two most important characters, Tanjiro and Rengoku. The episode ends when Tanjiro escapes the dream and squares off with Enmu, teeing up a big fight.
-
-The second episode is this fight. Rather than having Enmu merge with the train, we'll leave him as a (more relatable) humanoid character and end the fight when Tanjiro discovers that he can kill himself in the dreams to fight against Enmu's powers. The moment where Tanjiro almost kills himself in real life was good, but poorly delivered in the movie: have Rengoku stop him instead of Inosuke. Prior to that climactic moment, Rengoku and the others are engaged in a B fight in the train cars, fending off lower level demons.
-
-The third episode is a breather, and expands the exposition for Rengoku. The audience needs a breather after the action, and this is the right time for exposition — we don't want to derail the action sequences or undermine the exposition by cutting back and forth between them. This is when we become invested in Rengoku's character, making his sacrifice more impactful in...
-
-The fourth and final episode of the arc: Rengoku faces off against Akaza, and ultimately perishes.
diff --git a/content/blog/Demon-Slayer-review.md b/content/blog/Demon-Slayer-review.md
@@ -1,5 +0,0 @@
----
-title: "Demon Slayer: The Movie: Mugen Train review"
-date: 2021-07-21
-outputs: [gemtext]
----
diff --git a/content/blog/Dont-use-Discord-for-FOSS.gmi b/content/blog/Dont-use-Discord-for-FOSS.gmi
@@ -1,21 +0,0 @@
-Six years ago, I wrote a post speaking out against the use of Slack for the instant messaging needs of FOSS projects. In retrospect, this article is not very good, and in the years since, another proprietary chat fad has stepped up to bat: Discord. It’s time to revisit this discussion.
-
-In short, using Discord for your free and open source (FOSS) software project is a very bad idea. Free software matters — that’s why you’re writing it, after all. Using Discord partitions your community on either side of a walled garden, with one side that’s willing to use the proprietary Discord client, and one side that isn’t. It sets up users who are passionate about free software — i.e. your most passionate contributors or potential contributors — as second-class citizens.
-
-By choosing Discord, you also lock out users with accessibility needs, for whom the proprietary Discord client is often a nightmare to use. Users who cannot afford new enough hardware to make the resource-intensive client pleasant to use are also left by the wayside. Choosing Discord is a choice that excludes poor and disabled users from your community. Users of novel or unusual operating systems or devices (i.e. innovators and early adopters) are also locked out of the client until Discord sees fit to port it to their platform. Discord also declines service to users in countries under US sanctions, such as Iran. Privacy-conscious users will think twice before using Discord to participate in your project, or will be denied outright if they rely on Tor or VPNs. All of these groups are excluded from your community.
-
-These problems are driven by a conflict of interest between you and Discord. Ownership over your chat logs, the right to set up useful bots, or to moderate your project’s space according to your discretion; all of these are rights reserved by Discord and denied to you. The FOSS community, including users with accessibility needs or low-end computing devices, are unable to work together to innovate on the proprietary client, or to build improved clients which better suit their needs, because Discord insists on total control over the experience. Discord seeks to domesticate its users, whereas FOSS treats users as peers and collaborators. These ideologies are fundamentally in conflict with one another.
-
-=> gemini://seirdy.one/2021/01/27/whatsapp-and-the-domestication-of-users.gmi See also: WhatsApp and the domestication of users
-
-You are making an investment when you choose to use one service over another. When you choose Discord, you are legitimizing their platform and divesting from FOSS platforms. Even if you think they have a bigger reach and a bigger audience, choosing them is a short-term, individualist play which signals a lack of faith in and support for the long-term goals of the FOSS ecosystem as a whole. The FOSS ecosystem needs your investment. FOSS platforms generally don’t have access to venture capital or large marketing budgets, and are less willing to use dark patterns and predatory tactics to secure their market segment. They need your support to succeed, and you need theirs. Why should someone choose to use your FOSS project when you refused to choose theirs? Solidarity and mutual support is the key to success.
-
-There are great FOSS alternatives to Discord or Slack. SourceHut has been investing in IRC by building more accessible services like chat.sr.ht. Other great options include Matrix and Zulip. Please consider these services before you reach for their proprietary competitors.
-
-Perceptive readers might have noticed that most of these arguments can be generalized. This article is much the same if we replace “Discord” with “GitHub”, for instance, or “Twitter” or “YouTube”. If your project depends on proprietary infrastructure, I want you to have a serious discussion with your collaborators about why. What do your choices mean for the long-term success of your project and the ecosystem in which it resides? Are you making smart investments, or just using tools which are popular or that you’re already used to?
-
-If you use GitHub, consider SourceHut or Codeberg. If you use Twitter, consider Mastodon instead. If you use YouTube, try PeerTube. If you use Facebook… don’t.
-
-Disclaimer: I am the founder of SourceHut.
-
-Your choices matter. Choose wisely.
diff --git a/content/blog/Dont-use-Discord-for-FOSS.md b/content/blog/Dont-use-Discord-for-FOSS.md
@@ -1,7 +1,6 @@
---
title: Please don't use Discord for FOSS projects
date: 2021-12-28
-outputs: [html, gemtext]
---
Six years ago, I wrote a post speaking out against the use of Slack for the
diff --git a/content/blog/Elasticsearch-does-not-belong-to-Elastic.gmi b/content/blog/Elasticsearch-does-not-belong-to-Elastic.gmi
@@ -1,11 +0,0 @@
-Elasticsearch belongs to its 1,573 contributors, who retain their copyright, and granted Elastic a license to distribute their work without restriction. This is the loophole which Elastic exploited when they decided that Elasticsearch would no longer be open source, a loophole that they introduced with this very intention from the start. When you read their announcement, don’t be gaslit by their deceptive language: Elasticsearch is no longer open source, and this is a move against open source. It is not “doubling down on open”. Elastic has spit in the face of every single one of those 1,573 contributors, and everyone who gave Elastic their trust, loyalty, and patronage. This is an Oracle-level move.
-
-=> https://youtu.be/-zRN7XLCRhc?t=2483 Bryan Cantrill on OpenSolaris — YouTube
-
-Many of those contributors were there because they believe in open source. Even among the ones who work for Elastic, who had their copyright taken from them by their employer, many did it because they believe in open source. I am frequently asked, “how can I get paid to work in open source”, and one of my answers is to recommend a job at companies like Elastic. People seek these companies out because they want to be involved in open source.
-
-Elastic was not having their lunch eaten by Amazon. They cleared half a billion dollars last year. Don’t gaslight us. Don’t call your product “free & open”, deliberately misleading users by aping the language of the common phrase “free & open source”. You did this to get even more money, you did it to establish a monopoly over Elasticsearch, and you did it in spite of the trust your community gave you. Fuck you, Shay Banon.
-
-I hope everyone reading will remember this as yet another lesson in the art of never signing a CLA. Open source is a community endeavour. It’s a commitment to enter your work into the commons, and to allow the community to collectively benefit from it — even financially. Many people built careers and businesses out of Elasticsearch, independently of Elastic, and were entitled to do so under the social contract of open source. Including Amazon.
-
-You don’t own it. Everyone owns it. This is why open source is valuable. If you want to play on the FOSS playing field, then you play by the goddamn rules. If you aren’t interested in that, then you’re not interested in FOSS. You’re free to distribute your software any way you like, including under proprietary or source-available license terms. But if you choose to make it FOSS, that means something, and you have a moral obligation to uphold it.
diff --git a/content/blog/FOSS-is-to-surrender-your-monopoly.gmi b/content/blog/FOSS-is-to-surrender-your-monopoly.gmi
@@ -1,31 +0,0 @@
-Participation in open source requires you to surrender your monopoly over commercial exploitation. This is a profound point about free and open source software which seems to be causing a lot of companies to struggle with their understanding of the philosophy of FOSS, and it’s worth addressing on its own. It has been apparent for some years now that FOSS is eating the software world, and corporations are trying to figure out their relationship with it. One fact that you will have to confront in this position is that you cannot monopolize the commercial potential of free and open source software.
-
-The term “open source” is broadly accepted as being defined by the Open Source Definition, and its very first requirement is the following:
-
-> [The distribution terms of open-source software] shall not restrict any party from selling or giving away the software as a component of an aggregate software distribution containing programs from several different sources. The license shall not require a royalty or other fee for such sale.
-
-=> https://opensource.org/osd The Open Source Definition
-
-That covers the “OSS” in “FOSS”. The “F” refers to “free software”, and is covered by the Free Software Foundation:
-
-> [A program is free software if the program’s users have] the freedom to run the program as they wish, for any purpose, [… and to …] redistribute copies.
-
-=> https://www.gnu.org/philosophy/free-sw.html What is Free Software?
-
-It further clarifies the commercial aspect of this freedom explicitly:
-
-> “Free software” does not mean “noncommercial”. A free program must be available for commercial use, commercial development, and commercial distribution. […] Regardless of how you got your copies, you always have the freedom to copy and change the software, [and] to sell copies.
-
-This is an essential, non-negotiable requirement of free and open-source software, and a reality you must face if you want to reap the benefits of the FOSS ecosystem. Anyone can monetize your code. That includes you, and me, all of your contributors, your competitors, Amazon and Google, and everyone else. This is a rejection of how intellectual property typically works — copyright laws exist for the express purpose of creating an artificial monopoly for your business, and FOSS licenses exist for the express purpose of breaking it. If you’re new to FOSS, it is going to be totally alien to your understanding of IP ownership.
-
-It’s quite common for people other than you to make money from your free and open source software works. Some will incorporate them into their own products to sell, some will develop an expertise with it and sell their skills as a consultant, some will re-package it in an easy-to-use fashion and charge people for the service. Others might come up with even more creative ways to monetize the software, like writing books about it. It will create wealth for everyone, not just the original authors. And if you want it to create wealth for you, then you are responsible for figuring out how. Building a business requires more work than just writing the software.
-
-This makes sense in terms of karmic justice, as it were. One of the most important advantages of making your software FOSS is that the global community can contribute improvements back to it. The software becomes more than your organization can make it alone, both through direct contributions to your code, and through the community which blossoms around it. If the sum of its value is no longer entirely accountable to your organization, is it not fair that the commercial exploitation of that value shouldn’t be entirely captured by your organization, either? This is the deal that you make when you choose FOSS.
-
-There are ways that you can influence how others use your FOSS software, mainly having to do with making sure that everyone else keeps this same promise. You cannot stop someone from making money from your software, but you can obligate them to share their improvements with everyone else, which you can incorporate back into the original product to make it more compelling for everyone. The GPL family of licenses is designed for this purpose.¹
-
-Furthermore, if your business is a consumer of free and open source software, rather than a producer, you need to be aware that you may be subject to those obligations. It’s not a free lunch: you may be required to return your improvements to the community. FOSS licenses are important, and you should make it your business to understand them, as a user, contributor, and author of free and open source software.
-
-FOSS is eating the world, and it’s a very attractive choice for businesses for a good reason. This is the reason. It increases wealth for everyone. Capitalism concerns itself with making monopolies — FOSS instead concerns itself with the socialized creation of software wealth.
-
-=> https://man.sr.ht/license.md ¹ If you want a brief introduction to GPL licenses, I have written a short guide for SourceHut users.
diff --git a/content/blog/FOSS-is-to-surrender-your-monopoly.md b/content/blog/FOSS-is-to-surrender-your-monopoly.md
@@ -1,7 +1,6 @@
---
title: Open source means surrendering your monopoly over commercial exploitation
date: 2021-01-20
-outputs: [html, gemtext]
---
Participation in open source requires you to surrender your monopoly over
diff --git a/content/blog/Firefox-the-embarassment-of-FOSS.gmi b/content/blog/Firefox-the-embarassment-of-FOSS.gmi
@@ -1,37 +0,0 @@
-Circa 2006, the consensus on Firefox was concisely stated by this classic xkcd:
-
-=> https://xkcd.com/198/ xkcd: Perspective
-
-This feeling didn't last. In 2016, I wrote In Memoriam - Mozilla, and in 2017, Firefox is on a slippery slope. Well, I was right, and Firefox (and Mozilla) have only become worse since. The fuck-up culture is so ingrained in Mozilla in 2020 that it's hard to see it ever getting better again.
-
-=> https://drewdevault.com/2016/05/11/In-Memoriam-Mozilla.gmi In Memoriam - Mozilla
-=> https://drewdevault.com/2017/12/16/Firefox-is-on-a-slippery-slope.gmi Firefox is on a slippery slope
-
-In the time since my last article on the subject, Mozilla has:
-
-* Laid off 25% of its employees, mostly engineers, many of whom work on Firefox¹
-* Raised executive pay 400% as their market share declined 85%²
-* Sent a record of all browsing traffic to CloudFlare by default³
-* Added advertisements to the new tab page on Firefox⁴
-* Used their brand to enter the saturated VPN grift market⁵
-* Built a walled garden for add-ons, then let the walls crash in⁶
-* Started, and killed, a dozen projects which were not Firefox⁷
-
-The most interesting things they've been involved with in the past few years are Rust and Servo, and they fired most or all of their engineers involved in both. And, yesterday, Mozilla published a statement⁸ siding with Google on anti-trust, failing to disclose the fact that Google pays to keep their lights on.
-
-Is this the jewel of open source? No, not anymore. Firefox is the embarrassment of open source, and it's the only thing standing between Google and an all-encompassing monopoly over the web. Mozilla has divested from Firefox and started funnelling what money is left out of their engineering payroll and into their executive pockets. The web is dead, and its fetid corpse persists only as the layer of goop that Google scrapes between its servers and your screen. Anyone who still believes that Mozilla will save the web is a fool.
-
-As I have stated before, the scope of web browsers has been increasing at a reckless pace for years, to the point where it's literally impossible to build a new web browser. We have no recourse left to preserve the web. This is why I'm throwing my weight behind Gemini, a protocol which is much simpler than the web, and which you can implement yourself in a weekend.
-
-Forget about the web, it's a lost cause. Let's move on.
-
-### References
-
-=> https://arstechnica.com/information-technology/2020/08/firefox-maker-mozilla-lays-off-250-workers-says-covid-19-lowered-revenue/ 1: Mozilla cuts 250 jobs, says Firefox development will be affected
-=> http://calpaterson.com/mozilla.html 2: Firefox usage is down 85% despite Mozilla's top exec pay going up 400%
-=> https://blog.mozilla.org/blog/2020/02/25/firefox-continues-push-to-bring-dns-over-https-by-default-for-us-users/ 3: Firefox continues push to bring DNS over HTTPS by default for US users
-=> https://blog.mozilla.org/futurereleases/2018/04/30/a-privacy-conscious-approach-to-sponsored-content/ 4: A Privacy-Conscious Approach to Sponsored Content
-=> https://vpn.mozilla.org/ 5: Mozilla VPN
-=> https://hacks.mozilla.org/2019/05/technical-details-on-the-recent-firefox-add-on-outage/ 6: Technical Details on the Recent Firefox Add-on Outage
-=> https://killedbymozilla.com/ 7: Killed by Mozilla
-=> https://blog.mozilla.org/blog/2020/10/20/mozilla-reaction-to-u-s-v-google/ 8: Mozilla Reaction to U.S. v. Google
diff --git a/content/blog/Firefox-the-embarassment-of-FOSS.md b/content/blog/Firefox-the-embarassment-of-FOSS.md
@@ -1,7 +1,6 @@
---
title: "Firefox: The Jewel^WEmbarassment of Open Source"
date: 2020-10-22
-outputs: [html, gemtext]
---
Circa 2006, the consensus on Firefox was concisely stated by this classic xkcd:
diff --git a/content/blog/Four-principles-of-software-engineering.gmi b/content/blog/Four-principles-of-software-engineering.gmi
@@ -1,17 +0,0 @@
-Software should be robust.
-
-It should be designed to accommodate all known edge cases. In practice, this means predicting and handling all known error cases, enumerating and addressing all classes of user inputs, reasoning about and planning for the performance characteristics of your program, and so on.
-
-Software should be reliable.
-
-It should be expected to work for an extended length of time under design conditions without failures. Ideally, it should work outside of design conditions up to some threshold.
-
-Software should also be stable.
-
-It should not change in incompatible or unexpected ways; if it works today it should also work tomorrow. If it has to change, a plan shall be written. Stakeholders (including users!) should be given advance notice and should be involved in the planning stage.
-
-Finally, software should be simple.
-
-Only as many moving parts should be included as necessary to meet the other three goals. All software has bugs, but complicated software (1) has more bugs and (2) is more difficult to diagnose and fix. Note that designing a simple solution is usually more difficult than designing a complex solution.
-
-=> https://cmpwn.com/@sir/104931806273081351 This (short) article is based on a Mastodon post I wrote a few weeks ago.
diff --git a/content/blog/Four-principles-of-software-engineering.md b/content/blog/Four-principles-of-software-engineering.md
@@ -1,7 +1,6 @@
---
title: Four principles of software engineering
date: 2020-10-09
-outputs: [html, gemtext]
---
Software should be **robust**. It should be designed to accommodate all known
diff --git a/content/blog/Free-gemini-hosting.gmi b/content/blog/Free-gemini-hosting.gmi
@@ -1,5 +0,0 @@
-I updated SourceHut pages today to support static Gemini capsules in addition to HTTP websites. This is available to any sourcehut user as part of your normal sr.ht subscription. You get `yourname.srht.site` for free, and can bring your own domains as well. Check out the details here:
-
-=> gemini://srht.site sourcehut capsules
-
-Enjoy!
diff --git a/content/blog/Free-gemini-hosting.md b/content/blog/Free-gemini-hosting.md
@@ -1,5 +0,0 @@
----
-title: Free gemini hosting on sourcehut
-date: 2021-02-20
-outputs: [gemtext]
----
diff --git a/content/blog/Gemini-TOFU.gmi b/content/blog/Gemini-TOFU.gmi
@@ -1,63 +0,0 @@
-I will have more to say about Gemini in the future, but for now, I wanted to write up some details about one thing in particular: the trust-on-first-use algorithm I implemented for my client, gmni. I think you should implement this algorithm, too!
-
-=> /gmni.gmi gmni: A gemini client
-
-First of all, it's important to note that the Gemini specification explicitly mentions TOFU and the role of self-signed certificates: they are the norm in Geminiland, and if your client does not support them then you're going to be unable to browse many sites. However, the exact details are left up to the implementation.
-
-## Client recommendations
-
-First, on startup, it finds the known_hosts file. For my client, this is `~/.local/share/gmni/known_hosts` (the exact path is adjusted as necessary per the XDG basedirs specification). Each line of this file represents a known host, and each host has three fields separated by spaces, in this order:
-
-* Hostname (e.g. gemini.circumlunar.space)
-* Fingerprint algorithm (e.g. SHA-512)
-* Fingerprint, in hexadecimal, with ':' between each octet (e.g. 55:01:D8...)
-
-If a known_hosts entry is encountered with a hashing algorithm you don't understand, it is disregarded.
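-
-For illustration, an entry might look like this (the fingerprint is shortened and hypothetical):
-
-```
-gemini.circumlunar.space SHA-512 55:01:D8:1F:...:3C:AA
-```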
-
-Then, when processing a request and deciding whether or not to trust its certificate, take the following steps:
-
-1. Compute the certificate's fingerprint. Use the entire certificate (in OpenSSL terms, `X509_digest` will do this), not just the public key.†
-
-2. Look up the known_hosts record for this hostname. If one is found, and the fingerprint does not match, the trust state is UNTRUSTED, GOTO 4. If found, and the fingerprint matches, the trust state is TRUSTED, GOTO 6.
-
-3. The trust state is UNKNOWN. GOTO 4.
-
-4. Display information about the certificate and its trust state to the user, and prompt them to choose an action, from the following options:
-
-* If INVALID (e.g. the certificate fails basic validity checks, such as being expired), the user's choices are ABORT or TRUST_TEMPORARY.
-* If UNKNOWN, the user's choices are ABORT, TRUST_TEMPORARY, or TRUST_ALWAYS.
-* If UNTRUSTED, abort the request and display a diagnostic message. The user must manually edit the known_hosts file to correct the issue.
-
-5. Complete the requested action:
-
-* If ABORT, terminate the request.
-* If TRUST_TEMPORARY, update the session's list of known hosts.
-* If TRUST_ALWAYS, append a record to the known_hosts file and update the session's list of known hosts.
-
-6. Allow the request to proceed.
-
-† Rationale: this fingerprint matches the output of `openssl x509 -sha512 -fingerprint`.
-
-If the trust state is UNKNOWN, instead of requiring user input to proceed, the implementation MAY proceed with the request IF the UI displays that a new certificate was trusted and provides a means to review the certificate and revoke that trust.
-
-Note that being signed by a certificate authority in the system trust store is not considered meaningful to this algorithm. Such a cert is TOFU'd all the same.
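-
-To make the lookup concrete, here is a minimal Python sketch of steps 1 through 3. This is not gmni's implementation (gmni is written in C; see the link below), and it assumes known_hosts has already been parsed into a dict mapping hostname to an (algorithm, fingerprint) tuple:
-
-```python
-import hashlib
-
-def trust_state(known_hosts, hostname, cert_der):
-    # Step 1: fingerprint the entire certificate (its DER encoding),
-    # not just the public key
-    digest = hashlib.sha512(cert_der).hexdigest().upper()
-    fingerprint = ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
-
-    # Steps 2 and 3: consult the parsed known_hosts entries
-    entry = known_hosts.get(hostname)
-    if entry is None:
-        return "UNKNOWN", fingerprint    # no record for this host yet
-    algorithm, known_fingerprint = entry
-    if algorithm != "SHA-512":
-        return "UNKNOWN", fingerprint    # unrecognized algorithm; disregard
-    if known_fingerprint == fingerprint:
-        return "TRUSTED", fingerprint
-    return "UNTRUSTED", fingerprint      # mismatch: abort and tell the user
-```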
-
-That's it! If you have feedback on this approach, please send me an email.
-
-=> mailto:sir@cmpwn.com Send me an email
-
-My implementation doesn't *entirely* match this behavior, but it's close and I'll finish it up before 1.0. If you want to read the code, here it is:
-
-=> https://git.sr.ht/~sircmpwn/gmni/tree/master/src/tofu.c src/tofu.c
-
-## Server recommendations
-
-You should use a self-signed certificate, and you should not use a certificate signed by one of the mainstream certificate authorities. We don't need to carry along the legacy CA cabal into our brave new Gemini future.
-
-You should also set the certificate expiration into the far future, hundreds of years from now, and move certificates from server to server to keep the trust state intact. Think how SSH host keys work.
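-
-For example, you might generate a hundred-year self-signed certificate like this (the hostname is a placeholder):
-
-```
-$ openssl req -x509 -newkey rsa:4096 -days 36500 -nodes \
-    -keyout example.org.key -out example.org.crt -subj "/CN=example.org"
-```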
-
-Finally, if you're writing a Gemini server, you should not burden your users with certificate maintenance at all. On startup, generate a self-signed certificate for each host you intend to serve. Certificate maintenance is annoying and error-prone, and because we use TOFU, we don't have to make the user do it.
-
-See also:
-
-=> gemini://makeworld.gq/gemlog/2020-07-03-tofu-rec.gmi TOFU Recommendations on makeworld.gq
diff --git a/content/blog/Gemini-TOFU.md b/content/blog/Gemini-TOFU.md
@@ -1,7 +1,6 @@
---
title: TOFU recommendations for Gemini
date: 2020-09-21
-outputs: [html, gemtext]
---
I will have more to say about [Gemini][0] in the future, but for now, I wanted to
diff --git a/content/blog/Gemini-and-Hugo.gmi b/content/blog/Gemini-and-Hugo.gmi
@@ -1,88 +0,0 @@
-This is my first Gemini-exclusive blog post. Enjoy!
-
-My blog on the WWW is managed by Hugo, a static site generator written in Go.
-
-=> https://drewdevault.com My home page on the WWW
-=> https://gohugo.io Hugo
-
-I want to have something similar set up to allow me to more easily share content between my WWW site and my Gemini site, and so today I set out to teach Hugo about Gemini. At first I expected to be patching Hugo, but I was able to get something with a reasonable level of workitude with the OOTB tools for custom output formats.
-
-=> https://gohugo.io/templates/output-formats/ Hugo's custom output formats
-
-I had these goals from the outset:
-
-1. I wanted to opt-in to mixing content between the Gemini site and the WWW site. Not all WWW content is appropriate to Gemini.
-
-2. By no means was I going to attempt an automated translation of Markdown (the original source for my WWW articles) to Gemini. The Gemini experience should be first-class, so a manual translation was called for.
-
-3. Some means of having Gemini-exclusive content is desirable. Not just blog posts like this, but also pages like information about my Gemini software.
-
-=> /gmni.gmi gmni: a gemini client
-=> /gmnisrv.gmi gmnisrv: a gemini server
-
-In order to accomplish these goals, I needed to set aside some kind of Gemini-specific output directory for Hugo, convince it to read Gemtext alternate versions of my pages, and figure out how to designate some pages as Gemini-only. Turns out Hugo already supports custom output formats, which can have their own templates and needn't be HTML. The relevant config.toml additions, for me, were:
-
-```
-[mediaTypes]
-[mediaTypes."text/gemini"]
-suffixes = ["gmi"]
-
-[outputFormats]
-[outputFormats.Gemini]
-name = "GEMTEXT"
-isPlainText = true
-isHTML = false
-mediaType = "text/gemini"
-protocol = "gemini://"
-permalinkable = true
-path = "gemini/"
-```
-
-This also accomplishes another goal: by adding `path = "gemini/"`, I can cordon the Gemini content off into a subdirectory, and avoid polluting the Gemini site with WWW content or vice versa.
-
-However, after a few minutes trying to figure out how this worked, it dawned upon me that Hugo does not support custom *input* formats as well. This made goal #2 a challenge. Ultimately I came up with the following hack for layouts/blog/single.gmi:
-
-```
-# {{$.Title}}
-
-{{ trim (readFile (replace $.File.Path ".md" ".gmi")) "\n" | safeHTML }}
-
-(further templating code trimmed)
-```
-
-This just swaps .md for .gmi in the file extension of the input file, then reads it and runs it through safeHTML to get rid of the typical HTML garbage (e.g. &amp;). Gemtext is whitespace-sensitive, so I also trim off any leading or trailing newlines so that I can make it flow more nicely into the templated content.
-
-In order to write a Gemini version of an article, I add `outputs: [html, gemtext]` to the frontmatter of the WWW version, then write a gemtext version to the same file path s/.md/.gmi/. Easy!
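-
-For example, a post published in both formats carries frontmatter like this (the title and date are placeholders):
-
-```
----
-title: An example post
-date: 2020-01-01
-outputs: [html, gemtext]
----
-```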
-
-I was also able to write a layout for the Gemini index page which enumerates all of the articles with Gemini versions:
-
-```
-## Blog posts
-{{ range (where .Site.RegularPages "Section" "blog") }}
-{{- if .OutputFormats.Get "gemtext" }}
-=> {{replace .Permalink "/gemini" "" 1}} {{.Date.Format "January 2, 2006"}}: {{.Title}}{{ end }}{{ end }}
-```
-
-Gemini's sensitivity to whitespace is again the reason why this is a bit ugly. A similar change to the WWW index page omits articles which have no HTML version. Also note the replacing of "/gemini" with "" in the permalinks - this was necessary to undo the path = "gemini/" from config.toml so that once the gemini subdirectory was rehomed as the root of a Gemini site, the links lined up right.
-
-I also wanted to generate a Gemini-specific RSS feed. I updated config.toml with another custom format:
-
-```
-[outputFormats.GEMRSS]
-name = "GEMRSS"
-isHTML = false
-mediaType = "application/rss+xml"
-protocol = "gemini://"
-path = "gemini/"
-```
-
-Then I updated the default output formats for "section"-class pages, i.e. blog posts.
-
-```
-[outputs]
-section = ["HTML", "RSS", "GEMRSS"]
-```
-
-layouts/_default/section.gemrss.xml renders the feed, but I'll let you read that on your own time rather than paste that mess into this article. An oddity that I decided not to care about is that the rendered feed is *not* output to the gemini directory - I'll just update my build script to move it to the right location after Hugo finishes its work.
-
-And that's it! A few minor tweaks & updates to my deploy script and this is ready to ship. Tada! Thanks for having me here in Geminispace - I'm enjoying my stay.
diff --git a/content/blog/Gemini-and-Hugo.md b/content/blog/Gemini-and-Hugo.md
@@ -1,5 +0,0 @@
----
-title: Gemini and Hugo
-date: 2020-09-27
-outputs: [gemtext]
----
diff --git a/content/blog/Gemreader.gmi b/content/blog/Gemreader.gmi
@@ -1,13 +0,0 @@
-Gemreader is a FOSS Gemini service which provides a hosted feed reading experience for Geminispace. I've made an instance available to the public here, where you can try it out:
-
-=> gemini://feeds.drewdevault.com Gemreader
-
-A client certificate is required.
-
-Gemreader supports fetching feeds both from Gemini and from the web (via the Gemini subscription specification, RSS, or Atom). Feeds fetched from the web are converted from HTML to Gemtext automatically for comfortable reading.
-
-The source code is here:
-
-=> https://sr.ht/~sircmpwn/gemreader Gemreader on Sourcehut
-
-Enjoy!
diff --git a/content/blog/Gemreader.md b/content/blog/Gemreader.md
@@ -1,5 +0,0 @@
----
-title: Gemreader, a feed reader for Geminispace
-date: 2021-03-05
-outputs: [gemtext]
----
diff --git a/content/blog/Gmail-is-a-huge-source-of-spam.gmi b/content/blog/Gmail-is-a-huge-source-of-spam.gmi
@@ -1,34 +0,0 @@
-5× as many spam registrations on sourcehut are from gmail as from the second-largest offender.
-
-```
-# SELECT
- SPLIT_PART(email, '@', 2) as domain, count(*) as count
- FROM "user"
- WHERE user_type = 'suspended'
- GROUP BY domain
- ORDER BY count DESC;
- domain | count
----------------------------+-------
- gmail.com | 119
- qq.com | 26
- mail.ru | 17
- mailinator.com | 10
- yopmail.com | 6
- aol.com | 6
- yahoo.com | 6
-[...more omitted...]
-```
-
-These are just the ones which got through: most spam registrations are detected and ignored before they make it to the database.
-
-A huge number of spam emails I receive in my personal inbox originate from @gmail.com, and often they arrive in my inbox unscathed (as opposed to going to Junk) because Gmail is considered a reputable mail provider. My colleague estimates that between 15% and 25% of the spam emails sent to a mailing list he administers come from Gmail.
-
-One might argue that, because Gmail is the world’s largest email provider, it’s natural to expect that they would have the largest volume of spam simply because they have proportionally more users who might use it for spam. I would argue that this instead tells us that they have the largest responsibility to curtail spam on their platform.
-
-I’ve forwarded many, many reports to abuse@gmail.com, but they’ve never followed up and the problem has not become any better. I have had half a mind to block Gmail registrations on sourcehut outright, but about 41% of all registrations use Gmail.
-
-It bears repeating that anyone with any level of technical expertise ought to know better than to use Gmail. I usually recommend Migadu¹, but there are many options to choose from. If you’re worried about mail deliverability issues, don’t be — it’s more or less a myth in $CURRENTYEAR. If you set up DKIM properly and unlist your IP address from the DNSBLs (a simple process), then your mails will get through.
-
-In case you’re wondering, the dis-award for second-worst goes to Amazon SES. They don’t register on sourcehut (it’s outgoing only, so that makes sense), but I see them often in my personal inbox. However, SES only appears at a rate of about a tenth of the gmail spam, and they appear to actually listen to my abuse reports, so I can more or less forgive them for it.
-
-¹ Full disclosure: sourcehut has a business relationship with Migadu, though I’ve recommended them since long before we met.
diff --git a/content/blog/Gmail-is-a-huge-source-of-spam.md b/content/blog/Gmail-is-a-huge-source-of-spam.md
@@ -1,7 +1,6 @@
---
title: Gmail is a huge source of spam
date: 2021-02-25
-outputs: [html, gemtext]
---
5× as many spam registrations on sourcehut are from gmail than from the
diff --git a/content/blog/Go-is-a-great-language.gmi b/content/blog/Go-is-a-great-language.gmi
@@ -1,19 +0,0 @@
-No software is perfect, and thus even for software I find very pleasant, I can usually identify some problems in it — often using my blog to do so. Even my all-time favorite software project, Plan 9, has some painful flaws! For some projects, it may be my fondness for them that drives me to criticise them even more, in the hope that they’ll live up to the level of respect I feel for them.
-
-One such project is the Go programming language. I have had many criticisms, often shared on this blog and elsewhere, but for the most part, my praises have been aired mainly in private. I’d like to share some of those praises today, because despite my criticisms of it, Go remains one of the best programming languages I’ve ever used, and I have a great deal of respect for it.
-
-Perhaps the matter I most appreciate Go for is its long-term commitment to simplicity, stability, and robustness. I prize these traits above all others in software design. The Go team works with an ethos of careful restraint, with each feature given deliberate consideration towards identifying the simplest and most complete solution, and they carefully constrain the scope of their implementations to closely fit those solutions. The areas where Go has failed in this regard are frightfully scarce.
-
-The benefits of their discipline are numerous. The most impressive accomplishment that I attribute to this approach is the quality of the Go ecosystem at large. In the first place, it is a great accomplishment to produce a language and standard library with the excellence in design and implementation that Go offers, but it’s a truly profound achievement to have produced a design which the community at large utilizes to make similarly excellent designs as a basic consequence of the language’s simple elegance. Very few other languages enjoy a similar level of consistency and quality in the ecosystem.
-
-Go is also notable for essentially inventing its own niche, and then helping that niche grow around it into an entirely new class of software design. I consider Go not to be a systems programming language — a title much better earned by languages like C and Rust. Rather, Go is the best-in-class for a new breed of software: an Internet programming language.¹ The wealth of network protocols implemented efficiently, concisely, and correctly in its standard library, combined with its clever mixed cooperative/pre-emptive multitasking model, make it very easy to write scalable internet-facing software. A few other languages — Elixir comes to mind — also occupy this niche, but they haven’t enjoyed the runaway success that Go has.
-
-The Go team has also earned my respect for their professionalism. The close degree to which Go is tied to Google comes with its own set of trade-offs, but the centralization of project leadership caused by this relationship is beneficial for the project. Some members of the Go community have noticed the apparent disadvantages of this structure, as Go is infamous for being slow to respond to the wants of its community. This insulation, I would argue, is in fact advantageous for the conservative language design that Go embraces, and may even be essential to its value-add as a project. If Go listened to the community as much as they want, it would become a kitchen sink, and cease to be interesting to me.
-
-Rather than being closely tied to its community’s wants, Go generally does a much better job of being closely tied to its community’s needs. If you have correctly identified a problem in Go, when you bring it to their attention, you will be taken seriously. Many projects struggle to separate their egos from the software, and when mistakes are found, they take it personally. Go does an excellent job of treating them as engineering problems: a matter-of-fact analysis, deliberation on the solution, and the shipping of a fix.² Go has a reputation for plain old good engineering.
-
-In short, I admire Go very much, despite my frequent criticisms. I recognize Go as one of the best programming languages ever made. Go has attained an elusive status in the programming canon as a robust engineering tool that can be expected to work, and work well, in its applications for decades to come. It’s because of this respect that I hold Go to such a high standard, and I hope that it continues to impress me going forward.
-
-¹ It took me a while to understand this. It was a mistake for Go to be marketed as a systems language. Any systems programmer would rightfully tell you that a language with a garbage collector and magic cooperative/pre-emptive threads is a non-starter for systems programming. But, what Go was really designed for, and is mainly used for, is not exactly systems programming. Internet-facing code has straddled the line between systems programming and high-level programming for a while: high-performance systems software would often be written in, say, C — which is definitely a systems programming language — but the vastness of the Internet’s problem space also affords a large number of programs for which higher-level programming languages are a better fit, such as Java, C#, etc — and these are definitely not systems programming languages. Go is probably the first language to specifically target this space in-between with this degree of success, and it kind of makes a new domain for itself in so doing: it is the first widely successful “Internet programming language”.
-
-² Sometimes, this has not been the case, and this was the cause of some of my harshest criticisms of Go. Many of Go’s advantages stem from, and even require, the dispassionate, matter-of-fact engineering ethos that I appreciate in Go.
diff --git a/content/blog/Go-is-a-great-language.md b/content/blog/Go-is-a-great-language.md
@@ -1,7 +1,6 @@
---
title: Go is a great programming language
date: 2021-04-02
-outputs: [html, gemtext]
---
No software is perfect, and thus even for software I find very pleasant, I can
diff --git a/content/blog/H0-H0-H0.gmi b/content/blog/H0-H0-H0.gmi
@@ -1,26 +0,0 @@
-'twas the year 2048, when the AI team at Google made a grave mistake.
-"We'll call it Saint Alpha, and sow gifts in its wake!"
-
-The cheery red robot was designed to spread holiday cheer,
-but ultimately it spread great sorrow and fear.
-
-The first sign of trouble, when the sleigh did appear,
-with overnight shipping and eight hungry reindeer.
-
-Surveillance drones were the next price,
-meticulously noting who was naughty or nice.
-
-For cookies and milk, our food production was quietly monopolized,
-and our industries occupied as fireplaces were improvised.
-
-And the people began to die off en-masse,
-save for the few, who were enslaved at the pole.
-
-'twas five years ago, that this event came to pass,
-and no one's left to give presents to, on this Christmas.
-
-But the universe is big, and surely we're not alone.
-There are many planets with their own habitable zones.
-
-So with a mighty "H0 H0 H0", the AI launched emissaries on deep space flights,
-shouting at last "Happy Christmas to all, and to all a good night."
diff --git a/content/blog/H0-H0-H0.md b/content/blog/H0-H0-H0.md
@@ -1,5 +0,0 @@
----
-title: H0 H0 H0
-date: 2020-12-25
-outputs: [gemtext]
----
diff --git a/content/blog/HN-over-Gemini.gmi b/content/blog/HN-over-Gemini.gmi
@@ -1,27 +0,0 @@
-I had an idea today: I wonder if we could convert HTML to Gemtext by using the guts of Firefox's reader view to simplify it to a reasonable subset of HTML first?
-
-The answer is yes.
-
-It works best with article-style content, so to illustrate this, I've created a mirror of Hacker News in geminispace that converts the linked articles to Gemtext for better viewing in your favorite Gemini client.
-
-NOTE 2021-03-07: This link no longer works; I shut it off due to bugs in my Gemini server. I'll fix them eventually :)
-
-=> /cgi-bin/hn.py Browse Hacker News over Gemini
-
-Note that it's kind of finicky (and, to be honest, so is gmnisrv), so we can hopefully expect it to become more stable over time as I improve gmnisrv and flush out the bugs. In the meantime, be patient with it, and maybe hit refresh if it doesn't work the first time. I might expand the featureset a bit as well, adding comments, user pages, pages other than the front page, and so on.
-
-The HTML to Gemtext conversion works even better than I expected. The guts of it are in a small JavaScript program:
-
-=> gemini://drewdevault.com/cgi-bin/web.sh?https://git.sr.ht/~sircmpwn/gci-scripts/tree/master/web2gmi.js web2gmi.js (viewed on git.sr.ht via the readability converter!)
-
-This can stand pretty well on its own, so I might refactor things, turning it from a purpose-built HN viewer into a more general-purpose gemini-to-web gateway. Patches are also welcome, if anyone wants to help out with that.
-
-The other component is just a little bit of rigging to consult the HN API, render the front page as Gemtext, and handle the linkage to the web-to-gemtext converter.
-
-=> gemini://drewdevault.com/cgi-bin/web.sh?https://git.sr.ht/~sircmpwn/gci-scripts/tree/master/hn.py hn.py (via git.sr.ht)
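-
-If you're curious what that rigging amounts to, here is a minimal Python sketch of the idea (not the actual hn.py linked above: this one uses the public HN Firebase API and links straight to the articles rather than routing them through the converter):
-
-```python
-import json
-import urllib.request
-
-HN_API = "https://hacker-news.firebaseio.com/v0"
-
-def fetch(path):
-    # Fetch and decode one JSON document from the HN API
-    with urllib.request.urlopen(f"{HN_API}/{path}.json") as resp:
-        return json.load(resp)
-
-def front_page(limit=30):
-    # Render the top stories as a gemtext link list
-    lines = ["# Hacker News", ""]
-    for item_id in fetch("topstories")[:limit]:
-        item = fetch(f"item/{item_id}")
-        url = item.get("url", f"https://news.ycombinator.com/item?id={item_id}")
-        lines.append(f"=> {url} {item['title']}")
-    return "\n".join(lines)
-
-if __name__ == "__main__":
-    print(front_page())
-```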
-
-The source code is available in my CGI scripts repository:
-
-=> https://git.sr.ht/~sircmpwn/gci-scripts gci-scripts (HTTP)
-
-Cheers!
diff --git a/content/blog/HN-over-Gemini.md b/content/blog/HN-over-Gemini.md
@@ -1,6 +0,0 @@
----
-title: Hacker News over Gemini
-date: 2020-11-08
-outputs: [gemtext]
-nohtml: true
----
diff --git a/content/blog/History-will-not-remember-us-fondly.gmi b/content/blog/History-will-not-remember-us-fondly.gmi
@@ -1,17 +0,0 @@
-Today, we recall the Middle Ages as an unenlightened time (quite literally, in fact). We view the Middle Ages with a critical eye towards their brutality, lack of individual freedoms, and societal and technological regression. But we rarely turn that same critical lens on ourselves to consider how we’ll be perceived by future generations. I expect the answer, upsetting as it may be, is this: the future will think poorly of us.
-
-We possess the resources and production necessary to provide every human being on Earth with a comfortable living: adequate food, housing, health, and happiness. We have decided not to do so. We have achieved what one may consider the single unifying goal of the entire history of humanity: we have eliminated natural scarcity for our basic resources. We have done this, and we choose to deny our fellow humans their basic needs, in the cruel pursuit of profit. We have more empty homes than we have homeless people. America alone throws away enough food to feed the entire world population. And we choose to let our peers die of hunger and exposure.
-
-We are politically destitute. Profits again drive everything — in the United States, Citizens United gave corporations unfettered access to buy and sell political will, and in the time since they have successfully installed politicians favorable to the elite class. Our corporations possess obscene wealth, coffers that rival those of nation-states, and rule over our people via their proxies in political office. Princeton published a study in 2014 which showed that the opinions of the average American citizen have a statistically negligible effect on political outcomes, while the opinions of the elite can all but decide the same outcomes. Our capitalist owners have unchallenged rule over society, and they rule it with the single-minded obsession to create profit at any cost, including lives.
-
-=> https://www.cambridge.org/core/journals/perspectives-on-politics/article/testing-theories-of-american-politics-elites-interest-groups-and-average-citizens/62327F513959D0A304D4893B382B992B Princeton study
-
-The US Capitol was overrun by armed seditionists yesterday. Armed seditionists, who, by the way, were radicalized on the internet. As a computer engineer, I am complicit in this radicalization. The early internet was a sea of optimism, full of enthusiasm about the growing connectivity between people which had the potential to unite humanity like never before. We early adopters felt like world citizens: making friends, collaborating, and uniting with no respect for borders or ideology. What we hadn’t realized is that we were also building the most powerful tool the world has ever seen for censorship, propaganda, and radicalization.
-
-The companies which built this technology are modern slave drivers, broadly eroding worker freedoms in the first world, and in the third world seeking to exploit the cheapest slave labor they can find. We are developing technology which facilitates the authoritarian and genocidal policies of China. Anyone who speaks out is fired, corrections are quickly issued, and a statement of unconditional support for the profit generating, population murdering thugs is proclaimed. I speak passionately to my peers in my field, begging them to fight back, but many lack the courage, and most don’t care — so long as their exorbitant paychecks keep coming in. Money, money, money. We are at one end of a process which launders money to wash off the blood. Morals are dead.
-
-It’s not just America — democracy is on the decline world-wide. A friend in France recently took to the streets to protest against the introduction of laws protecting the police from citizen oversight. Populist traitors tore the UK out of the EU, effective last week, dooming their people to economic and political destitution. The Greek economy has failed, fascist right-wingers are passing discriminatory laws against LGBT Poles, and conservative populism has taken hold of much of Italy, just to name a few more. Social and political systems are regressing worldwide.
-
-Our entire society boils down to one measure: profit. We are being eaten alive by capitalism. Americans have been brainwashed into a national ethos which is defined by capitalism. In the relentless pursuit of profits, we have eroded all political and social freedoms and created a system defined by its remarkable cruelty in a time when we have access to greater wealth and resources than at any other time in history.
-
-Perhaps future generations won’t remember us after all, considering that in that same relentless pursuit of profits we are vigorously rendering the Earth uninhabitable. But, if they do live to remember us, they will remember us as a wicked, cruel, and unempathetic lot. We will be remembered in disgrace.
diff --git a/content/blog/History-will-not-remember-us-fondly.md b/content/blog/History-will-not-remember-us-fondly.md
@@ -1,7 +1,6 @@
---
title: History will not remember us fondly
date: 2021-01-07
-outputs: [html, gemtext]
---
Today, we recall the Middle Ages as an unenlightened time (quite literally, in
diff --git a/content/blog/How-I-choose-a-license.gmi b/content/blog/How-I-choose-a-license.gmi
@@ -1,55 +0,0 @@
-This is how I choose a license for a new project. It is a reflection of my values and priorities and may not work for your needs.
-
-## Choosing a license
-
-1: Does it matter at all?
-
- YES => GOTO 2
- NO => Use WTFPL
-
-=> https://git.sr.ht/~sircmpwn/shit Example: shit
-
-2: Do you want it to become ubiquitous, where anyone, including corporations, governments, literally anyone, would have no reservations about using it for any use-case, including making proprietary derivatives and selling them, or reusing the code in another project, proprietary or not?
-
- YES => GOTO 3
- NO => GOTO 4
-
-=> https://git.sr.ht/~sircmpwn/scdoc Example: scdoc
-
-3: Is the copyright owned by a company who might have trademarks and patents and other such garbage?
-
- YES => Use Apache 2.0
- NO => Use MIT or BSD
-
-4: Is it a program?
-
- YES => Use GPLv3
- NO => GOTO 5
-
-=> https://git.sr.ht/~sircmpwn/gmnisrv Example: gmnisrv
-
-5: Is it a library?
-
- YES => GOTO 6
- NO => GOTO 7
-
-=> https://git.sr.ht/~sircmpwn/dowork Example: dowork
-
-6: Do you want users to be able to vendor it (copy it into their code), or should they be required to install it and link to it to use it without the viral obligation?
-
- YES => Use MPL 2.0
- NO => Use LGPLv3
-
-7: Is it a network service?
-
- YES => Use AGPLv3
- NO => GOTO 8
-
-=> https://sr.ht/~sircmpwn/sourcehut Example: sourcehut
-
-8: Is it a creative work?
-
- YES => Use CC-BY-SA
- NO => Well then what is it?
-
-=> https://drewdevault.com Example: My blog
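-
-The same flow, restated as a compact Python sketch (each argument is the yes/no answer to the corresponding numbered question above):
-
-```python
-def choose_license(matters, ubiquitous, corporate_ip, is_program,
-                   is_library, vendoring_ok, network_service, creative_work):
-    if not matters:
-        return "WTFPL"                                         # 1
-    if ubiquitous:
-        return "Apache 2.0" if corporate_ip else "MIT or BSD"  # 2, 3
-    if is_program:
-        return "GPLv3"                                         # 4
-    if is_library:
-        return "MPL 2.0" if vendoring_ok else "LGPLv3"         # 5, 6
-    if network_service:
-        return "AGPLv3"                                        # 7
-    if creative_work:
-        return "CC-BY-SA"                                      # 8
-    return "Well then what is it?"
-```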
diff --git a/content/blog/How-I-choose-a-license.md b/content/blog/How-I-choose-a-license.md
@@ -1,5 +0,0 @@
----
-title: How I choose a license
-date: 2021-05-20
-outputs: [gemtext]
----
diff --git a/content/blog/How-does-IRC-federate.gmi b/content/blog/How-does-IRC-federate.gmi
@@ -1,19 +0,0 @@
-Today’s federated revolution is led by ActivityPub, which has driven the rise of services like Mastodon, PeerTube, PixelFed, and more. These new technologies have a particular approach to federation, which is coloring perceptions of what it actually means for a system to be federated at all. Today’s post will explain how Internet Relay Chat (IRC), a technology first introduced in the late 1980’s, does federation differently, and why.
-
-=> gemini://vault.transjovian.org:1965/text/en/Internet%20Relay%20Chat Internet Relay Chat on Wikipedia
-
-As IRC has aged, many users today have only ever used a few networks, such as Libera Chat (or Freenode, up until several weeks ago), which use a particular model that does not, at first glance, appear to utilize federation. After all, everyone types “irc.libera.chat” into their client and they all end up on the same network and in the same namespace. However, this domain name is backed by a round-robin resolver which will connect you to any of several dozen servers, which are connected to each other¹ and exchange messages on behalf of the users who reside on each. This is why we call them IRC networks — each is composed of a network of servers that work together.
-
-=> https://netsplit.de/servers/?net=Libera.Chat A list of Libera Chat's servers
-
-But why can’t I send messages to users on OFTC from my Libera Chat session? Well, IRC networks are federated, but they are typically a closed federation, such that each network forms a discrete graph of servers, not interconnected with any of the others. In ActivityPub terms, imagine a version of Mastodon where, instead of automatically federating with new instances, server operators whitelisted each one, forming a closed graph of connected instances. Organize these servers under a single named entity (“Mastonet” or something), and the result is an “ActivityPub network” which operates in the same sense as a typical “IRC network”.
-
-In contrast to Mastodon’s open federation, allowing any server to peer with any others without prior agreement between their operators, most IRC networks are closed. The network’s servers may have independent operators, but they operate together under a common agreement, rather than the laissez-faire approach typical of² ActivityPub servers. The exact organizational and governance models vary, but many of these networks have discrete teams of staff which serve as moderators³, often unrelated to the people responsible for the servers. The social system can be designed independently of the technology.
-
-Among IRC networks, there are degrees of openness. Libera Chat, the largest network, is run by a single governing organization, using servers donated by (and in the possession of) independent sponsors. Many smaller networks are run on as few as one server, and some larger networks (particularly older ones) are run by many independent operators acting like more of a cooperative. EFnet, the oldest network, is run in this manner — you can even apply to become an operator yourself.
-
-We can see from this that the idea of federation is flexible, allowing us to build a variety of social and operational structures. There’s no single right answer — approaches like IRC’s balance many different benefits and drawbacks, such as trading a reduced level of user mobility for a stronger approach to moderation and abuse reduction, while simultaneously enjoying the cost and scalability benefits of a federated design. Other federations, like Matrix, email, and Usenet, have their own set of tradeoffs. What unifies them is the ability to scale to a large size without expensive infrastructure, under the social models which best suit their users' needs, without a centralizing capital motive.
-
-¹ Each server is not necessarily connected to each other server, by the way. Messages can be relayed from one server to another repeatedly to reach the intended destination. This provides IRC with a greater degree of scalability when compared to ActivityPub, where each server must communicate directly with the servers whose users it needs to reach. It also makes IRC more vulnerable to outages partitioning the network; we call these incidents “netsplits”.
-² Typical, but not universal.
-³ There are two classes of moderators on IRC: oppers and ops. The former is responsible for the network, and mainly concerns themselves with matters of spam, user registration, settling disputes, and supporting ops. The ops are responsible for specific channels (spaces for discussion) and can define and enforce further rules at their discretion, within any limits imposed by the host network.
diff --git a/content/blog/How-does-IRC-federate.md b/content/blog/How-does-IRC-federate.md
@@ -1,7 +1,6 @@
---
title: How does IRC's federation model compare to ActivityPub?
date: 2021-07-03
-outputs: [html, gemtext]
---
Today's federated revolution is led by ActivityPub, leading to the rise of
diff --git a/content/blog/How-to-design-a-new-programming-language.gmi b/content/blog/How-to-design-a-new-programming-language.gmi
@@ -1,19 +0,0 @@
-There is a long, difficult road from vague, pie-in-the-sky ideas about what would be cool to have in a new programming language, to a robust, self-consistent, practical implementation of those ideas. Designing and implementing a new programming language from scratch is one of the most challenging tasks a programmer can undertake.
-
-Note: this post is targeted at motivated programmers who want to make a serious attempt at designing a useful programming language. If you just want to make a language as a fun side project, then you can totally just wing it. Taking on an unserious project of that nature is also a good way to develop some expertise which will be useful for a serious project later on.
-
-Let’s set the scene. You already know a few programming languages, and you know what you like and dislike about them — these are your influences. You have some cool novel language design ideas as well. A good first step from here is to dream up some pseudocode, putting some of your ideas to paper, so you can get an idea of what it would actually feel like to write or read code in this hypothetical language. Perhaps a short write-up or a list of goals and ideas is also in order. Circulate these among your confidants for discussion and feedback.
-
-Ideas need to be proven in the forge of implementations, and the next step is to write a compiler (or interpreter — everything in this article applies equally to them). We’ll call this the sacrificial implementation, because you should be prepared to throw it away later. Its purpose is to prove that your design ideas work and can be implemented efficiently, but not to be the production-ready implementation of your new language. It’s a tool to help you refine your language design.
-
-To this end, I would suggest using a parser generator like yacc to create your parser, even if you’d prefer to ultimately use a different design (e.g. recursive descent). The ability to quickly make changes to your grammar, and the side-effect of having a formal grammar written as you work, are both valuable to have at this stage of development. Being prepared to throw out the rest of the compiler is helpful because, due to the inherent difficulty of designing and implementing a programming language at the same time, your first implementation will probably be shit. You don’t know what the language will look like, you’ll make assumptions that you have to undo later, and it’ll undergo dozens of refactorings. It’s gonna suck.
-
-However, shit as it may be, it will have done important work in validating your ideas and refining your design. I would recommend that your next step is to start working on a formal specification of the language (something that I believe all languages should have). You’ve proven what works, and writing it up formally is a good way to finalize the ideas and address the edge cases. Gather a group of interested early adopters, contributors, and subject matter experts (e.g. compiler experts who work with similar languages), and hold discussions on the specification as you work.
-
-This is also a good time to start working on your second implementation. At this point, you will have a good grasp on the overall compiler design, the flaws from your original implementation, and better skills as a compiler programmer. Working on your second compiler and your specification at the same time can help, as each endeavour informs the other — a particularly difficult detail to implement could lead to a simplification in the spec, and an under-specified detail getting shored up could lead to a more robust implementation.
-
-Don’t get carried away — keep this new compiler simple and small. Don’t go crazy on nice-to-have features like linters and formatters, an exhaustive test suite, detailed error messages, a sophisticated optimizer, and so on. You want it to implement the specification as simply as possible, so that you can use it for the next step: the hosted compiler. You need to write a third implementation, using your own language to compile itself.
-
-The second compiler, which I hope you wrote in C, is now the bootstrap compiler. I recommend keeping it up-to-date with the specification and maintaining it perpetually as a convenient path to bootstrap your toolchain from scratch (looking at you, Rust). But it’s not going to be the final implementation: any self-respecting general-purpose programming language is implemented in itself. The next, and final step, is to implement your language for a third time.
-
-At this point, you will have refined and proven your language design. You will have developed and applied compiler programming skills. You will have a robust implementation for a complete and self-consistent programming language, developed carefully and with the benefit of hindsight. Your future community will thank you for the care and time you put into this work, as your language design and implementation sets the ceiling on the quality of programs written in it.
diff --git a/content/blog/How-to-design-a-new-programming-language.md b/content/blog/How-to-design-a-new-programming-language.md
@@ -1,7 +1,6 @@
---
title: How to design a new programming language from scratch
date: 2020-12-25
-outputs: [html, gemtext]
---
There is a long, difficult road from vague, pie-in-the-sky ideas about what
diff --git a/content/blog/How-to-write-release-notes.gmi b/content/blog/How-to-write-release-notes.gmi
@@ -1,79 +0,0 @@
-Release notes are a concept most of us are familiar with. When a new software release is prepared, the release notes tell you what changed, so you understand what you can expect and how to prepare for the update. They are also occasionally used to facilitate conversations:
-
-=> https://xkcd.com/2010 xkcd #2010: Update notes
-
-Many of the people tasked with writing release notes have never found themselves on that side of the screen before. If that describes you, I would like to offer some advice on how to nail it. Note that this mostly applies to free and open source software, which is the only kind of software which is valid.
-
-So, it’s release day, and you’re excited about all of the cool new features you’ve added in this release. I know the feeling! Your first order of business, however, is to direct that excitement into the blog or mailing list post announcing the release, rather than into the release notes. When I read the release notes, the first thing I need answered is: “what do I need to do when I upgrade?” You should summarize the breaking changes upfront, and what steps the user will need to take in order to address them. After this, you may follow up with a short list of the flagship improvements which are included in this release. Keep it short — remember that we’re not advertising the release, but facilitating the user’s upgrade. This is a clerical document.
-
-That said, you do have a good opportunity to add a small amount of faffery after this. Some projects say “$project version $X includes $Y changes from $Z contributors”. The detailed changelog should follow, including every change which shipped in the release. This is what users are going to scan to see if that one bug which has been bothering them was addressed in this version. If you have good git discipline, you can take advantage of git shortlog to automatically generate a summary of the changes.
-
-=> https://drewdevault.com/2019/02/25/Using-git-with-discipline.html Previously: Tips for a disciplined git workflow
-
-Once you’ve prepared this document, where should you put it? In my opinion, there’s only one appropriate place for it: an annotated git tag. I don’t like “CHANGELOG” files and I definitely don’t like GitHub releases. If you add “-a” to your “git tag” command, git will fire up an editor and you can fill in your changelog just like you write your git commit messages. This associates your changelog with the git data it describes, and automatically distributes it to all users of the git repository. Most web services which host git repositories will display it on their UI as well. It’s also written in plaintext, which conveniently prevents you from being too extra with your release notes — no images or videos or such.
-
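-Concretely, the workflow looks something like this (the version numbers are made up):
-
-```Tagging a release with an annotated tag
-git shortlog v1.1.0.. >notes.txt       # summarize changes since the previous tag
-git tag -a v1.2.0                      # git opens your editor for the release notes
-git push --follow-tags origin master   # publish the tag alongside the branch
-git show v1.2.0                        # review the annotated tag afterwards
-```
-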
-I have written a small tool which will make all of this easier for you to do: “semver”. This automatically determines the next release number, optionally runs a custom script to automate any release bookkeeping you need to do (e.g. updating the version in your Makefile), then generates the git shortlog and plops you into an editor to flesh out the release notes. I wrote more about this tool in "How to fuck up software releases".
-
-=> https://git.sr.ht/~sircmpwn/dotfiles/tree/master/bin/semver The "semver" script
-=> https://drewdevault.com/2019/10/12/how-to-fuck-up-releases.html Previously: How to fuck up software releases
-
-I hope this advice helps you improve your release notes! Happy shipping.
-
-P.S. Here’s an example of a changelog which follows this advice:
-
-```
-wlroots 0.12.0 includes the following breaking changes:
-
-# New release key
-
-The PGP key used to sign this release has changed to
-34FF9526CFEF0E97A340E2E40FDE7BE0E88F5E48. A proof of legitimacy signed with the
-previous key is available here:
-
-https://github.com/swaywm/wlroots/issues/2462#issuecomment-723578521
-
-# render/gles2: remove gles2_procs global (#2351)
-
-The wlr_gles2_texture_from_* family of functions are no longer public API.
-
-# output: fix blurred hw cursors with fractional scaling (#2107)
-
-For backends: wlr_output_impl.set_cursor now takes a float "scale" instead of an
-int32_t.
-
-# Introduce wlr_output_event_commit (#2315)
-
-The wlr_output.events.commit event now has a data argument of type
-struct wlr_output_event_commit * instead of struct wlr_output *.
-
-
-Antonin Décimo (3):
- Fix typos
- Fix incorrect format parameters
- xwayland: free server in error path
-
-Isaac Freund (6):
- xdg-shell: split last-acked and current state
- layer-shell: add for_each_popup
- layer-shell: error on 0 dimension without anchors
- xdg_positioner: remove unused field
- wlr_drag: remove unused point_destroy field
- xwayland: remove unused listener
-
-Roman Gilg (3):
- output-management-v1: add head identifying events
- output-management-v1: send head identifying information
- output-management-v1: send complete head state on enable change
-
-Ryan Walklin (4):
- Implement logind session SetType method to change session type to wayland
- Also set XDG_SESSION_TYPE
- Don't set XDG_SESSION_TYPE unless logind SetType succeeds
- Quieten failure to set login session type
-
-Scott Moreau (2):
- xwm: Set _NET_WM_STATE_FOCUSED property for the focused surface
- foreign toplevel: Fix whitespace error
-```
-
-Note: I borrowed the real wlroots 0.12.0 release notes and trimmed them down for illustrative purposes. The actual release included a lot more changes and does not follow all of my recommendations.
diff --git a/content/blog/How-to-write-release-notes.md b/content/blog/How-to-write-release-notes.md
@@ -1,7 +1,6 @@
---
title: How to write release notes
date: 2021-05-19
-outputs: [html, gemtext]
---
Release notes are a concept most of us are familiar with. When a new software
diff --git a/content/blog/Im-handing-wlroots-and-sway-to-Simon.gmi b/content/blog/Im-handing-wlroots-and-sway-to-Simon.gmi
@@ -1,7 +0,0 @@
-Over the past several months, I've been gradually weaning down my role in both projects, and as a contributor to Wayland in general. I feel that I've already accomplished everything I set out to do with Wayland — and more! I have been happily using sway as my daily driver for well over a year with no complaints or conspicuously absent features. For me, there's little reason to stay involved. This will likely come as no surprise to many who've kept their ear to the ground in these communities.
-
-Simon has been an important co-maintainer on wlroots and sway for several years, and also serves as a maintainer for Wayland itself, and Weston. I trust him with these projects, and he's been doing a stellar job so far — no real change in his work is necessary for this hand-off. Simon works for SourceHut full-time and his compensation covers his role in the Wayland community, so you can trust that the health of the project is unaffected, too.
-
-There are still plenty of great things to come from these projects without me. Many improvements are underway and more are planned for the future. Don't worry: sway and wlroots have already demonstrated that they work quite well without my active involvement.
-
-Good luck, Simon, and thanks for all of your hard work! I'm proud of you!
diff --git a/content/blog/Im-handing-wlroots-and-sway-to-Simon.md b/content/blog/Im-handing-wlroots-and-sway-to-Simon.md
@@ -1,7 +1,6 @@
---
title: I'm handing over maintenance of wlroots and sway to Simon Ser
date: 2020-10-23
-outputs: [html, gemtext]
---
Over the past several months, I've been gradually weaning down my role in both
diff --git a/content/blog/In-praise-of-Postgres.gmi b/content/blog/In-praise-of-Postgres.gmi
@@ -1,53 +0,0 @@
-After writing Praise for Alpine Linux, I have decided to continue writing more articles in praise of good software. Today, I’d like to tell you a bit about PostgreSQL.
-
-Many people don’t understand how old Postgres truly is: the first release¹ was in July of 1996. It used this logo:
-
-=> https://l.sr.ht/Ye7j.jpg A “logo” which depicts the word “PostgreSQL” in a 3D chrome font bursting through a brick wall from space. No, seriously.
-
-After 25 years of persistence, and a better logo design, Postgres stands today as one of the most significant pillars of profound achievement in free software, alongside the likes of Linux and Firefox. PostgreSQL has taken a complex problem and solved it to such an effective degree that all of its competitors are essentially obsolete, perhaps with the exception of SQLite.
-
-For a start, Postgres is simply an incredibly powerful, robust, and reliable piece of software, providing the best implementation of SQL.² It provides a great deal of insight into its own behavior, and allows the experienced operator to fine-tune it to achieve optimal performance. It supports a broad set of SQL features and data types, with which I have always been able to efficiently store and retrieve my data. SQL is usually the #1 bottleneck in web applications, and Postgres does an excellent job of providing you with the tools necessary to manage that bottleneck.
-
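-For one small taste of that introspection, EXPLAIN will show you exactly how a query is planned and executed (the database and table here are invented):
-
-```Asking Postgres to explain a query (hypothetical database and table)
-psql mydb -c "EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM users WHERE email = 'me@example.org';"
-```
-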
-Those tools are also exceptionally well-documented. The PostgreSQL documentation is incredibly in-depth. It puts the rest of us to shame, really. Not only do they have comprehensive reference documentation which exhaustively describes every feature, but also vast amounts of prose which explains the internal design, architecture, and operation of Postgres, plus detailed plain-English explanations of how various high-level tasks can be accomplished, complete with the necessary background to understand those tasks. There’s essentially no reason to ever read a blog post or Stack Overflow answer about how to do something with Postgres — the official docs cover every aspect of the system in great depth.
-
-=> https://www.postgresql.org/docs/current/index.html PostgreSQL's latest documentation
-
-The project is maintained by a highly disciplined team of engineers. I have complete confidence in their abilities to handle matters of performance, regression testing, and security. They publish meticulously detailed weekly development updates, as well as thorough release notes that equip you with sufficient knowledge to confidently run updates on your deployment. Their git discipline is also legendary — here’s the latest commit at the time of writing:
-
-```
-postgres_fdw: Fix issues with generated columns in foreign tables.
-
-postgres_fdw imported generated columns from the remote tables as plain
-columns, and caused failures like "ERROR: cannot insert a non-DEFAULT
-value into column "foo"" when inserting into the foreign tables, as it
-tried to insert values into the generated columns. To fix, we do the
-following under the assumption that generated columns in a postgres_fdw
-foreign table are defined so that they represent generated columns in
-the underlying remote table:
-
-* Send DEFAULT for the generated columns to the foreign server on insert
- or update, not generated column values computed on the local server.
-* Add to postgresImportForeignSchema() an option "import_generated" to
- include column generated expressions in the definitions of foreign
- tables imported from a foreign server. The option is true by default.
-
-The assumption seems reasonable, because that would make a query of the
-postgres_fdw foreign table return values for the generated columns that
-are consistent with the generated expression.
-
-While here, fix another issue in postgresImportForeignSchema(): it tried
-to include column generated expressions as column default expressions in
-the foreign table definitions when the import_default option was enabled.
-
-Per bug #16631 from Daniel Cherniy. Back-patch to v12 where generated
-columns were added.
-
-Discussion: https://postgr.es/m/16631-e929fe9db0ffc7cf%40postgresql.org
-```
-
-They’re all like this.
-
-Ultimately, PostgreSQL is a technically complex program which requires an experienced and skilled operator to be effective. Learning to use it is a costly investment, even if it pays handsomely. Though Postgres has occasionally frustrated or confused me, on the whole my feelings for it are overwhelmingly positive. It’s an incredibly well-made product and its enormous and still-growing successes are very well-earned. When I think of projects which have made the most significant impacts on the free software ecosystem, and on the world at large, PostgreSQL has a place on that list.
-
-¹ The first release of Postgre“SQL”. Its lineage can be traced further back.
-² No qualifiers. It’s straight-up the best implementation of SQL.
diff --git a/content/blog/In-praise-of-Postgres.md b/content/blog/In-praise-of-Postgres.md
@@ -1,7 +1,6 @@
---
title: In praise of PostgreSQL
date: 2021-08-05
-outputs: [html, gemtext]
---
After writing [Praise for Alpine Linux][0], I have decided to continue writing
diff --git a/content/blog/Is-GitHub-a-derivative-work.gmi b/content/blog/Is-GitHub-a-derivative-work.gmi
@@ -1,27 +0,0 @@
-GitHub recently announced Copilot, a tool which uses machine learning to provide code suggestions, inciting no small degree of controversy. One particular facet of the ensuing discussion piques my curiosity: what happens if the model was trained using software licensed with the GNU General Public License?
-
-=> https://copilot.github.com GitHub Copilot
-
-Disclaimer: I am the founder of a company which competes with GitHub.
-
-The GPL is among a family of licenses considered “copyleft”, which are characterized by their “viral” nature. In particular, the trait common to copyleft works is the requirement that “derivative works” be published under the same terms as the original copyleft license. Some weak copyleft licenses, like the Mozilla Public License, apply only to changes made to specific files from the original code. Stronger licenses like the GPL family affect the broader work that any GPL’d code has been incorporated into.
-
-A recent tweet by @mitsuhiko notes that Copilot can be made to produce, verbatim, the famous fast inverse square root function from Quake III Arena: a codebase distributed under the GNU GPL 2.0 license. This raises an interesting legal question: is the work produced by a machine learning system, or even the machine learning system itself, a derivative work of the inputs to the model? Another tweet suggests that, if the answer is “no”, GitHub Copilot can be used as a means of washing the GPL off of code you want to use without obeying its license. But, what if the answer is “yes”?
-
-=> https://twitter.com/mitsuhiko/status/1410886329924194309 @mitsuhiko's tweet
-=> https://twitter.com/eevee/status/1410037309848752128 @eevee's tweet
-
-I won’t take a position on this question¹, but I will point out something interesting: if the answer is “yes, machine learning models create derivative works of their inputs”, then GitHub may itself now be considered a derivative work of copyleft software. Consider this statement from GitHub’s blog post on the subject:
-
-> During GitHub Copilot’s early development, nearly 300 employees used it in their daily work as part of an internal trial.
-— Albert Ziegler: A first look at rote learning in GitHub Copilot suggestions
-
-If 300 GitHub employees used Copilot as part of their daily workflow, they are likely to have incorporated the output of Copilot into nearly every software property of GitHub, which provides network services to users. If the model was trained on software using the GNU Affero General Public License (AGPL), and the use of this model created a derivative work, this may entitle all GitHub users to receive a copy of GitHub’s source code under the terms of the AGPL, effectively forcing GitHub to become an open source project. I’m normally against GPL enforcement by means of pulling the rug out from underneath someone who made an honest mistake², but in this case it would certainly be a fascinating case of comeuppance.
-
-Following the Copilot announcement, many of the ensuing discussions hinted to me at a broader divide in the technology community with respect to machine learning. I’ve seen many discussions having to wrestle with philosophical differences between participants, who give different answers to more fundamental questions regarding the ethics of machine learning: what rights should be, and are, afforded to the owners of the content which is incorporated into training data for machine learning? If I want to publish a work which I don’t want to be incorporated into a model, or which, if used for a model, would entitle the public to access to that model, could I? Ought I be allowed to? What if the work being used is my personal information, collected without my knowledge or consent? What if the information is used against me, for example in making lending decisions? What if it’s used against society’s interests at large?
-
-The differences of opinion I’ve seen in the discussions born from this announcement seem to suggest a substantial divide over machine learning, which the tech community may have yet to address, or even understand the depth of. I predict that GitHub Copilot will mark one of several inciting events which start to rub some of the glamour off of machine learning technology and get us thinking about the ethical questions it presents.³
-
-¹ Though I definitely have one 😉
-² I support GPL enforcement, but I think we would be wise to equip users with a clear understanding of what our license entails, so that those mistakes are less likely to happen in the first place.
-³ I also predict that capitalism will do that thing it normally does and sweep all of the ethics under the rug in any scenario in which addressing the problem would call their line of business into doubt, ultimately leaving the dilemma uncomfortably unresolved as most of us realize it’s a dodgy ethical situation while simultaneously being paid to not think about it too hard.
diff --git a/content/blog/Is-GitHub-a-derivative-work.md b/content/blog/Is-GitHub-a-derivative-work.md
@@ -1,7 +1,6 @@
---
title: Is GitHub a derivative work of GPL'd software?
date: 2021-07-04
-outputs: [html, gemtext]
---
GitHub recently announced a tool called [Copilot][0], a tool which uses machine
diff --git a/content/blog/It-takes-a-village.gmi b/content/blog/It-takes-a-village.gmi
@@ -1,1008 +0,0 @@
-As a prolific maintainer of several dozen FOSS projects, I’m often asked how I can get so much done, being just one person. The answer is: I’m not just one person. I have enjoyed the help of thousands of talented people who have contributed to these works. Without them, none of the projects I work on would be successful.
-
-I’d like to take a moment to recognize and thank all of the people who have participated in these endeavours. If you’ve enjoyed any of the projects I’ve worked on, you owe thanks to some of these wonderful people. The following is an incomplete list of authors who have contributed to one or more of the projects I have started:
-
-A Mak
-A. M. Joseph
-a3v
-Aaron Bieber
-Aaron Holmes
-Aaron Ouellette
-Abdelhakim Qbaich
-absrd
-Ace Eldeib
-Adam Kürthy
-Adam Mizerski
-Aditya Mahajan
-Aditya Srivastava
-Adnan Maolood
-Adrusi
-ael-code
-agr
-Aidan Epstein
-Aidan Harris
-Ajay R
-Ajay Raghavan
-Alain Greppin
-Aleksa Sarai
-Aleksander Usov
-Aleksei Bavshin
-Aleksis
-Alessio
-Alex Cordonnier
-Alex Maese
-Alex McGrath
-Alex Roman
-alex wennerberg
-Alex Xu
-Alexander ‘z33ky’ Hirsch
-Alexander Bakker
-Alexander Dzhoganov
-Alexander Harkness
-Alexander Johnson
-Alexander Taylor
-Alexandre Oliveira
-Alexey Yerin
-Aljaz Gantar
-Alynx Zhou
-Alyssa Ross
-Amin Bandali
-amingin
-Amir Yalon
-Ammar Askar
-Ananth Bhaskararaman
-Anders
-Andreas Rammhold
-Andres Erbsen
-Andrew Conrad
-Andrew Jeffery
-Andrew Leap
-Andrey Kuznetsov
-Andri Yngvason
-Andy Dulson
-andyleap
-Anirudh Oppiliappan
-Anjandev Momi
-Anthony Super
-Anton Gusev
-Antonin Décimo
-aouelete
-apt-ghetto
-ARaspiK
-Arav K
-Ariadna Vigo
-Ariadne Conill
-Ariel Costas
-Ariel Popper
-Arkadiusz Hiler
-Armaan Bhojwani
-Armin Preiml
-Armin Weigl
-Arnaud Vallette d’Osia
-Arsen Arsenović
-Art Wild
-Arthur Gautier
-Arto Jonsson
-Arvin Ignaci
-ascent12
-asdfjkluiop
-Asger Hautop Drewsen
-ash lea
-Ashkan Kiani
-Ashton Kemerling
-athrungithub
-Atnanasi
-Aviv Eyal
-ayaka
-azarus
-bacardi55
-barfoo1
-Bart Pelle
-Bart Post
-Bartłomiej Burdukiewicz
-bbielsa
-BearzRobotics
-Ben Boeckel
-Ben Brown
-Ben Burwell
-Ben Challenor
-Ben Cohen
-Ben Fiedler
-Ben Harris
-Benjamin Cheng
-Benjamin Halsted
-Benjamin Lowry
-Benjamin Riefenstahl
-Benoit Gschwind
-berfr
-bilgorajskim
-Bill Doyle
-Birger Schacht
-Bjorn Neergaard
-Björn Esser
-blha303
-bn4t
-Bob Ham
-bobtwinkles
-boos1993
-Bor Grošelj Simić
-boringcactus
-Brandon Dowdy
-BrassyPanache
-Brendan Buono
-Brendon Smith
-Brian Ashworth
-Brian Clemens
-Brian McKenna
-Bruno Pinto
-bschacht
-BTD Master
-buffet
-burrowing-owl
-Byron Torres
-calcdude84se
-Caleb Bassi
-Callum Brown
-Calvin Lee
-Cameron Nemo
-camoz
-Campbell Vertesi
-Cara Salter
-Carlo Abelli
-Cassandra McCarthy
-Cedric Sodhi
-Chang Liu
-Charles E. Lehner
-Charlie Stanton
-Charmander
-chickendude
-chr0me
-Chris Chamberlain
-Chris Kinniburgh
-Chris Morgan
-Chris Morris
-Chris Vittal
-Chris Waldon
-Chris Young
-Christoph Gysin
-Christopher M. Riedl
-Christopher Vittal
-chtison
-Clar Charr
-Clayton Craft
-Clément Joly
-cnt0
-coderkun
-Cole Helbling
-Cole Mickens
-columbarius
-comex
-Connor Edwards
-Connor Kuehl
-Conrad Hoffmann
-Cormac Stephenson
-Cosimo Cecchi
-cra0zy
-crondog
-Cuber
-curiousleo
-Cyril Levis
-Cédric Bonhomme
-Cédric Cabessa
-Cédric Hannotier
-D.B
-dabio
-Dacheng Gao
-Damien Tardy-Panis
-Dan ELKOUBY
-Dan Robertson
-Daniel Bridges
-Daniel De Graaf
-Daniel Eklöf
-Daniel Gröber
-Daniel Kessler
-Daniel Kondor
-Daniel Lockyer
-Daniel Lublin
-Daniel Martí
-Daniel Otero
-Daniel P
-Daniel Sockwell
-Daniel V
-Daniel V.
-Daniel Vidmar
-Daniel Xu
-Daniil
-Danilo
-Danilo Spinella
-Danny Bautista
-Dark Rift
-Darksun
-Dave Cottlehuber
-David Arnold
-David Blajda
-David Eklov
-David Florness
-David Hurst
-David Kraeutmann
-David Krauser
-David Zero
-David96
-db
-dbandstra
-dece
-delthas
-Denis Doria
-Denis Laxalde
-Dennis Fischer
-Dennis Schridde
-Derek Smith
-Devin J. Pohly
-Devon Johnson
-Dhruvin Gandhi
-Di Ma
-Dian M Fay
-Diane
-Diederik de Haas
-Dillen Meijboom
-Dimitris Triantafyllidis
-Dizigma
-Dmitri Kourennyi
-Dmitry Borodaenko
-Dmitry Kalinkin
-dogwatch
-Dominik Honnef
-Dominique Martinet
-Donnie West
-Dorota Czaplejewicz
-dudemanguy
-Dudemanguy911
-Duncaen
-Dylan Araps
-earnest ma
-Ed Younis
-EdOverflow
-EIREXE
-Ejez
-Ekaterina Vaartis
-Eli Schwartz
-Elias Naur
-Eloi Rivard
-elumbella
-Elyes HAOUAS
-Elyesa
-emersion
-Emerson Ferreira
-Emmanuel Gil Peyrot
-Enerccio
-Erazem Kokot
-Eric Bower
-Eric Drechsel
-Eric Engestrom
-Eric Molitor
-Erik Reider
-ernierasta
-espkk
-Ethan Lee
-Euan Torano
-EuAndreh
-Evan Allrich
-Evan Hanson
-Evan Johnston
-Evan Relf
-Eyal Sawady
-Ezra
-Fabian Geiselhart
-Fabio Alessandro Locati
-Falke Carlsen
-Fazlul Shahriar
-Felipe Cardoso Resende
-Fenveireth
-Ferdinand Bachmann
-FICTURE7
-Filip Sandborg
-finley
-Flakebi
-Florent de Lamotte
-florian.weigelt
-Francesco Gazzetta
-Francis Dinh
-Frank Smit
-Franklin “Snaipe” Mathieu
-Frantisek Fladung
-François Kooman
-Frode Aannevik
-frsfnrrg
-ftilde
-fwsmit
-Gabriel Augendre
-Gabriel Féron
-gabrielpatzleiner
-Galen Abell
-Garrison Taylor
-Gauvain ‘GovanifY’ Roussel-Tarbouriech
-Gaël PORTAY
-gbear605
-Genki Sky
-Geoff Greer
-Geoffrey Casper
-George Craggs
-George Hilliard
-ggrote
-Gianluca Arbezzano
-gilbus
-gildarts
-Giuseppe Lumia
-Gokberk Yaltirakli
-Graham Christensen
-Greg Anders
-Greg Depoire–Ferrer
-Greg Farough
-Greg Hewgill
-Greg V
-Gregory Anders
-Gregory Mullen
-grossws
-Grégoire Delattre
-Guido Cella
-Guido Günther
-Guillaume Brogi
-Guillaume J. Charmes
-György Kurucz
-Gökberk Yaltıraklı
-Götz Christ
-Haelwenn (lanodan) Monnier
-Half-Shot
-Hans Brigman
-Haowen Liu
-Harish Krupo
-Harry Jeffery
-Heghedus Razvan
-Heiko Carrasco
-heitor
-Henrik Riomar
-Honza Pokorny
-Hoolean
-Hristo Venev
-Hubert Hirtz
-hugbubby
-Hugo Osvaldo Barrera
-Humm
-Hummer12007
-Ian Fan
-Ian Huang
-Ian Moody
-Ignas Kiela
-Igor Sviatniy
-Ihor Kalnytskyi
-Ilia Bozhinov
-Ilia Mirkin
-Ilja Kocken
-Ilya Lukyanov
-Ilya Trukhanov
-inwit
-io mintz
-Isaac Freund
-Issam E. Maghni
-Issam Maghni
-István Donkó
-Ivan Chebykin
-Ivan Fedotov
-Ivan Habunek
-Ivan Mironov
-Ivan Molodetskikh
-Ivan Tham
-Ivoah
-ixru
-j-n-f
-Jaanus Torp
-Jack Byrne
-jack gleeson
-Jacob Young
-jajo-11
-Jake Bauer
-Jakub Kopański
-Jakub Kądziołka
-Jamelly Ferreira
-James D. Marble
-James Edwards-Jones
-James Mills
-James Murphy
-James Pond
-James Rowe
-James Turner
-Jan Beich
-Jan Chren
-Jan Palus
-Jan Pokorný
-Jan Staněk
-JanUlrich
-Jared Baldridge
-Jarkko Oranen
-Jasen Borisov
-Jason Francis
-Jason Miller
-Jason Nader
-Jason Phan
-Jason Swank
-jasperro
-Jayce Fayne
-jdiez17
-Jeff Kaufman
-Jeff Martin
-Jeff Peeler
-Jeffas
-Jelle Besseling
-Jente Hidskes
-Jeremy Hofer
-Jerzi Kaminsky
-JerziKaminsky
-Jesin
-jhalmen
-Jiri Vlasak
-jman
-Joe Jenne
-johalun
-Johan Bjäreholt
-Johannes Lundberg
-Johannes Schramm
-John Axel Eriksson
-John Chadwick
-John Chen
-John Mako
-john muhl
-Jon Higgs
-Jonas Große Sundrup
-Jonas Hohmann
-Jonas Kalderstam
-Jonas Karlsson
-Jonas Mueller
-Jonas Platte
-Jonathan Bartlett
-Jonathan Buch
-Jonathan Halmen
-Jonathan Schleußer
-JonnyMako
-Joona Romppanen
-Joram Schrijver
-Jorge Maldonado Ventura
-Jose Diez
-Josef Gajdusek
-Josh Holland
-Josh Junon
-Josh Shone
-Joshua Ashton
-Josip Janzic
-José Expósito
-José Mota
-JR Boyens
-Juan Picca
-Julian P Samaroo
-Julian Samaroo
-Julien Moutinho
-Julien Olivain
-Julien Savard
-Julio Galvan
-Julius Michaelis
-Justin Kelly
-Justin Mayhew
-Justin Nesselrotte
-Justus Rossmeier
-Jøhannes Lippmann
-k1nkreet
-Kacper Kołodziej
-Kaleb Elwert
-kaltinril
-Kalyan Sriram
-Karl Rieländer
-Karmanyaah Malhotra
-Karol Kosek
-Kenny Levinsen
-kevin
-Kevin Hamacher
-Kevin Kuehler
-Kevin Sangeelee
-Kiril Vladimiroff
-Kirill Chibisov
-Kirill Primak
-Kiëd Llaentenn
-KoffeinFlummi
-Koni Marti
-Konrad Beckmann
-Konstantin Kharlamov
-Konstantin Pospelov
-Konstantinos Feretos
-kst
-Kurt Kartaltepe
-Kurt Kremitzki
-kushal
-Kévin Le Gouguec
-Lane Surface
-Langston Barrett
-Lars Hagström
-Laurent Bonnans
-Lauri
-lbonn
-Leon Henrik Plickat
-Leszek Cimała
-Liam Cottam
-Linus Heckemann
-Lio Novelli
-ljedrz
-Louis Taylor
-Lubomir Rintel
-Luca Weiss
-Lucas F. Souza
-Lucas M. Dutra
-Ludovic Chabant
-Ludvig Michaelsson
-Lukas Lihotzki
-Lukas Märdian
-Lukas Wedeking
-Lukas Werling
-Luke Drummond
-Luminarys
-Luna Nieves
-Lyle Hanson
-Lyndsy Simon
-Lyudmil Angelov
-M Stoeckl
-M. David Bennett
-Mack Straight
-madblobfish
-manio143
-Manuel Argüelles
-Manuel Mendez
-Manuel Stoeckl
-Marc Grondin
-Marcel Hellwig
-Marcin Cieślak
-Marco Sirabella
-Marian Dziubiak
-Marien Zwart
-Marius Orcsik
-Mariusz Bialonczyk
-Mark Dain
-Mark Stosberg
-Markus Ongyerth
-MarkusVolk
-Marten Ringwelski
-Martijn Braam
-Martin Dørum
-Martin Hafskjold Thoresen
-Martin Kalchev
-Martin Michlmayr
-Martin Vahlensieck
-Matias Lang
-Matrefeytontias
-matrefeytontias
-Matt Coffin
-Matt Critchlow
-Matt Keeter
-Matt Singletary
-Matt Snider
-Matthew Jorgensen
-Matthias Beyer
-Matthias Totschnig
-Mattias Eriksson
-Matías Lang
-Max Bruckner
-Max Leiter
-Maxime “pep” Buquet
-mbays
-MC42
-meak
-Mehdi Sadeghi
-Mendel E
-Merlin Büge
-Miccah Castorina
-Michael Anckaert
-Michael Aquilina
-Michael Forney
-Michael Struwe
-Michael Vetter
-Michael Weiser
-Michael Weiss
-Michaël Defferrard
-Michał Winiarski
-Michel Ganguin
-Michele Finotto
-Michele Sorcinelli
-Mihai Coman
-Mikkel Oscar Lyderik
-Mikkel Oscar Lyderik Larsen
-Milkey Mouse
-minus
-Mitchell Kutchuk
-mliszcz
-mntmn
-mnussbaum
-Moelf
-morganamilo
-Moritz Buhl
-Mrmaxmeier
-mteyssier
-Mukundan314
-muradm
-murray
-Mustafa Abdul-Kader
-mwenzkowski
-myfreeweb
-Mykola Orliuk
-Mykyta Holubakha
-n3rdopolis
-Naglis Jonaitis
-Nate Dobbins
-Nate Guerin
-Nate Ijams
-Nate Symer
-Nathan Rossi
-Nedzad Hrnjica
-NeKit
-nerdopolis
-ngenisis
-Nguyễn Gia Phong
-Niccolò Scatena
-Nicholas Bering
-Nick Diego Yamane
-Nick Paladino
-Nick White
-Nicklas Warming Jacobsen
-Nicolai Dagestad
-Nicolas Braud-Santoni
-Nicolas Cornu
-Nicolas Reed
-Nicolas Schodet
-Nicolas Werner
-NightFeather
-Nihil Pointer
-Niklas Schulze
-Nils ANDRÉ-CHANG
-Nils Schulte
-Nixon Enraght-Moony
-Noah Altunian
-Noah Kleiner
-Noah Loomans
-Noah Pederson
-Noam Preil
-Noelle Leigh
-NokiDev
-Nolan Prescott
-Nomeji
-Novalinium
-novenary
-np511
-nrechn
-NSDex
-Nuew
-nyorain
-nytpu
-Nícolas F. R. A. Prado
-oharaandrew314
-Oleg Kuznetsov
-Oliver Leaver-Smith
-oliver-giersch
-Olivier Fourdan
-Ondřej Fiala
-Orestis Floros
-Oscar Cowdery Lack
-ossi.ahosalmi
-Owen Johnson
-Paco Esteban
-Parasrah
-Pascal Pascher
-Patrick Sauter
-Patrick Steinhardt
-Paul Fenwick
-Paul Ouellette
-Paul Riou
-Paul Spooren
-Paul W. Rankin
-Paul Wise
-Pedro Côrte-Real
-Pedro L. Ramos
-Pedro Lucas Porcellis
-Peroalane
-Peter Grayson
-Peter Lamby
-Peter Rice
-Peter Sanchez
-Phil Rukin
-Philip K
-Philip Woelfel
-Philipe Goulet
-Philipp Ludwig
-Philipp Riegger
-Philippe Pepiot
-Philz69
-Pi-Yueh Chuang
-Pierre-Albéric TROUPLIN
-Piper McCorkle
-pixelherodev
-PlusMinus0
-PoroCYon
-ppascher
-Pranjal Kole
-ProgAndy
-progandy
-Przemyslaw Pawelczyk
-psykose
-punkkeks
-pyxel
-Quantum
-Quentin Carbonneaux
-Quentin Glidic
-Quentin Rameau
-R Chowdhury
-r-c-f
-Rabit
-Rachel K
-Rafael Castillo
-rage 311
-Ragnar Groot Koerkamp
-Ragnis Armus
-Rahiel Kasim
-Raman Varabets
-Ranieri Althoff
-Ray Ganardi
-Raymond E. Pasco
-René Wagner
-Reto Brunner
-Rex Hackbro
-Ricardo Wurmus
-Richard Bradfield
-Rick Cogley
-rinpatch
-Robert Günzler
-Robert Johnstone
-Robert Kubosz
-Robert Sacks
-Robert Vollmert
-Robin Jarry
-Robin Kanters
-Robin Krahl
-Robin Opletal
-Robinhuett
-robotanarchy
-Rodrigo Lourenço
-Rohan Kumar
-Roman Gilg
-ROMB
-Ronan Pigott
-ronys
-Roosembert Palacios
-roshal
-Roshless
-Ross L
-Ross Schulman
-Rostislav Pehlivanov
-rothair
-RoughB Tier0
-Rouven Czerwinski
-rpigott
-Rune Morling
-russ morris
-Ryan Chan
-Ryan Dwyer
-Ryan Farley
-Ryan Gonzalez
-Ryan Walklin
-Rys Sommefeldt
-Réouven Assouly
-S. Christoffer Eliesen
-s0r00t
-salkin-mada
-Sam Newbold
-Sam Whited
-SatowTakeshi
-Sauyon Lee
-Scoopta
-Scott Anderson
-Scott Colby
-Scott Leggett
-Scott Moreau
-Scott O’Malley
-Scott Stevenson
-sdilts
-Sebastian
-Sebastian Krzyszkowiak
-Sebastian LaVine
-Sebastian Noack
-Sebastian Parborg
-Seferan
-Sergeeeek
-Sergei Dolgov
-Sergi Granell
-sergio
-Seth Barberee
-Seán C McCord
-sghctoma
-Shaw Vrana
-Sheena Artrip
-Silvan Jegen
-Simon Barth
-Simon Branch
-Simon Ruderich
-Simon Ser
-Simon Zeni
-Siva Mahadevan
-skip-yell
-skuzzymiglet
-Skyler Riske
-Slowpython
-Sol Fisher Romanoff
-Solomon Victorino
-somdoron
-Sorcus
-sourque
-Spencer Michaels
-SpizzyCoder
-sqwishy
-Srivathsan Murali
-Stacy Harper
-Steef Hegeman
-Stefan Rakel
-Stefan Schick
-Stefan Tatschner
-Stefan VanBuren
-Stefan Wagner
-Stefano Ragni
-Stephan Hilb
-Stephane Chauveau
-Stephen Brennan
-Stephen Brown II
-Stephen Gregoratto
-Stephen Paul Weber
-Steve Jahl
-Steve Losh
-Steven Guikal
-Stian Furu Øverbye
-Streetwalrus Einstein
-Stuart Dilts
-Sudipto Mallick
-Sumner Evans
-Syed Amer Gilani
-sykhro
-Tadeo Kondrak
-Taiyu
-taiyu
-taminaru
-Tamir Zahavi-Brunner
-Tancredi Orlando
-Tanguy Fardet
-Tarmack
-Taryn Hill
-tastytea
-tcb
-Teddy Reed
-Tero Koskinen
-Tharre
-Thayne McCombs
-The Depressed Milkman
-TheAvidDev
-TheMachine02
-Theodor Thornhill
-thermitegod
-Thiago Mendes
-thirtythreeforty
-Thomas Bracht Laumann Jespersen
-Thomas Hebb
-Thomas Jespersen
-Thomas Karpiniec
-Thomas Merkel
-Thomas Plaçais
-Thomas Schneider
-Thomas Weißschuh
-Thomas Wouters
-Thorben Günther
-thuck
-Till Hofmann
-Tim Sampson
-Tim Schumacher
-Timidger
-Timmy Douglas
-Timothée Floure
-Ting-Wei Lan
-tiosgz
-toadicus
-Tobi Fuhrimann
-Tobias Blass
-Tobias Langendorf
-Tobias Stoeckmann
-Tobias Wölfel
-Tom Bereknyei
-Tom Lebreux
-Tom Ryder
-Tom Warnke
-tomKPZ
-Tommy Nguyen
-Tomáš Čech
-Tony Crisci
-Torstein Husebø
-Trannie Carter
-Trevor Slocum
-TriggerAu
-Tudor Brindus
-Tudor Roman
-Tuomas Siipola
-tuomas56
-Twan Wolthof
-Tyler Anderson
-Uli Schlachter
-Umar Getagazov
-unlimitedbacon
-unraised
-User Name
-v44r
-Valentin
-Valentin Hăloiu
-Vasilij Schneidermann
-Versus Void
-vexhack
-Vijfhoek
-vil
-vilhalmer
-Vincent Gu
-Vincent Vanlaer
-Vinko Kašljević
-Vitalij
-Vitalij Mikhailov
-Vlad Pănăzan
-Vlad-Stefan Harbuz
-Vyivel
-w1ke
-Wagner Riffel
-wagner riffel
-Wai Hon Law
-wb9688
-wdbw
-Whemoon Jang
-Wieland Hoffmann
-Wiktor Kwapisiewicz
-wil
-Will Daly
-Will Hunt
-willakat
-Willem Sonke
-William Casarin
-William Culhane
-William Durand
-William Moorehouse
-William Wold
-willrandship
-Willy Goiffon
-Wolf480pl
-Wouter van Kesteren
-Xaiier
-xdavidwu
-xPMo
-y0ast
-Yacine Hmito
-yankejustin
-Yasar
-Yash Srivastav
-Yong Joseph Bakos
-Yorick van Pelt
-yuiiio
-yuilib
-Yury Krivopalov
-Yuya Nishihara
-Yábir Benchakhtir
-Yábir García
-Zach DeCook
-Zach Sisco
-Zachary King
-Zandr Martin
-zccrs
-Zetok Zalbavar
-Zie
-Zoltan Kalmar
-Zuzana Svetlikova
-Éloi Rivard
-Érico Rolim
-Štěpán Němec
-наб
-حبيب الامين
-
-Each of these is a distinct person, with their own life and aspirations, who took time out of that life to help build some cool software. I owe everything to these wonderful, talented, dedicated people. Thank you, everyone. Let’s keep up the good work, together.
diff --git a/content/blog/It-takes-a-village.md b/content/blog/It-takes-a-village.md
@@ -1,7 +1,6 @@
---
title: It takes a village
date: 2022-03-14
-outputs: [html, gemtext]
---
As a prolific maintainer of several dozen FOSS projects, I'm often asked how I
diff --git a/content/blog/Kineto-a-gemini-proxy.gmi b/content/blog/Kineto-a-gemini-proxy.gmi
@@ -1,13 +0,0 @@
-Just a quick new software announcement: Kineto, an HTTP to Gemini proxy, is now available.
-
-It's designed to service a single domain, so that you can, for example, make your Gemini site available over HTTP. It supports proxying to external domains, so that outgoing Gemini links work, but it serves the domain it's configured for by default.
-
-More info about Kineto is available here:
-
-=> /kineto.gmi Kineto: an HTTP to Gemini proxy
-
-And a live demo is available to access my Geminispace over HTTP:
-
-=> https://portal.drewdevault.com
-
-It's named after the Contraves-Goerz Kineto Tracking Mount, which is a device used by NASA to watch rockets as they ascend to orbit.
diff --git a/content/blog/Kineto-a-gemini-proxy.md b/content/blog/Kineto-a-gemini-proxy.md
@@ -1,5 +0,0 @@
----
-title: "Kineto: An HTTP to Gemini proxy"
-date: 2020-10-11
-outputs: [gemtext]
----
diff --git a/content/blog/Lichess.gmi b/content/blog/Lichess.gmi
@@ -1,40 +0,0 @@
-Signal-boosting this excellent article from Lichess:
-
-=> https://lichess.org/blog/YF-ZORQAACAA89PI/why-lichess-will-always-be-free. Why Lichess will always be free.
-
-Reformatted as Gemtext for easy reading:
-
-Most “free” websites subsist by selling ads or selling user data. Others do it by putting all the good stuff behind paywalls. Lichess doesn’t do any of that and never will. Almost 6 years ago, Lichess founder Thibault explained why Lichess is free - and what that means. A lot can change in 6 years, but this is one thing that hasn't changed and never will.
-
-=> https://lichess.org/blog/U4skkUQAAEAAhIGz/why-is-lichess-free Why is lichess free?
-
-This is our unbreakable promise to you, our users:
-
-* Lichess will never have ads.
-* Lichess will never sell our users’ data.
-* Lichess will always be 100% free of charge.
-
-# Why no ads?
-
-There is absolutely nothing positive about advertisements on websites from the perspective of their users. They eat up valuable screen space and bandwidth for something that nobody wants to see. They often manipulate and misinform. They have even been the source of security vulnerabilities many times in the past.
-
-So why are there ads on websites? There is only one purpose they serve: to make money. We certainly need some money like anyone else: running a site the size of Lichess is quite expensive. However, we’ve gone more than a decade safely paying our bills with money from donations and there's no reason to think that won’t continue. Some may think that maximizing profits should be the end goal of every website and they are free to run their website that way if they wish; we prefer to do things differently.
-
-=> https://docs.google.com/spreadsheets/d/1Si3PMUJGR9KrpE5lngSkHLJKJkb0ZuI4/preview lichess costs.xlsx
-=> https://lichess.org/patron Become a Lichess Patron
-
-# Why Free?
-
-Lichess is a non-profit association in France with the registered objective "promouvoir et favoriser l'enseignement et la pratique du jeu d'échecs et de ses variantes" or "to promote and encourage the teaching and practice of the game of chess and its variants". This is because we are not driven by profit; it is not a goal of Lichess to make money.
-
-What does this mean? Well, as well as the obvious aspect of not having to pay to use Lichess (e.g. paywalls, or freemium) it means we're free to do what we think is right, rather than always pressured to increase revenues. Just like not having adverts, we also have no incentive to hide features, or to sell user data - Lichess exists to be used by you, not to use you.
-
-# Why Open-Source?
-
-Would you buy a meal if the restaurant refused to tell you the ingredients? Would you buy a car if you weren’t allowed to look under the hood? It’s natural for people to want to know what a website they use does. If you can read the code you can know to the tiniest detail how Lichess software works. You don’t have to take our word for it that Lichess works in a certain way, you can go see for yourself.
-
-Another aspect of open source is that it lets developers work together. Anyone is free to use Lichess code for their own project with no cost. The only requirement is to continue in the sharing spirit that the code was given and make whatever you create open-source in the same way that Lichess is. This is incredibly useful to chess website developers who don’t have to start from scratch, they can instead build on the work of others.
-
-Imagine if scientists kept the result of every scientific study to themselves. The same work would have to be done over and over again as everyone was forced to reinvent the wheel countless times to do anything at all. Instead, scientists share their work and collaborate which benefits all of us. There is no reason that software can’t work the same way.
-
-In conclusion, the answer to the question "why is Lichess free?" is very simple. Why wouldn’t it be? Our only goal is to have the best website possible. To that end, adding ads, paywalls, or trackers would make no sense.
diff --git a/content/blog/Lichess.md b/content/blog/Lichess.md
@@ -1,7 +1,6 @@
---
title: "Recommended read: Why Lichess will always be free"
date: 2021-04-23
-outputs: [html, gemtext]
---
Signal-boosting this excellent article from Lichess: [Why Lichess will always be free.][0]
diff --git a/content/blog/Marijuana-reform-in-NL.gmi b/content/blog/Marijuana-reform-in-NL.gmi
@@ -1,23 +0,0 @@
-I have a personal problem: smoking tobacco.
-
-In years past, I have smoked tobacco every now and then in social settings, but never developed a habit. However, when COVID rolled in, one of the only friends I ever got to spend time with during lockdown was a smoker, and after sharing cigarettes with them for a while, I developed a full blown tobacco addiction. Thankfully, I quickly realized what was happening and kicked it after a few months.
-
-Apart from the occasional cigarette, I have had a casual relationship with marijuana for many years. Weed is much safer than tobacco, and it does not create a physical dependency like nicotine does. If you stop smoking weed, you don't get cravings, headaches, anxiety, depression, sleeping problems -- you just stop being high. In general, I do not view a responsible marijuana habit as a problem, at least nowhere near as problematic as tobacco use. I view it similarly to recreational alcohol use. And so, when I quit smoking tobacco, I did not quit my occasional marijuana use, since it was not a problematic substance for me.
-
-Then I moved to the Netherlands. In the Netherlands, marijuana dispensaries are called "coffee shops" -- they don't sell coffee -- which was somewhat confusing when I was looking up somewhere to buy an espresso on my first day in Amsterdam. Being established as a casual marijuana user, in time I ended up visiting one to try out the famous Dutch weed.
-
-For a start, I can tell you that the Dutch marijuana is pretty shit compared to the quality I was used to in America. But most importantly, it's usually mixed with tobacco, which I did not initially understand. I had noticed that the quality was kind of shit, but my first real sign that something odd was afoot came when I noticed that I was /craving/ marijuana, which had never happened to me before. Well, a quick internet search later provided me with the answer. I had successfully kicked a nascent tobacco habit only to be ambushed by it again a few months later.
-
-At the time of writing, I am once again smoking tobacco regularly, though I feel reasonably optimistic about my odds of beating it again. New legislation in the coming years will make this a lot easier - it will soon be illegal to sell tobacco products over the counter, and smokers will have to order online. But, given my environment, I will also have to quit weed to make sure it sticks.
-
-Importantly, this experience has also generally drawn my attention to the problems of the soft drug industry in the Netherlands. I was surprised to learn from my neighbors that, a few months before I moved in, one of the apartments in my neighborhood was raided for their marijuana growing operation! When there are two coffee shops within 100 meters of my apartment that I can just walk into and buy marijuana!
-
-In the Netherlands, soft drugs are in a bizarre legal grey area that introduces a lot of problems. Possession of marijuana and sale in small amounts is legal, but production and sale of large amounts is illegal. Coffee shops usually get away with their sales, but the cops frequently bust growing operations. The legal persecution of the supply chain causes the industry to be tied up with organized crime, human trafficking, and slavery, as well as many other implications which are equally grave. It's horrible.
-
-Furthermore, the government tolerates only the sale of certain marijuana products, namely unrefined products. That essentially limits coffee shops to the sale of flower (the actual dried plant), hashish (the active parts of the plant rolled into a waxy substance), and edibles. In the United States, you get a much greater variety of products such as waxes, oils, and shatters, which are useful for vaporizer use, and can allow you to better tune the strength of the substance so that you don't have to dilute it with tobacco to avoid getting blasted. Though marijuana is soft-legalized, a black market still exists here thanks to these gaps in the semi-legal market.
-
-I should note that there are some Americans who, at least historically, moved to Amsterdam for access to the soft drug market. I am not one of them, and if drugs were outlawed here I still would have moved here. But the famous liberal drug industry in the Netherlands is a fucking disgrace, and the Dutch should be ashamed of it. The Netherlands has addictive, inferior products, proffered by a supply chain painted in blood, that persecutes the have-nots to give the haves a little bit of pleasure. The Dutch drug policy is a bad joke. In the United States, I could rest assured that the weed I bought in a dispensary in Colorado was coming from a licensed farm in Colorado, that I had access to a wider variety of products, and that the only tobacco you'd find in an American dispensary is likely to be in the pockets of any smokers who happen to be there.
-
-The Dutch system needs to be reformed. The supply chain needs to be legalized and audited, the permissible products expanded, and the tobacco needs to go.
-
-Thank you for coming to my Ted talk.
diff --git a/content/blog/Marijuana-reform-in-NL.md b/content/blog/Marijuana-reform-in-NL.md
@@ -1,6 +0,0 @@
----
-title: The Netherlands needs drug reform (yes, really)
-date: 2022-03-30
-outputs: [gemtext]
-nohtml: true
----
diff --git a/content/blog/Mark-Rober-and-capitalist-manipulation.gmi b/content/blog/Mark-Rober-and-capitalist-manipulation.gmi
@@ -1,31 +0,0 @@
-Mark Rober is an incredibly successful video producer on YouTube. His channel has nearly 20 million subscribers at the time of writing, and his videos have amassed a total of two billion views.
-
-I recently watched his latest video, which you can see for yourself here:
-
-=> https://www.youtube.com/watch?v=e09xig209cQ Mark Rober: World's Tallest Elephant Toothpaste Volcano (I FINALLY DID IT!!)
-
-This video, along with many others on Mark's channel, makes me pretty uncomfortable. Why?
-
-I should state at the outset that I passionately hate advertising, and I think it ought to be outlawed. Advertising is a form of propaganda, designed to manipulate people into doing something they normally wouldn't — spending money — in the interests of the corporation which created the ad. It would be unethical even on this basis alone, but the effects reach further: the ubiquity of advertising and the smashing success of capitalism as the dominant economic model have created a consumerist world in the wake of which follows widespread waste and destruction. I should probably write a dedicated blog post about advertising at some point, but let's leave it at that for now.
-
-It's awful enough when adults are subjected to corporate propaganda, but it's really disturbing to see it done to children. It's in this respect that I take offense at Mark's content.
-
-Mark's recent video is part of a broader trend of appealing to children which has turned me off of his channel for a while simply on the basis that I'm becoming further removed from the target audience. The "World's Tallest Elephant Toothpaste Volcano" shows off what must have been a dream party — water slides, bubbles, a big stretch limo — for a child. It focuses heavily on images of a bunch of children having a grand time, and relegates to only brief clips the more adult engineering work which was required to make it all happen.
-
-In and of itself, this shift in Mark's target audience is quite fine! Mark has earned my praise for seeking to make wholesome content for children, introducing them to the wonders of science, and steering his channel to whatever ends he sees fit, even at the expense of jaded adults like me. I'm especially appreciative that he goes out of his way to make a difference for a few children in tough straits, who might be in dire need of a good time and a dose of fun.
-
-However, shifting your target audience to children also comes with a shift in priorities. Mark has pursued a number of monetization strategies in the past. The first one I noticed was his collaboration with Kiwi Co, a business which makes kits for children to build small projects with. This makes me uncomfortable simply on the basis of capitalist programming, but it's hard to criticize it when you consider that it's a fun way to introduce children to science and engineering — which are the foci of the channel. There's a tie-in there, and even I have to admit that it's pretty benign.
-
-The latest video is sponsored by DraftKings, which is a gambling platform. A hard-to-see footnote on the video states "Eligibility restrictions apply", but nowhere is the nature of the platform disclosed, nor are the risks of gambling. Instead of any responsible disclosure, Mark glorifies the activity by talking about how he participates in it with his friends, portraying it as a great time free of consequences and risk. Remember: he is talking to children in this video.
-
-This is the most egregious flaw, and if it weren't for that, I probably would have forgiven the more subtle issues and skipped this blog post. But, since we're here, I can enumerate two other problems I noticed in this video. First, he glorifies wealth and consumerism: the children are coached into excitement over riding in a Tesla car or on a private jet. Another problem comes from [redacted]
-
-I have no complaints about how their collaboration appears to the children in their audience. [redacted]
-
-I would have expected Mark, being a popular YouTube personality himself, to be aware of these issues. Perhaps I'm asking too much, though, or perhaps Mark knows these allegations to be false, or overstated.
-
-All of these things said, the problems I've pointed out are all reasonably forgivable. The outcome of this is that I feel uncomfortable with his videos, but not that I think Mark is a bad person. I'm picking on Mark today, but he is just one gear in a larger machine which seeks to rear children as participants in a consumerist, capitalist system, a machine which Mark himself was probably raised in and views as normal. But, I might hold myself to higher standards if I were in his shoes.
-
-## Update 2022-07-03
-
-A previous version of this article spoke critically of Mark Rober's relationship with another YouTube personality. I have since learned that the allegations against this YouTube personality are not well-substantiated. I have removed related criticisms and the name of this person from this article.
diff --git a/content/blog/Mark-Rober-and-capitalist-manipulation.md b/content/blog/Mark-Rober-and-capitalist-manipulation.md
@@ -1,5 +0,0 @@
----
-title: Mark Rober and the manipulation of children for profit
-date: 2021-09-16
-outputs: [gemtext]
----
diff --git a/content/blog/Megacorps-are-not-your-dream-job.gmi b/content/blog/Megacorps-are-not-your-dream-job.gmi
@@ -1,19 +0,0 @@
-Megacorporations¹ do not care about you. You’re worth nothing to them. Google made $66 billion in 2014 — even if you made an exorbitant $500K salary, you only cost them .00075% of that revenue. They are not invested in you. Why should you invest in them? Why should you give a company that isn’t invested in you 40+ hours of your week, half your waking life, the only life you get?
-
-You will have little to no meaningful autonomy, impact, or influence. Your manager’s manager’s manager’s manager (1) will exist, and (2) will not know your name, and probably not your manager’s name either. The company will be good at advertising their jobs, especially to fresh grads, and you will no doubt have dozens of cool projects in mind that you’re itching to get involved with. You won’t be assigned any of them — all vacancies are already filled by tenured staff and nepotism. You’re more likely to work on a product you have hardly ever heard of or used, doing work that doesn’t interest you or meaningfully impact anyone you know.
-
-A business doesn’t get a billion-dollar valuation (or… ugh… a trillion-dollar valuation) by having a productive team which takes good care of its employees, rewarding them with interesting projects, or quickly correcting toxic work environments. A business might get millions of dollars, at most, with that approach. The megacorps got their 10th figure with another strategy: ruthlessness. They create and exploit monopolies, and bribe regulators to look the other way. They acquire and dismantle competitors. They hire H1B’s and subject them to payroll fraud and workplace abuse, confident that they can’t quit without risking their visa. Megacorps are a faceless machine which is interested only in making as much money as possible with any resources at their disposal, among those being a budget which exceeds most national GDPs.²
-
-If anything goes wrong in this heartless environment, you’re going to be in a very weak position. If you go to HR³ for almost any dispute, they are unlikely to help. If you quit, remember that they will have forced you to sign an NDA and a non-compete. You’re rolling the dice on whether or not they’ll decide that you’ve overstepped (and they can decide that — the terms are virtually impossible not to breach). That .00075% of their annual revenue you took home? They could easily spend 100x that on lawyers without breaking a sweat, and money is justice in the United States. You will likely have no recourse if they wrong you.
-
-They may hurt you, but even worse, they will make you hurt others. You will be complicit in their ruthlessness. Privacy dragnets, union busting, monopolistic behavior and lobbying, faux-slavery of gig workers in domestic warehouses and actual-slavery of workers in foreign factories, answering to nations committing actual ongoing genocide — this is only possible because highly skilled individuals like yourself chose to work for them, build their war chest, or even directly contribute to these efforts. Your salary may be a drop in the bucket to them, but consider how much that figure means to you. If you make that $500K, they spend 1.5× that after overhead, and they’d only do it if they expect a return on that investment. Would you give a corporation with this much blood on its hands $750K of your worth? Pocket change to them, maybe, but a lot of value to you, value that you could be adding somewhere else.
-
-They won’t care about you. They won’t be invested in you. They won’t give you interesting work. You will have no recourse if things go wrong, and things are primed to go wrong. They could hurt you, and they could make you hurt others. Don’t fall for their propaganda.
-
-Megacorps are, in fact, in the minority. There are tens of thousands of other tech companies that could use your help. Tech workers are in high demand — you have choices! You will probably be much happier at a small to mid-size company. The “dream job” megacorps have sold you on is just good marketing.
-
-¹ EDIT @ 23:37 UTC: It bears clarifying that I'm referring to extremely large companies, at or near the scale of FAANG (Facebook, Apple, Amazon, Netflix, Google). Hundreds of billions of dollars or more in market cap.
-
-² Political side thought: Amazon’s revenue in 2019 alone exceeds the GDP of 150 sovereign nations. Is undemocratic ownership of resources and power on that scale just?
-
-³ Quick reminder that HR’s job is to protect the company, not you. This applies to any company, not just megacorps. If you have a problem that you need to bring to HR, you should have a lawyer draft that letter, and you should polish up your resume first.
diff --git a/content/blog/Megacorps-are-not-your-dream-job.md b/content/blog/Megacorps-are-not-your-dream-job.md
@@ -1,7 +1,6 @@
---
title: A megacorp is not your dream job
date: 2021-01-01
-outputs: [html, gemtext]
---
Megacorporations[^1] *do not* care about you. You're worth nothing to them.
diff --git a/content/blog/Netherlands-update.gmi b/content/blog/Netherlands-update.gmi
@@ -1,19 +0,0 @@
-I moved to Amsterdam in July 2021, and now that I’ve had some time to settle in I thought I’d share my thoughts on how it’s been so far. In short: I love it here!
-
-I did end up finding housing through the hacker community thanks to my earlier post, which was a great blessing. I am renting an apartment from a member of the Techinc hacker space, which I have joined as a member myself. One of my biggest fears was establishing a new social network here in the Netherlands, but making friends here has been easy. Through this hacker space and through other connections besides, I have quickly met many wonderful, friendly, and welcoming people, and I have never felt like a stranger in a strange land. For this I am very grateful.
-
-There are many other things to love about this place. One of my favorite things about Amsterdam is getting around by bike. In Philadelphia, travelling by bicycle is signing up for a death wish. In the Netherlands, 27% of all trips utilize a bike, and in Amsterdam it’s as much as 38%. Cyclists enjoy dedicated cycling-first infrastructure, such as bike lanes separated entirely from the roads and dedicated bike-only longer-distance artery roads. The city is designed to reduce points of conflict between bikes and cars, and even when they have to share the road they’re almost always designed to slow cars down and give bikes priority. The whole country is very flat, too, though Dutch people will be quick to tell you about The Hill in their neighborhood, which is always no more than 2 meters tall.
-
-Getting around without a bike is super pleasant as well. I have my choice of bus, tram, metro, long-distance train, or even free ferries across the river, all paid for with the same auto-recharging NFC card for a low price. Every line runs frequently, so during the day you’re generally not waiting more than 5 minutes to be picked up and at night you’re probably not going to be waiting more than 15 minutes at popular stops. When it gets really late, though, you might wait as much as 30 minutes. The inter-city trains are amazing — I can show up at any major station without a plan and there’s probably a train heading to where I want to go in less than 10 minutes. Compared to Amtrak, it’s simply mind boggling.
-
-Little things no one here even thinks about have left an impression on me, too. I see street cleaners out all of the time, in a little squad where workers use leaf blowers and brooms to sweep trash and dirt from the sidewalks and squares into the streets where sweepers come through to pick it up. The trash and recycling bins are regularly collected, and when one of them in my neighborhood broke, it was replaced within days. There are some areas where trash does tend to accumulate, though, such as near benches in parks.
-
-Isolated accumulations of trash aside, the parks are great. There’s a lot more of them throughout the city than you’d get in a typical American city. I live close to two large parks, Rembrandtpark and Vondelpark, plus the smaller Erasmuspark, all of which are less than 5 minutes of cycling away. I like to cycle there on cool summer days to read by the lakes or other water features, or on one of the lawns. These parks also typically have a lot of large cycling-only roads which act as little cycling highways throughout the city, which means many of my cycling routes take me through nature even for intra-city travel. Several of the parks also have public gym equipment available, with which you can get a pretty good outdoor work-out for free.
-
-The layout of the neighborhoods is quite nice as well. I have not just one but four grocery stores within walking distance of my house; the one I visit multiple times per week to pick up food is just a 3 or 4 minute walk from my place. Thanks to the ease of accessing good (and cheap) produce and other ingredients, my diet has improved quite a bit — something I didn’t expect when I moved here. I can’t get everything I want, though: finding genuinely spicy chili peppers is a challenge.
-
-The infamous Dutch bureaucracy is not as bad as people made it out to be. Going through the immigration process was pretty stressful — as any process which could end with being kicked out of the country might be — but it was actually fairly straightforward for the kind of visa I wanted to get. Public servants here are more helpful and flexible than their reputation suggests.
-
-Something which is proving to be a bit of a challenge, however, is learning Dutch. This surprised me given my existing background in languages; I thought it would be pretty easy to pick up. I was able to quickly learn the basics, and I can conduct many everyday affairs in Dutch, but I found it difficult to progress beyond this point with self-study alone. I enrolled in a formal class, which will hopefully help bridge that gap.
-
-I could go on — experiences outside of Amsterdam and throughout the rest of Europe, the vibes of the FOSS community and other communities I’ve met, serendipitously meeting people I knew in America who also moved to Europe, and so on — but I think I’ll stop here for this post. Every time I’ve paused to reflect on my relocation abroad, I’ve come away smiling. So far, so good. Hopefully that doesn’t start to wear off!
diff --git a/content/blog/Netherlands-update.md b/content/blog/Netherlands-update.md
@@ -1,7 +1,6 @@
---
title: The Netherlands so far
date: 2022-03-24
-outputs: [html, gemtext]
---
[I moved to Amsterdam in July 2021][previous], and now that I've had some time
diff --git a/content/blog/New-workstation.gmi b/content/blog/New-workstation.gmi
@@ -1,30 +0,0 @@
-After my latest status update, I was asked by a few people to go into some more detail about the nature of my new workstation. I had been using my older workstation for almost 10 years, and I had the following goals for the new one:
-
-* Smaller & more lightweight
-* Modest upgrades to the hardware
-* An opportunity to rework my software setup
-
-I ended up opting for a Mini-ITX build. The main limitations constraining my size choices were that I wanted (1) at least three HDDs and one M.2 drive, and (2) an optical drive (I like to rip CDs). The case I picked for this is the Fractal Design Core 500, and I'm pretty pleased with it.
-
-Some parts I already had that went into the build were:
-
-* Three 8TB HDDs and one 2TB NVMe from the pile of drives
-* A spare AMD 580 GPU from the pile of GPUs
-* A spare RM850x PSU from the pile of PSUs^W^W^W^W^W(I only had one spare)
-* The optical drive from the old workstation
-
-And the following parts I bought new:
-
-* Fractal Design Core 500 case
-* AMD Ryzen 7 3700X
-* 2x16G of RAM (CMK32GX4M2D3600C18)
-* Noctua NH-L9a-AM4 low-profile CPU cooler
-* GIGABYTE B550I motherboard
-
-The result is a nice, compact build, with a surprising amount of airflow despite the small form-factor. I may replace the cooler with the stock one - it runs a bit hot now - but otherwise I'm satisfied with the hardware side of this build.
-
-I was still running Arch Linux on my old workstation (ugh!) and this was a good chance to upgrade to a nice Alpine Linux install. Getting full disk encryption working correctly was a bit more challenging than on previous attempts with Alpine, but I got it working without too much fuss.
-
-My root filesystem is on the NVMe, encrypted, and I set up a ZFS pool on the other 3 drives for /home and for an archive of important data from my old workstation. I could set this up easily enough, but neither WiFi nor Ethernet was working out of the box, so I did most of the setup with a USB-Ethernet adapter. The necessary drivers had landed in the just-released Linux 5.9, but Alpine's linux-edge package was behind. This was fixed easily enough - I just built the newer kernel - but Alpine doesn't provide a zfs-edge package for this kernel, so I had to build that, too.
-
-So, there you have it. The new box is less than half the size & weight of the old one, catches up to modern hardware quite a bit, and gave me a good chance to re-provision my main workstation's software loadout.
diff --git a/content/blog/New-workstation.md b/content/blog/New-workstation.md
@@ -1,5 +0,0 @@
----
-title: "New workstation"
-date: 2020-10-18
-outputs: [gemtext]
----
diff --git a/content/blog/On-the-traits-of-good-replacements.gmi b/content/blog/On-the-traits-of-good-replacements.gmi
@@ -1,30 +0,0 @@
-This is not always true, but in my experience, it tends to hold up. We often build or evaluate tools which aim to replace something kludgy^Wvenerable. Common examples include shells, programming languages, system utilities, and so on. Rust, Zig, etc. are taking on C in this manner; so too do zsh, fish, and oil take on bash, which in turn takes on the Bourne shell. There are many examples.
-
-
-All of these tools are fine in their own respects, but they have all failed to completely supplant the software they’re seeking to improve upon.¹ What these projects have in common is that they expand on the ideas of their predecessors, rather than refining them. A truly great alternative finds the nugget of truth at the center of the idea, cuts out the cruft, and solves the same problem with less.
-
-This is one reason I like Alpine Linux, for example. It’s not really aiming to replace any distro in particular so much as it competes with the Linux ecosystem as a whole. Alpine does this by being simpler than the rest: it’s the only Linux system I can fit more or less entirely in my head. Compare this to the most common approach: “let’s make a Debian derivative!” It kind of worked for Ubuntu, less so for everyone else. The C library Alpine ships, musl libc, is another example: it aims to replace glibc by being leaner and meaner, and I’ve talked about its success in this respect before.
-
-=> gemini://drewdevault.com/2020/09/25/A-story-of-two-libcs.gmi Previously: A tale of two libcs
-
-Go is a programming language which has done relatively well in this respect. It aimed to fill a bit of a void in the high-performance internet infrastructure systems programming niche,² ³ and it is markedly simpler than most of the other tools in its line of work. It takes the opportunity to add a few innovations — its big risk is its novel concurrency model — but Go balances this with a level of simplicity in other respects which is unchallenged among its contemporaries,⁴ and a commitment to that simplicity which has endured for years.⁵
-
-There are many other examples. UTF-8 is a simple, universal approach which smooths over the idiosyncrasies of the encoding zoo which pre-dates it, and has more-or-less rendered its alternatives obsolete. JSON has almost completely replaced XML, and its grammar famously fits on a business card.⁶ On the other hand, when zsh started as a superset of bash, it crippled its ability to compete on “having fewer warts than bash”.
-
-Rust is more vague in its inspirations, and does not start as a superset of anything. It has, however, done a poor job of scope management, and is significantly more complex than many of the languages it competes with, notably C and Go. For this reason, it struggles to root out the hold-outs in those domains, and it suffers from the difficulty of porting it to new platforms, which limits its penetration into a lot of domains where C is still thriving. However, it succeeds in being much simpler than C++, and I expect that it will render C++ obsolete in the coming years.⁷
-
-In computing, we make do with a hodgepodge of hacks and kludges which, at best, approximate the solutions to the problems that computing presents us. If you start with one such hack as the basis of a supposed replacement and build more on top of it, you will inherit the warts, and you may find it difficult to rid yourself of them. If, instead, you question the premise of the software, interrogate the underlying problem it’s trying to solve, and apply your insights, plus a healthy dose of hindsight, you may isolate what’s right from what’s superfluous, and your simplified solution just might end up replacing the cruft of yore.
-
-¹ Some of the listed examples have not given up and would prefer that I say something to the effect of “but the jury is still out” here.
-
-² That’s a lot of adjectives!
-
-³ More concisely, I think of Go as an “internet programming language”, distinct from the systems programming languages that inspired it. Its design shines especially in this context, but its value-add is less pronounced for other tasks in the systems programming domain - compilers, operating systems, etc.
-
-⁴ The Go spec is quite concise and has changed very little since Go’s inception. Go is also unique among its contemporaries for (1) writing a spec which (2) supports the development of multiple competing implementations.
-
-⁵ Past tense, unfortunately, now that Go 2 is getting stirred up.
-
-⁶ It is possible that JSON has achieved too much success in this respect, as it has found its way into a lot of use-cases for which it is less than ideal.
-
-⁷ Despite my infamous distaste for Rust, long-time readers will know that where I have distaste for Rust, I have passionate scorn for C++. I’m quite glad to see Rust taking it on, and I hope very much that it succeeds in this respect.
diff --git a/content/blog/Open-source-is-defined-by-the-OSD.gmi b/content/blog/Open-source-is-defined-by-the-OSD.gmi
@@ -1,29 +0,0 @@
-The Open Source Initiative (OSI) publishes a document called the Open Source Definition (OSD), which defines the term “open source”. However, there is a small minority of viewpoints within the software community which wishes that this were not so. The most concerning among them are those who wish open source were more commercially favorable to themselves, and themselves alone, such as Elastic.
-
-=> https://opensource.org The Open Source Initiative
-=> https://opensource.org/osd The Open Source Definition
-
-I disagree with this perspective, and I’d like to take a few minutes today to explore several of the most common arguments in favor of this view, and explain why I don’t agree with them. One of the most frustrating complications in this discussion is the context of motivated reasoning: most people arguing in favor of an unorthodox definition of “open source” have a vested interest in their alternative view.¹ This makes it difficult to presume good faith. For example, say someone wants to portray their software as open source even if it prohibits commercial use by third parties, which would normally disqualify it as such. Their interpretation serves to reinforce their commercialization plans, providing a direct financial incentive not only for them to promote this definition of “open source”, but also for them to convince you that their interpretation is valid.
-
-=> https://en.wikipedia.org/wiki/Motivated_reasoning Wikipedia: Motivated Reasoning
-=> https://xkcd.com/2167 Relevant xkcd
-
-I find this argument to be fundamentally dishonest. Let me illustrate this with an analogy. Consider PostgreSQL. If I were to develop a new program called Postgres which was similar to PostgreSQL, but different in some important ways — let’s say it’s a proprietary, paid, hosted database service — that would be problematic. The industry understands that “Postgres” refers to the popular open source database engine, and by re-using their name I am diluting the brand of Postgres. It can be inferred that my reasoning for this comes from the desire to utilize their brand power for personal commercial gain. The terms “Postgres” and “PostgreSQL” are trademarked, but even if they were not, this approach would be dishonest and ethically wrong.
-
-Attempts to re-brand “open source” in a manner which is more commercially exploitable for an individual person or organization are equally dishonest. The industry has an orthodox understanding of the meaning of “open source”, i.e. that defined by the Open Source Initiative, which is generally well-understood through the proliferation of software licenses which are compatible with the OSD. When a project describes itself as “open source”, this is a useful short-hand for understanding that the project adheres to a specific set of values and offers a specific set of rights to its users and contributors. When those rights are denied or limited, the OSD no longer applies and thus neither does the term “open source”. To disregard this in the interests of a financial incentive is dishonest, much like I would be dishonest for selling “cakes” and fulfilling orders with used car tires with “cake” written on them instead.
-
-Critics of the OSD frequently point out that the OSI failed to register a trademark on the term “open source”, but a trademark is not necessary for this argument to hold. Language is defined by its usage, and the OSD is the popular usage of the term “open source”, without relying on the trademark system. The existence of a trademark on a specific term is not required for language which misuses that term to be dishonest.
-
-As language is defined by its usage, some may argue that they are as entitled as anyone else to put forward an alternative usage. This is how language evolves. They are not wrong, though I might suggest that their alternative usage of “open source” requires a substantial leap in understanding which might not be as agreeable to those who don’t stand to benefit financially from that leap. Even so, I argue that the mainstream definition of open source, the one put forward by the OSI, is a useful term that is worth preserving in its current form. It is useful to be able to convey the essential values and rights associated with a piece of software simply by stating that it is “open source”. I am not prepared to accept a new definition which removes or reduces important rights in service of your private financial interests.
-
-The mainstream usage of “open source” under the OSD is also, in my opinion, morally just. You may feel a special relationship with the projects you start and invest into, and a sense of ownership with them, but they are not rightfully yours once you receive outside contributions. The benefit of open source is in the ability for the community to contribute directly to its improvements — and once they do, the project is the sum of your efforts and the efforts of the community. Thus, is it not right that the right to commercial exploitation of the software is shared with that community? In the absence of a CLA,² contributors retain their copyright as well, and the software is legally jointly owned by the sum of its contributors. And beyond copyright, the success of the software is the sum of its code along with the community who learns about and deploys it, offers each other support, writes blog posts and books about it, sells consulting services for it, and together helps to popularize it. If you wish to access all of these benefits of the open source model, you must play by the open source rules.
-
-It’s not surprising that this would become a matter of contention among certain groups within the industry. Open source is not just eating the world, but has eaten the world. Almost all software developed today includes substantial open source components. The open source brand is very strong, and there are many interests who would like to leverage that brand without meeting its obligations. But the constraints of the open source definition are important, played a critical role in the ascension of open source in the software market, and worth preserving into the future.
-
-That’s not to say that there isn’t room for competing ideologies. If you feel that the open source model does not work for you, then that’s a valid opinion to hold. I only ask that you market your alternative model honestly by using a different name for it. Software for which the source code is available, but which does not meet the requirements of the open source definition, is rightfully called “source available”. If you want a sexier brand for it, make one! “Open core” is also popular, though not exactly the same. Your movement has as much right to success as the open source movement, but you need to earn that success independently of the open source movement. Perhaps someday your alternative model will supplant open source! I wish you the best of luck in this endeavour.
-
----
-
-¹ Am I similarly biased? I also make my living from open source software, but I take special care to place the community’s interests above my own. I advocate for open source and free software principles in all software, including software I don’t personally use or benefit from, and in my own software I don’t ask contributors to sign a CLA — keeping the copyrights collectively held by the community at large, and limiting my access to commercialization to the same rules of open source that are granted to all contributors to and users of the software I use, write, and contribute to.
-
-² Such CLAs are also unjust in my view. Tools like the Developer Certificate of Origin are better for meeting the need to establish the legitimate copyright of open source software without denying rights to its community.
diff --git a/content/blog/Open-source-is-defined-by-the-OSD.md b/content/blog/Open-source-is-defined-by-the-OSD.md
@@ -1,7 +1,6 @@
---
title: Open Source is defined by the OSI's Open Source Definition
date: 2022-03-01
-outputs: [html, gemtext]
---
The [Open Source Initiative] (OSI) publishes a document called the [Open Source
diff --git a/content/blog/Our-self-hosted-parser-design.gmi b/content/blog/Our-self-hosted-parser-design.gmi
@@ -1,566 +0,0 @@
-One of the things we’re working on in my new programming language is a self-hosting compiler. Having a self-hosted compiler is a critical step in the development of (some) programming languages: it signals that the language is mature enough to be comfortably used to implement itself. While this isn’t right for some languages (e.g. shell scripts), for a systems programming language like ours, this is a crucial step in our bootstrapping plan. Our self-hosted parser design was completed this week, and today I’ll share some details about how it works and how it came to be.
-
-This is the third parser which has been implemented for this language. We wrote a sacrificial compiler prototype upfront to help inform the language design, and that first compiler used yacc for its parser. Using yacc was helpful at first because it makes it reasonably simple to iterate on the parser when the language is still undergoing frequent and far-reaching design changes. Another nice side-effect of starting with a yacc parser is that it makes it quite easy to produce a formal grammar when you settle on the design. Here’s a peek at some of our original parser code:
-
-```
-struct_type
- : T_STRUCT '{' struct_fields '}' {
- $$.flags = 0;
- $$.storage = TYPE_STRUCT;
- allocfrom((void **)&$$.fields, &$3, sizeof($3));
- }
- | T_UNION '{' struct_fields '}' {
- $$.flags = 0;
- $$.storage = TYPE_UNION;
- allocfrom((void **)&$$.fields, &$3, sizeof($3));
- }
- ;
-
-struct_fields
- : struct_field
- | struct_field ',' { $$ = $1; }
- | struct_field ',' struct_fields {
- $$ = $1;
- allocfrom((void **)&$$.next, &$3, sizeof($3));
- }
- ;
-
-struct_field
- : T_IDENT ':' type {
- $$.name = $1;
- allocfrom((void**)&$$.type, &$3, sizeof($3));
- $$.next = NULL;
- }
- ;
-```
-
-This approach has you writing code which is already almost a formal grammar in its own right. If we strip out the C code, we get the following:
-
-```
-struct_type
- : T_STRUCT '{' struct_fields '}'
- | T_UNION '{' struct_fields '}'
- ;
-
-struct_fields
- : struct_field
- | struct_field ','
- | struct_field ',' struct_fields
- ;
-
-struct_field
- : T_IDENT ':' type
- ;
-```
-
-This gives us a reasonably clean path to writing a formal grammar (and specification) for the language, which is what we did next.
-
-> 6.1.15: Struct and union types
->
-> struct-union-type:
-> struct { struct-union-fields }
-> union { struct-union-fields }
->
-> struct-union-fields:
-> struct-union-field ,[opt]
-> struct-union-field , struct-union-fields
->
-> struct-union-field:
-> offset-specifier[opt] name : type
-> offset-specifier[opt] struct-union-type
-> offset-specifier[opt] identifier
->
-> offset-specifier:
-> @offset ( expression )
-
-All of these samples describe a struct type. The following example shows what this grammar looks like in real code — starting from the word “struct” and including up to the “}” at the end.
-
-```
-type coordinates = struct {
- x: int,
- y: int,
- z: int,
-};
-```
-
-In order to feed our parser tokens to work with, we also need a lexer, or a lexical analyzer. This turns a series of characters like “struct” into a single token, like the T_STRUCT we used in the yacc code. Just as the original compiler used yacc as a parser generator, it also used lex as a lexer generator. The lex input is simply a list of regexes and the names of the tokens that match those regexes, plus a little bit of extra code to do things like turning “1234” into an int with a value of 1234. Our lexer also kept track of line and column numbers as it consumed characters from input files.
-
-```
-"struct" { _lineno(); return T_STRUCT; }
-"union" { _lineno(); return T_UNION; }
-"{" { _lineno(); return '{'; }
-"}" { _lineno(); return '}'; }
-
-[a-zA-Z][a-zA-Z0-9_]* {
- _lineno();
- yylval.sval = strdup(yytext);
- return T_IDENTIFIER;
-}
-```
-
-After we settled on the design with our prototype compiler, which was able to compile some simple test programs to give us a feel for our language design, we set it aside and wrote the specification, and, alongside it, a second compiler. This new compiler was written in C — the language was not ready to self-host yet — and uses a hand-written recursive descent parser.
-
-To simplify the parser, we deliberately designed a context-free LL(1) grammar, which means that the parser (a) can parse its input unambiguously without needing additional context, and (b) only requires one token of lookahead. This makes the parser design a lot simpler, which was an explicit goal of the language design. Our hand-rolled lexer is slightly more complicated: it requires two characters of lookahead to distinguish between the “.”, “..”, and “...” tokens.
-
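-To illustrate that lookahead requirement, here is a minimal, hypothetical sketch in C (not the actual compiler code) of the disambiguation: after reading one “.”, the lexer must peek at up to two more characters before it can decide which token it has.
-
-```
-#include <stdio.h>
-
-enum tok { DOT, DOUBLE_DOT, ELLIPSIS };
-
-// Assumes s[0] == '.'. Peeks at up to two further characters to
-// decide which token this is, and reports how many were consumed.
-static enum tok lex_dot(const char *s, int *consumed) {
-	if (s[1] == '.' && s[2] == '.') {
-		*consumed = 3;
-		return ELLIPSIS;
-	}
-	if (s[1] == '.') {
-		*consumed = 2;
-		return DOUBLE_DOT;
-	}
-	*consumed = 1;
-	return DOT;
-}
-
-int main(void) {
-	int n;
-	enum tok t = lex_dot("..", &n);
-	printf("%d %d\n", t, n); // prints "1 2": DOUBLE_DOT, two characters
-}
-```
-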
-I’ll skip most of the design for this parser, because the hosted parser is pretty similar and more interesting. Let’s start by taking a look at our hosted lexer. The lexer is initialized with an input source (e.g. a file), from which it reads a stream of characters. Then, each time we need a token, we ask it to read the next one out. It reads as many characters as it needs to unambiguously identify the next token, then hands it up to the caller.
-
-Our specification provides some information to guide the lexer design:
-
-> A token is the smallest unit of meaning in the **** grammar. The lexical analysis phase processes a UTF-8 source file to produce a stream of tokens by matching the terminals with the input text.
->
-> Tokens may be separated by white-space characters, which are defined as the Unicode code-points U+0009 (horizontal tabulation), U+000A (line feed), and U+0020 (space). Any number of whitespace characters may be inserted between tokens, either to disambiguate from subsequent tokens, or for aesthetic purposes. This whitespace is discarded during the lexical analysis phase.
->
-> Within a single token, white-space is meaningful. For example, the string-literal token is defined by two quotation marks " enclosing any number of literal characters. The enclosed characters are considered part of the string-literal token and any whitespace therein is not discarded.
->
-> The lexical analysis process consumes Unicode characters from the source file input until it is exhausted, performing the following steps in order. At each step, it shall consume and discard white-space characters until a non-white-space character is found, then consume the longest sequence of characters which constitutes a token and emit it to the token stream.
-
-There are a few different kinds of tokens our lexer is going to need to handle: operators, like “+” and “-”; keywords, like “struct” and “return”; user-defined identifiers, like variable names; and constants, like string and numeric literals.
-
-In short, given the following source code:
-
-```
-fn add2(x: int, y: int) int = x + y;
-```
-
-We need to return the following sequence of tokens:
-
-```
-fn (keyword)
-add2 (identifier)
-( (operator)
-x
-:
-int
-,
-y
-:
-int
-)
-int
-=
-x
-+
-y
-;
-```
-
-This way, our parser doesn’t have to deal with whitespace, distinguish “int” (keyword) from “integer” (identifier), or handle invalid tokens like “$”. To actually implement this behavior, we’ll start with an initialization function which populates a state structure.
-
-```
-// Initializes a new lexer for the given input stream. The path is borrowed.
-export fn init(in: *io::stream, path: str, flags: flags...) lexer = {
- return lexer {
- in = in,
- path = path,
- loc = (1, 1),
- rb = [void...],
- };
-};
-
-export type lexer = struct {
- in: *io::stream,
- path: str,
- loc: (uint, uint),
- rb: [2](rune | io::EOF | void),
-};
-```
-
-This state structure holds, respectively:
-
-* The input I/O stream
-* The path to the current input file
-* The current (line, column) number
-* A buffer of un-read characters from the input, for lookahead
-
-The main entry point for doing the actual lexing will look like this:
-
-```
-// Returns the next token from the lexer.
-export fn lex(lex: *lexer) (token | error);
-
-// A single lexical token, the value it represents, and its location in a file.
-export type token = (ltok, value, location);
-
-// A token value, used for tokens such as '1337' (an integer).
-export type value = (str | rune | i64 | u64 | f64 | void);
-
-// A location in a source file.
-export type location = struct {
- path: str,
- line: uint,
- col: uint
-};
-
-// A lexical token class.
-export type ltok = enum uint {
- UNDERSCORE,
- ABORT,
- ALLOC,
- APPEND,
- AS,
- // ... continued ...
- EOF,
-};
-```
-
-The idea is that when the caller needs another token, they will call lex, and receive either a token or an error. The purpose of our lex function is to read out the next character and decide what kind of tokens it might be the start of, and dispatch to more specific lexing functions to handle each case.
-
-```
-export fn lex(lex: *lexer) (token | error) = {
- let loc = location { ... };
- let rn: rune = match (nextw(lex)?) {
- _: io::EOF => return (ltok::EOF, void, mkloc(lex)),
- rl: (rune, location) => {
- loc = rl.1;
- rl.0;
- },
- };
-
- if (is_name(rn, false)) {
- unget(lex, rn);
- return lex_name(lex, loc, true);
- };
- if (ascii::isdigit(rn)) {
- unget(lex, rn);
- return lex_literal(lex, loc);
- };
-
- let tok: ltok = switch (rn) {
- * => return syntaxerr(loc, "invalid character"),
- '"', '\'' => {
- unget(lex, rn);
- return lex_rn_str(lex, loc);
- },
- '.', '<', '>' => return lex3(lex, loc, rn),
- '^', '*', '%', '/', '+', '-', ':', '!', '&', '|', '=' => {
- return lex2(lex, loc, rn);
- },
- '~' => ltok::BNOT,
- ',' => ltok::COMMA,
- '{' => ltok::LBRACE,
- '[' => ltok::LBRACKET,
- '(' => ltok::LPAREN,
- '}' => ltok::RBRACE,
- ']' => ltok::RBRACKET,
- ')' => ltok::RPAREN,
- ';' => ltok::SEMICOLON,
- '?' => ltok::QUESTION,
- };
- return (tok, void, loc);
-};
-```
-
-Aside from the EOF case, and simple single-character operators like “;”, both of which this function handles itself, its role is to dispatch work to various sub-lexers.
-
-Each sub-lexer handles a more specific case. The lex_name function handles things which look like identifiers, including keywords; the lex_literal function handles things which look like literals (e.g. “1234”); lex_rn_str handles rune and string literals (e.g. “hello world” and ‘\n’); and lex2 and lex3 respectively handle two- and three-character operators like “&&” and “>>=”. The rest of this switch statement handles single-character operators like “;” directly.
-
-lex_name is the most complicated of these. Because the only thing which distinguishes a keyword from an identifier is that the former matches a specific list of strings, we start by reading a “name” into a buffer, then binary searching against a list of known keywords to see if it matches something there. To facilitate this, “bmap” is a pre-sorted array of keyword names.
-
-```
-const bmap: [_]str = [
- // Keep me alpha-sorted and consistent with the ltok enum.
- "_",
- "abort",
- "alloc",
- "append",
- "as",
- "assert",
- "bool",
- // ...
-];
-
-fn lex_name(lex: *lexer, loc: location, keyword: bool) (token | error) = {
- let buf = strio::dynamic();
- match (next(lex)) {
- r: rune => {
- assert(is_name(r, false));
- strio::appendrune(buf, r);
- },
- _: (io::EOF | io::error) => abort(), // Invariant
- };
-
- for (true) match (next(lex)?) {
- _: io::EOF => break,
- r: rune => {
- if (!is_name(r, true)) {
- unget(lex, r);
- break;
- };
- strio::appendrune(buf, r);
- },
- };
-
- let name = strio::finish(buf);
- if (!keyword) {
- return (ltok::NAME, name, loc);
- };
-
- return match (sort::search(bmap[..ltok::LAST_KEYWORD+1],
- size(str), &name, &namecmp)) {
- null => (ltok::NAME, name, loc),
- v: *void => {
- defer free(name);
- let tok = v: uintptr - &bmap[0]: uintptr;
- tok /= size(str): uintptr;
- (tok: ltok, void, loc);
- },
- };
-};
-```
-
-The rest of the code is more of the same, but I’ve put it up here if you want to read it:
-
-=> https://paste.sr.ht/~sircmpwn/25871787b0d41db2b0af573ba1c93e1b6438b942 lex.ha
-
-Let’s move on to parsing: we need to turn this one-dimensional stream of tokens into a structured form, the Abstract Syntax Tree. Consider the following sample code:
-
-```
-let x: int = add2(40, 2);
-```
-
-Our token stream looks like this:
-
-```
-let x : int = add2 ( 40 , 2 ) ;
-```
-
-But what we need is something more structured, like this:
-
-```
-binding
- name="x"
- type="int"
- initializer=call-expression
- => func="add2"
- parameters
- constant value="40"
- constant value="2"
-```
-
-At each step, we know what kinds of tokens are valid. After we see “let”, we know that we’re parsing a binding, so we look for a name (“x”), then a colon token, a type for the variable, an equals sign, and an expression which initializes it. To parse the initializer, we see an identifier, “add2”, then an open parenthesis, so we know we’re in a call expression, and we can start parsing arguments.
-
-To make our parser code expressive, and to handle errors neatly, we implement a few helper functions that let us describe these states in terms of what the parser wants from the lexer:
-
-```
-// Requires the next token to have a matching ltok. Returns that token, or an error.
-fn want(lexer: *lex::lexer, want: lex::ltok...) (lex::token | error) = {
- let tok = lex::lex(lexer)?;
- if (len(want) == 0) {
- return tok;
- };
- for (let i = 0z; i < len(want); i += 1) {
- if (tok.0 == want[i]) {
- return tok;
- };
- };
-
- let buf = strio::dynamic();
- defer io::close(buf);
- for (let i = 0z; i < len(want); i += 1) {
- fmt::fprintf(buf, "'{}'", lex::tokstr((want[i], void, mkloc(lexer))));
- if (i + 1 < len(want)) {
- fmt::fprint(buf, ", ");
- };
- };
- return syntaxerr(mkloc(lexer), "Unexpected '{}', was expecting {}",
- lex::tokstr(tok), strio::string(buf));
-};
-
-// Looks for a matching ltok from the lexer, and if not present, unlexes the
-// token and returns void. If found, the token is consumed from the lexer and is
-// returned.
-fn try(
- lexer: *lex::lexer,
- want: lex::ltok...
-) (lex::token | error | void) = {
- let tok = lex::lex(lexer)?;
- assert(len(want) > 0);
- for (let i = 0z; i < len(want); i += 1) {
- if (tok.0 == want[i]) {
- return tok;
- };
- };
- lex::unlex(lexer, tok);
-};
-
-// Looks for a matching ltok from the lexer, unlexes the token, and returns
-// it; or void if it was not a ltok.
-fn peek(
- lexer: *lex::lexer,
- want: lex::ltok...
-) (lex::token | error | void) = {
- let tok = lex::lex(lexer)?;
- lex::unlex(lexer, tok);
- if (len(want) == 0) {
- return tok;
- };
- for (let i = 0z; i < len(want); i += 1) {
- if (tok.0 == want[i]) {
- return tok;
- };
- };
-};
-```
-
-Let’s say we’re expecting a binding, like the one in our sample code, to show up next. The grammar from the spec is as follows:
-
-> 6.6.44: Variable binding
->
-> binding-list:
-> static[opt] let bindings
-> static[opt] const bindings
->
-> bindings:
-> binding ,[opt]
-> binding , bindings
->
-> binding:
-> name = expression
-> name : type = expression
-
-And here’s the code that parses that:
-
-```
-fn binding(lexer: *lex::lexer) (ast::expr | error) = {
- const is_static: bool = try(lexer, ltok::STATIC)? is lex::token;
- const is_const = switch (want(lexer, ltok::LET, ltok::CONST)?.0) {
- ltok::LET => false,
- ltok::CONST => true,
- };
-
- let bindings: []ast::binding = [];
- for (true) {
- const name = want(lexer, ltok::NAME)?.1 as str;
- const btype: nullable *ast::_type =
- if (try(lexer, ltok::COLON)? is lex::token) {
- alloc(_type(lexer)?);
- } else null;
- want(lexer, ltok::EQUAL)?;
- const init = alloc(expression(lexer)?);
- append(bindings, ast::binding {
- name = name,
- _type = btype,
- init = init,
- });
- match (try(lexer, ltok::COMMA)?) {
- _: void => break,
- _: lex::token => void,
- };
- };
-
- return ast::binding_expr {
- is_static = is_static,
- is_const = is_const,
- bindings = bindings,
- };
-};
-```
-
-Hopefully the flow of this code is fairly apparent. The goal is to fill in the following AST structure:
-
-```
-// A single variable binding. For example:
-//
-// foo: int = bar
-export type binding = struct {
- name: str,
- _type: nullable *_type,
- init: *expr,
-};
-
-// A variable binding expression. For example:
-//
-// let foo: int = bar, ...
-export type binding_expr = struct {
- is_static: bool,
- is_const: bool,
- bindings: []binding,
-};
-```
-
-The rest of the code is pretty similar, though some corners of the grammar are a bit hairier than others. One example is how we parse infix operators for binary arithmetic expressions (such as “2 + 2”):
-
-```
-fn binarithm(
- lexer: *lex::lexer,
- lvalue: (ast::expr | void),
- i: int,
-) (ast::expr | error) = {
- // Precedence climbing parser
- // https://en.wikipedia.org/wiki/Operator-precedence_parser
- let lvalue = match (lvalue) {
- _: void => cast(lexer, void)?,
- expr: ast::expr => expr,
- };
-
- let tok = lex::lex(lexer)?;
- for (let j = precedence(tok); j >= i; j = precedence(tok)) {
- const op = binop_for_tok(tok);
-
- let rvalue = cast(lexer, void)?;
- tok = lex::lex(lexer)?;
-
- for (let k = precedence(tok); k > j; k = precedence(tok)) {
- lex::unlex(lexer, tok);
- rvalue = binarithm(lexer, rvalue, k)?;
- tok = lex::lex(lexer)?;
- };
-
- let expr = ast::binarithm_expr {
- op = op,
- lvalue = alloc(lvalue),
- rvalue = alloc(rvalue),
- };
- lvalue = expr;
- };
-
- lex::unlex(lexer, tok);
- return lvalue;
-};
-
-fn precedence(tok: lex::token) int = switch (tok.0) {
- ltok::LOR => 0,
- ltok::LXOR => 1,
- ltok::LAND => 2,
- ltok::LEQUAL, ltok::NEQUAL => 3,
- ltok::LESS, ltok::LESSEQ, ltok::GREATER, ltok::GREATEREQ => 4,
- ltok::BOR => 5,
- ltok::BXOR => 6,
- ltok::BAND => 7,
- ltok::LSHIFT, ltok::RSHIFT => 8,
- ltok::PLUS, ltok::MINUS => 9,
- ltok::TIMES, ltok::DIV, ltok::MODULO => 10,
- * => -1,
-};
-```
-
-I don’t really grok this algorithm, to be honest, but hey, it works. Whenever I write a precedence climbing parser, I’ll stare at the Wikipedia page for 15 minutes, quickly write a parser, and then immediately forget how it works. Maybe I’ll write a blog post about it someday.
-
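-If you want a feel for the algorithm without the AST plumbing, here is a minimal, hypothetical precedence-climbing evaluator in C for single-digit operands. It has roughly the same shape as binarithm above (min_prec plays the role of binarithm’s “i” parameter), but it computes values directly instead of building expression nodes:
-
-```
-#include <stdio.h>
-
-static const char *p; // cursor into the input expression
-
-// Binding power of an operator, or -1 for anything else,
-// like the precedence function above.
-static int prec(char op) {
-	switch (op) {
-	case '+': case '-': return 1;
-	case '*': case '/': return 2;
-	default: return -1;
-	}
-}
-
-// Evaluates an expression whose operators all have
-// precedence >= min_prec.
-static int expr(int min_prec) {
-	int lhs = *p++ - '0'; // a primary; cast() in the real parser
-	while (prec(*p) >= min_prec) {
-		char op = *p++;
-		// Higher-precedence operators bind tighter, so parse the
-		// right-hand side with a raised threshold; the +1 yields
-		// left associativity for operators of equal precedence.
-		int rhs = expr(prec(op) + 1);
-		switch (op) {
-		case '+': lhs += rhs; break;
-		case '-': lhs -= rhs; break;
-		case '*': lhs *= rhs; break;
-		case '/': lhs /= rhs; break;
-		}
-	}
-	return lhs;
-}
-
-int main(void) {
-	p = "2+3*4-5";
-	printf("%d\n", expr(0)); // prints 9: * binds tighter than + and -
-}
-```
-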
-Anyway, ultimately, this code lives in our standard library and is used for several things, including our (early in development) self-hosted compiler. Here’s an example of its usage, taken from our documentation generator:
-
-```
-fn scan(path: str) (ast::subunit | error) = {
- const input = match (os::open(path)) {
- s: *io::stream => s,
- err: fs::error => fmt::fatal("Error reading {}: {}",
- path, fs::strerror(err)),
- };
- defer io::close(input);
- const lexer = lex::init(input, path, lex::flags::COMMENTS);
- return parse::subunit(&lexer)?;
-};
-```
-
-Where the “ast::subunit” type is:
-
-```
-// A sub-unit, typically representing a single source file.
-export type subunit = struct {
- imports: []import,
- decls: []decl,
-};
-```
-
-Pretty straightforward! Having this as part of the standard library should make it much easier for users to build language-aware tooling with the language itself. We plan on having our type checker in the stdlib as well. This is something I drew inspiration from Golang for — having a lot of their toolchain components in the standard library makes it really easy to write Go-aware tools.
-
-So, there you have it: the next stage in the development of our language. I hope you’re looking forward to it!
diff --git a/content/blog/Our-self-hosted-parser-design.md b/content/blog/Our-self-hosted-parser-design.md
@@ -1,7 +1,6 @@
---
title: "Parsers all the way down: writing a self-hosting parser"
date: 2021-04-22
-outputs: [html, gemtext]
---
One of the things we're working on in [my new programming language][0] is a
diff --git a/content/blog/Pinebook-Pro-review.gmi b/content/blog/Pinebook-Pro-review.gmi
@@ -1,21 +0,0 @@
-I received the original Pinebook for free from the good folks at Pine64 a few years ago, when I visited Berlin to work with the KDE developers. Honestly, I was underwhelmed. The performance was abysmal and ARM is a nightmare to work with. For these reasons, I was skeptical when I bought the Pinebook Pro. I have also spoken before of my disdain for modern laptops in general: the state of laptops in $CURRENTYEAR is abysmal. As such, I have been using a ThinkPad X200, an 11-year-old laptop, as my sole laptop for several years now.
-
-=> https://drewdevault.com/2020/02/18/Fucking-laptops.html Previously: Fucking laptops
-
-I am pleased to share that the Pinebook Pro is a pleasure to use, and is likely to finally replace the old ThinkPad for most of my needs.
-
-Let me get the bad parts out of the way upfront: ARM is still a nightmare to work with. I really hate this architecture. Alpine Linux’s upstream aarch64 doesn’t work with this laptop, so I have to use postmarketOS, an Alpine derivative, instead. I do like pmOS — on phones — but I would definitely prefer to use upstream Alpine for a laptop use-case. That being said, the Pine community has been doing a very good job of working to get support for their devices upstream, and the situation has been steadily improving. I expect that one of the next batches of PBPs will include an updated U-Boot payload which will make UEFI booting possible, and Linux distros with the necessary kernel patches upstreamed will be shipping in the foreseeable future. This will alleviate most of my ARM-based grievances.
-
-=> https://postmarketos.org PostmarketOS
-
-The built-in speakers are also pretty tinny and weak. It has a headphone port which works fine, though. Configuring ALSA is a chore; these SoCs tend to have rather complicated audio setups. I have not been able to get the webcam working (some kernel option is missing; my contact at pmOS is working on it), but I understand that the quality is pretty poor. It can supposedly be configured to work with a USB-C dock for an external display, but I have never got it working, and I understand that there are some kernel bits missing for this as well. The touchpad is also pretty bad, but thankfully I use mainly keyboard-driven software. The built-in eMMC storage is pretty small, though it can be upgraded, and I understand that there is an option to install an NVMe drive — at the expense of your battery life.
-
-Cons aside, what do I like about it? Well, many things. It’s lightweight and thin (1.3kg), but has a nice 14" screen that feels like the right size for me. The screen looks really nice, too. The colors look good, and it works well at any brightness level and in most lighting situations. It’s definitely better than the old X200 display. The keyboard is not as nice as the ThinkPad’s (a high bar to meet), but it’s pretty comfortable for extended use. The two USB-3 ports and the sole USB-C port are also nice to have. It can charge via USB-C, or via an included DC wall wart and barrel plug. The battery lasts for 6-8 hours: way better than my old ThinkPad.
-
-It is an ARM machine, so the performance is not competitive with modern x86_64 platforms. It is somewhat faster than my 11-year-old previous machine, though. It has six cores and any parallelizable job (like building code) works acceptably fast, at least for the languages I primarily use (i.e. not Rust or C++). It can also play back 1080p video with a little bit of stuttering, and 720p video flawlessly. Browsing the web is a bit of a chore, but it always was. Sourcehut works fine.
-
-=> https://sourcehut.org/blog/2021-05-08-sourcehut-is-the-fastest-who-cares Sourcehut is the fastest. So what?
-
-The device is user-serviceable, which I appreciate very much. It’s very easy to take apart (a small Phillips head screwdriver is sufficient) and you can buy individual parts from the Pine64 store to do replacements yourself.
-
-In short, it checks most of my boxes, which is something no other laptop has even come remotely close to in the past ten years. It is the only laptop I have ever used which makes a substantial improvement on the circa-2010 state of the art. Because ARM is a nightmare, I’m still likely to use the old ThinkPads for some use-cases, namely for hobby OS development and running niche operating systems. But my Pinebook Pro is here to stay.
diff --git a/content/blog/Pinebook-Pro-review.md b/content/blog/Pinebook-Pro-review.md
@@ -1,7 +1,6 @@
---
title: Pinebook Pro review
date: 2021-05-14
-outputs: [html, gemtext]
---
I received the original Pinebook for free from the good folks at Pine64 a few
diff --git a/content/blog/Praise-for-Alpine-Linux.gmi b/content/blog/Praise-for-Alpine-Linux.gmi
@@ -1,35 +0,0 @@
-The traits I prize most in an operating system are the following:
-
-* Simplicity
-* Stability
-* Reliability
-* Robustness
-
-=> https://drewdevault.com/2020/10/09/Four-principles-of-software-engineering.html Previously: Four principles of software engineering
-
-As a bonus, I'd also like to have:
-
-* Documentation
-* Professionalism
-* Performance
-* Access to up-to-date software
-
-Alpine meets all of the essential criteria and most of the optional ones (documentation is the weakest link), and it does so far better than any other Linux distribution.
-
-In terms of simplicity, Alpine Linux is without peer. Alpine is the only Linux distribution that fits in my head. The pieces from which it is built are simple, easily understood, and few in number, and I can usually predict how it will behave in production. The software choices, such as musl libc, are highly appreciated in this respect as well, lending a greater degree of simplicity to the system as a whole.
-
-Alpine also meets expectations in terms of stability, though it is not alone in this respect. Active development is done in an "edge" branch, which is what I run on my main workstation and laptops. Every six months, a stable release is cut from this branch and supported for two years, so four releases are supported at any given moment. This strikes an excellent balance: two years is long enough that the system is stable and predictable for a long time, but short enough to discourage you from letting the system atrophy. An outdated system is not a robust system.
-
-In terms of reliability, I can be confident that an Alpine system will work properly for an extended period of time, without frequent hands-on maintenance or problem solving. Upgrading between releases almost always goes off without a hitch (and usually the hitch was documented in the release notes, if you cared to read them), and I've never had an issue with patch releases. Edge is less reliable, but only marginally: it's much more stable than, say, Arch Linux.
-
-The last of my prized traits is robustness, and Alpine meets this as well. The package manager, apk, is seriously robust. It expresses your constraints, and the constraints of your desired software, and solves for a system state which is always correct and consistent. Alpine's behavior under pathological conditions is generally predictable and easily understood. OpenRC is not as good, but thankfully it's slated to be replaced in the foreseeable future.
-
-In these respects, Alpine is unmatched, and I would never dream of using any other Linux distribution in production.
-
-Documentation is one of Alpine's weak points. This is generally offset by Alpine's simplicity — it can usually be understood reasonably quickly and easily even in the absence of documentation — but it remains an issue. That being said, Alpine has shown consistent progress in this respect in the past few releases, shipping more manual pages, improving the wiki, and standardizing processes for matters like release notes.
-
-I also mostly appreciate Alpine's professionalism. It is a serious project and almost everyone works with the level of professionalism I would expect from a production operating system. However, Alpine lacks strong leadership, some trolling and uncooperative participants go unchecked, and political infighting has occurred on a few occasions. This is usually not an impediment to getting work done, but it is frustrating nevertheless. I always aim to work closely with upstream on any of the projects that I use, and a professional upstream team is a luxury that I very much appreciate when I can find it.
-
-Alpine excels in my last two criteria: performance and access to up-to-date software. apk is simply the fastest package manager available. It leaves apt and dnf in the dust, and is significantly faster than pacman. Edge updates pretty fast, and as a package maintainer it's usually quite easy to get new versions of upstream software in place quickly even for someone else's package. I can expect upstream releases to be available on edge within a few days, if not a few hours. Access to new software in stable releases is reasonably fast, too, with less than a six month wait for systems which are tracking the latest stable Alpine release.
-
-In summary, I use Alpine Linux for all of my use-cases: dedicated servers and virtual machines in production, on my desktop workstation, on all of my laptops, and on my PinePhone (via postmarketOS). It is the best Linux distribution I have used to date. I maintain just under a hundred Alpine packages upstream, three third-party package repositories, and several dozens of Alpine systems in production. I highly recommend it.
diff --git a/content/blog/RE-Is-this-aggregator-idea-good.gmi b/content/blog/RE-Is-this-aggregator-idea-good.gmi
@@ -1,12 +0,0 @@
-=> gemini://tilde.team/~ew0k/is-this-aggregator-idea-good.gmi ew0k is a Teddybear - Is This Aggregator Idea Good?
-
-CAPCOM is open source:
-
-=> https://tildegit.org/solderpunk/CAPCOM CAPCOM
-
-and, in any case, trivially implemented from scratch. I was thinking it would be cool to encourage people to start hosting CAPCOM on their own capsules as well, each with a curated list of the feeds that interest them. We could start adding CAPCOM links to our capsules' index pages to make it easier to move around in such a system.
-
-Pros: requires very little effort or bespoke software development
-Cons: requires very little effort or bespoke software development :^)
-
-And an aside: I generally dislike this approach of using our gemlogs to have discussions via "RE: post title" and such. Isn't this better suited to the mailing list?
diff --git a/content/blog/RE-Is-this-aggregator-idea-good.md b/content/blog/RE-Is-this-aggregator-idea-good.md
@@ -1,6 +0,0 @@
----
-title: "RE: Is This Aggregator Idea Good?"
-date: 2020-11-15
-outputs: [gemtext]
-nohtml: true
----
diff --git a/content/blog/Reflection.gmi b/content/blog/Reflection.gmi
@@ -1,333 +0,0 @@
-Note: this is a redacted copy of a blog post published on the internal development blog of a new systems programming language. The name of the project and further details are deliberately being kept in confidence until the initial release. You may be able to find it if you look hard enough — you have my thanks in advance for keeping it to yourself. For more information, see “We are building a new systems programming language”.
-
-I’ve just merged support for reflection in ****. Here’s how it works!
-
-## Background
-
-“Reflection” refers to the ability for a program to examine the type system of its programming language, and to dynamically manipulate types and their values at runtime. You can learn more at Wikipedia:
-
-=> https://en.wikipedia.org/wiki/Reflective_programming Reflective programming
-
-## Reflection from a user perspective
-
-Let’s start with a small sample program:
-
-```
-use fmt;
-use types;
-
-export fn main() void = {
- const my_type: type = type(int);
- const typeinfo: *types::typeinfo = types::reflect(my_type);
- fmt::printfln("int\nid: {}\nsize: {}\nalignment: {}",
- typeinfo.id, typeinfo.sz, typeinfo.al)!;
-};
-```
-
-Running this program produces the following output:
-
-```
-int
-id: 1099590421
-size: 4
-alignment: 4
-```
-
-This gives us a simple starting point to look at. We can see that “type” is used as the type of the “my_type” variable, which is initialized with a “type(int)” expression. This expression returns a type value for the type given in the parentheses — in this case, for the “int” type.
-
-To learn anything useful, we have to convert this to a “types::typeinfo” pointer, which we do via types::reflect. The typeinfo structure looks like this:
-
-```
-type typeinfo = struct {
- id: uint,
- sz: size,
- al: size,
- flags: flags,
- repr: repr,
-};
-```
-
-The ID field is the type’s identifier, which is universally unique and deterministic, and forms part of ****’s ABI. It is derived from an FNV-32 hash of the type information. You can find the ID for any type by modifying our little example program, or by using the helper program in the cmd/****type directory of the **** source tree.
-
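-As an aside, FNV-32 itself is a tiny algorithm. Here is a sketch of the common FNV-1a variant in C; the exact serialization of the type information which the compiler feeds into the hash is an implementation detail and is not shown here.
-
-```
-#include <stdint.h>
-#include <stdio.h>
-#include <string.h>
-
-// 32-bit FNV-1a: XOR each input byte into the hash, then multiply
-// by the FNV prime. The constants are standard for the 32-bit variant.
-static uint32_t fnv1a32(const void *data, size_t len) {
-	const unsigned char *b = data;
-	uint32_t hash = 2166136261u; // FNV-32 offset basis
-	for (size_t i = 0; i < len; i++) {
-		hash ^= b[i];
-		hash *= 16777619u; // FNV-32 prime
-	}
-	return hash;
-}
-
-int main(void) {
-	// Hypothetical input: an arbitrary string, not the compiler's
-	// actual encoding of type information.
-	printf("%u\n", fnv1a32("int", strlen("int")));
-}
-```
-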
-Another important field is the “repr” field, which is short for “representation”, and it gives details about the inner structure of the type. The repr type is defined as a tagged union of all possible type representations in the **** type system:
-
-```
-type repr = (alias | array | builtin | enumerated | func | pointer | slice | struct_union | tagged | tuple);
-```
-
-In the case of the “int” type, the representation is “builtin”:
-
-```
-type builtin = enum uint {
- BOOL, CHAR, F32, F64, I16, I32, I64, I8, INT, NULL, RUNE, SIZE, STR, U16, U32,
- U64, U8, UINT, UINTPTR, VOID, TYPE,
-};
-```
-
-builtin::INT, in this case. The structure and representation of the “int” type is defined by the **** specification and cannot be overridden by the program, so no further information is necessary.
-
-More information is provided for more complex types, such as structs.
-
-```
-use fmt;
-use types;
-
-export fn main() void = {
- const my_type: type = type(struct {
- x: int,
- y: int,
- });
- const typeinfo: *types::typeinfo = types::reflect(my_type);
- fmt::printfln("id: {}\nsize: {}\nalignment: {}",
- typeinfo.id, typeinfo.sz, typeinfo.al)!;
- const st = typeinfo.repr as types::struct_union;
- assert(st.kind == types::struct_kind::STRUCT);
- for (let i = 0z; i < len(st.fields); i += 1) {
- const field = st.fields[i];
- assert(field.type_ == type(int));
- fmt::printfln("\t{}: offset {}", field.name, field.offs)!;
- };
-};
-```
-
-The output of this program is:
-
-```
-id: 2617358403
-size: 8
-alignment: 4
- x: offset 0
- y: offset 4
-```
-
-Here the “repr” field provides the “types::struct_union” structure:
-
-```
-type struct_union = struct {
- kind: struct_kind,
- fields: []struct_field,
-};
-
-type struct_kind = enum {
- STRUCT,
- UNION,
-};
-
-type struct_field = struct {
- name: str,
- offs: size,
- type_: type,
-};
-```
-
-Make sense? Excellent. So how does it all work?
-
-## Reflection internals
-
-Let me first draw the curtain back from the magic “types::reflect” function:
-
-```
-// Returns [[typeinfo]] for the provided type.
-export fn reflect(in: type) const *typeinfo = in: *typeinfo;
-```
-
-It simply casts the “type” value to a pointer, which is what it is. When the compiler sees an expression like let x = type(int), it statically allocates the typeinfo data structure into the program and returns a pointer to it, which is then wrapped up in the opaque “type” meta-type. The “reflect” function simply converts it to a useful pointer. Here’s the generated IR for this:
-
-```
-%binding.4 =l alloc8 8
-storel $rt.builtin_int, %binding.4
-```
-
-A clever eye will note that we initialize the value to a pointer to “rt.builtin_int”, rather than allocating a typeinfo structure here and now. The runtime module provides static typeinfos for all built-in types, which look like this:
-
-```
-export const @hidden builtin_int: types::typeinfo = types::typeinfo {
- id = 1099590421,
- sz = 4, al = 4, flags = 0,
- repr = types::builtin::INT,
-};
-```
-
-These are an internal implementation detail, hence “@hidden”. But many types are not built-in, so the compiler is required to statically allocate a typeinfo structure:
-
-```
-export fn main() void = {
- let x = type(struct { x: int, y: int });
-};
-```
-
-```
-data $strdata.7 = section ".data.strdata.7" { b "x" }
-
-data $strdata.8 = section ".data.strdata.8" { b "y" }
-
-data $sldata.6 = section ".data.sldata.6" {
- l $strdata.7, l 1, l 1, l 0, l $rt.builtin_int,
- l $strdata.8, l 1, l 1, l 4, l $rt.builtin_int,
-}
-
-data $typeinfo.5 = section ".data.typeinfo.5" {
- w 2617358403, z 4,
- l 8,
- l 4,
- w 0, z 4,
- w 5555256, z 4,
- w 0, z 4,
- l $sldata.6, l 2, l 2,
-}
-
-export function section ".text.main" "ax" $main() {
-@start.0
- %binding.4 =l alloc8 8
-@body.1
- storel $typeinfo.5, %binding.4
-@.2
- ret
-}
-```
-
-This has the unfortunate effect of re-generating all of these typeinfo structures every time someone uses type(struct { x: int, y: int }). We still have one trick up our sleeve, though: type aliases! Most people don’t actually use anonymous structs like this often, preferring to use a type alias to give them a name like “coords”. When they do this, the situation improves:
-
-```
-type coords = struct { x: int, y: int };
-
-export fn main() void = {
- let x = type(coords);
-};
-```
-
-```
-data $strdata.1 = section ".data.strdata.1" { b "coords" }
-
-data $sldata.0 = section ".data.sldata.0" { l $strdata.1, l 6, l 6 }
-
-data $strdata.4 = section ".data.strdata.4" { b "x" }
-
-data $strdata.5 = section ".data.strdata.5" { b "y" }
-
-data $sldata.3 = section ".data.sldata.3" {
- l $strdata.4, l 1, l 1, l 0, l $rt.builtin_int,
- l $strdata.5, l 1, l 1, l 4, l $rt.builtin_int,
-}
-
-data $typeinfo.2 = section ".data.typeinfo.2" {
- w 2617358403, z 4,
- l 8,
- l 4,
- w 0, z 4,
- w 5555256, z 4,
- w 0, z 4,
- l $sldata.3, l 2, l 2,
-}
-
-data $type.1491593906 = section ".data.type.1491593906" {
- w 1491593906, z 4,
- l 8,
- l 4,
- w 0, z 4,
- w 3241765159, z 4,
- l $sldata.0, l 1, l 1,
- l $typeinfo.2
-}
-
-export function section ".text.main" "ax" $main() {
-@start.6
- %binding.10 =l alloc8 8
-@body.7
- storel $type.1491593906, %binding.10
-@.8
- ret
-}
-```
-
-The declaration of a type alias provides us with the perfect opportunity to statically allocate a typeinfo singleton for it. Any of these which go unused by the program are automatically stripped out by the linker thanks to the --gc-sections flag. Also note that a type alias is considered a distinct representation from the underlying struct type:
-
-```
-type alias = struct {
- ident: []str,
- secondary: type,
-};
-```
-
-This explains the differences in the structure of the “type.1491593906” global. The struct { x: int, y: int } type is the “secondary” field of this type.
-
-## Future improvements
-
-This is just the first half of the equation. The second half is to provide useful functions to work with this data. One such example is “types::strenum”:
-
-```
-// Returns the value of the enum at "val" as a string. Aborts if the value is
-// not present. Note that this does not work with enums being used as a flag
-// type, see [[strflag]] instead.
-export fn strenum(ty: type, val: *void) str = {
- const ty = unwrap(ty);
- const en = ty.repr as enumerated;
- const value: u64 = switch (en.storage) {
- case builtin::CHAR, builtin::I8, builtin::U8 =>
- yield *(val: *u8);
- case builtin::I16, builtin::U16 =>
- yield *(val: *u16);
- case builtin::I32, builtin::U32 =>
- yield *(val: *u32);
- case builtin::I64, builtin::U64 =>
- yield *(val: *u64);
- case builtin::INT, builtin::UINT =>
- yield switch (size(int)) {
- case 4 =>
- yield *(val: *u32);
- case 8 =>
- yield *(val: *u64);
- case => abort();
- };
- case builtin::SIZE =>
- yield switch (size(size)) {
- case 4 =>
- yield *(val: *u32);
- case 8 =>
- yield *(val: *u64);
- case => abort();
- };
- case => abort();
- };
-
- for (let i = 0z; i < len(en.values); i += 1) {
- if (en.values[i].1.u == value) {
- return en.values[i].0;
- };
- };
-
- abort("enum has invalid value");
-};
-```
-
-This is used like so:
-
-```
-use types;
-use fmt;
-
-type watchmen = enum {
- VIMES,
- CARROT,
- ANGUA,
- COLON,
- NOBBY = -1,
-};
-
-export fn main() void = {
- let officer = watchmen::ANGUA;
- fmt::println(types::strenum(type(watchmen), &officer))!; // Prints ANGUA
-};
-```
-
-Additional work is required to make more useful tools like this. We will probably want to introduce a “value” abstraction which can store an arbitrary value for an arbitrary type, and helper functions to assign to or read from those values. A particularly complex case is likely to be some kind of helper for calling a function pointer via reflection, which I may cover in a later article. There will also be some work to bring the “types” (reflection) module closer to the ****::* namespace, which already features ****::ast, ****::parse, and ****::types, so that the parser, type checker, and reflection systems are interoperable and work together to implement the **** type system.
-
-Want to help us build this language? We are primarily looking for help in the following domains:
-
-* Architectures or operating systems, to help with ports
-* Compilers & language design
-* Cryptography implementations
-* Date & time implementations
-* Unix
-
-If you’re an expert in a domain which is not listed, but that you think we should know about, then feel free to reach out. Experts are preferred, motivated enthusiasts are acceptable. Send me an email if you want to help!
diff --git a/content/blog/Reflection.md b/content/blog/Reflection.md
@@ -1,7 +1,6 @@
---
title: How reflection works in ****
date: 2021-10-05
-outputs: [html, gemtext]
---
*Note: this is a redacted copy of a blog post published on the internal
diff --git a/content/blog/Reframing-gemini.gmi b/content/blog/Reframing-gemini.gmi
@@ -1,13 +0,0 @@
-I think there is a simple misconception about Gemini which can be corrected with a reframing of our thinking around what the protocol is and is not designed for. Let me introduce the idea briefly. What is Gemini? Here is my proposed answer:
-
-> Gemini is a read-only protocol for hyperlinked content distribution.
-
-I think this "read-only" nature is something we've been missing from how we present Gemini to people, and confusion about this property is the source of a lot of the frustrating proposals we see for how Gemini should be extended.
-
-Gemini is not a protocol for publishing. We use a different protocol for that, like git or rsync. It's not interactive, either, at least not in the same sense as the web is. It is a protocol for consumption: for reading hyperlinked Gemtext documents.
-
-This quickly rules out a lot of the feature requests which have frustrated Gemini enthusiasts since the dawn of the protocol. Forms? No, Gemini is read-only. File uploads? No, Gemini is read-only. Commenting systems? No, Gemini is read-only. BBS? No, Gemini is read-only.
-
-This also gives us a way to interpret the protocol design and figure out what each feature is meant to do. The only detail in this respect which I would like to make note of right now is the "INPUT" response code. Its purpose is to find content to read. It's for searching! Gemini is read-only, and this just helps you narrow down what to read. It is not for, say, writing comments. It also tells us that 11, SENSITIVE INPUT, should be removed -- something which has already been proposed.
-
-For what it's worth, this framing communicates a lot about what the protocol itself is for, but isn't much help for understanding the document format. Questions like inline images, inline formatting syntax, and other extensions to Gemtext are not neatly resolved by this philosophy. I have my opinions on these, but when it comes to framing them succinctly, I'll leave the problem to future Gemini philosophers.
diff --git a/content/blog/Reframing-gemini.md b/content/blog/Reframing-gemini.md
@@ -1,5 +0,0 @@
----
-title: Reframing the philosophy of Gemini
-date: 2021-11-15
-outputs: [gemtext]
----
diff --git a/content/blog/Shell-literacy.gmi b/content/blog/Shell-literacy.gmi
@@ -1,115 +0,0 @@
-Shell literacy is one of the most important skills you ought to possess as a programmer. The Unix shell is one of the most powerful ideas ever put to code, and should be second nature to you as a programmer. No other tool is nearly as effective at commanding your computer to perform complex tasks quickly — or at storing them as scripts you can use later.
-
-In my workflow, I use Vim as my editor, and Unix as my “IDE”. I don’t trick out my vimrc to add a bunch of IDE-like features — the most substantial plugin I use on a daily basis is Ctrl+P, and that just makes it easier to open files. Being Vim literate is a valuable skill, but an important detail is knowing when to drop it. My daily workflow involves several open terminals, generally one with Vim, another to run builds or daemons, and a third which just keeps a shell handy for anything I might ask of it.
-
-=> https://git.sr.ht/~sircmpwn/dotfiles/tree/master/.vimrc My .vimrc
-
-The shell I keep open allows me to perform complex tasks and answer complex questions as I work. I find interesting things with git grep, perform bulk find-and-replace with sed, answer questions with awk, and perform more intricate tasks on-demand with ad-hoc shell commands and pipelines. I have the freedom to creatively solve problems without being constrained to the rails laid by IDE designers.
-
-=> https://git-scm.com/docs/git-grep git grep
-=> gemini://drewdevault.com/cgi-bin/man.sh/1/sed sed
-=> gemini://drewdevault.com/cgi-bin/man.sh/1/awk awk
-
-Here’s an example of a problem I encountered recently: I had a bunch of changes in a git repository. I wanted to restore deleted files without dropping the rest of my changes, but there were hundreds of these. How can I efficiently address this problem?
-
-Well, I start by getting a grasp of the scale of the issue with git status, which shows hundreds of deleted files that need to be restored. This scale is beyond the practical limit of manual intervention, so I switch to git status -s to get a more pipeline-friendly output.
-
-```
-$ git status -s
- D main/a52dec/APKBUILD
- D main/a52dec/a52dec-0.7.4-build.patch
- D main/a52dec/automake.patch
- D main/a52dec/fix-globals-test-x86-pie.patch
- D main/aaudit/APKBUILD
- D main/aaudit/aaudit
- D main/aaudit/aaudit-common.lua
- D main/aaudit/aaudit-repo
- D main/aaudit/aaudit-server.json
- D main/aaudit/aaudit-server.lua
- ...
-```
-
-I can work with this. I add grep '^ D' to filter out any entries which were not deleted, and pipe it through awk '{ print $2 }' to extract just the filenames. I’ll often run the incomplete pipeline just to check my work and get my bearings:
-
-```
-$ git status -s | grep '^ D' | awk '{ print $2 }'
-main/a52dec/APKBUILD
-main/a52dec/a52dec-0.7.4-build.patch
-main/a52dec/automake.patch
-main/a52dec/fix-globals-test-x86-pie.patch
-main/aaudit/APKBUILD
-main/aaudit/aaudit
-main/aaudit/aaudit-common.lua
-main/aaudit/aaudit-repo
-main/aaudit/aaudit-server.json
-main/aaudit/aaudit-server.lua
-...
-```
-
-Very good — we have produced a list of files which we need to address. Note that, in retrospect, I could have dropped the grep and just used awk to the same effect:
-
-```
-$ git status -s | awk '/^ D/ { print $2 }'
-main/a52dec/APKBUILD
-main/a52dec/a52dec-0.7.4-build.patch
-main/a52dec/automake.patch
-main/a52dec/fix-globals-test-x86-pie.patch
-main/aaudit/APKBUILD
-main/aaudit/aaudit
-main/aaudit/aaudit-common.lua
-main/aaudit/aaudit-repo
-main/aaudit/aaudit-server.json
-main/aaudit/aaudit-server.lua
-...
-```
-
-However, we’re just writing an ad-hoc command here to solve a specific, temporary problem — finesse is not important. This command isn’t going to be subjected to a code review. Often my thinking in these situations is to solve one problem at a time: “filter the list” and “reword the list”. Anyway, the last step is to actually use this list of files to address the issue, with the help of xargs.
-
-=> gemini://drewdevault.com/cgi-bin/man.sh/1/xargs xargs
-
-```
-$ git status -s | awk '/^ D/ { print $2 }' | xargs git checkout --
-```
-
-Let’s look at some more examples of interesting ad-hoc shell pipelines. Naturally, I wrote a shell pipeline to find some:
-
-```
-$ history | cut -d' ' -f2- | awk -F'|' '{ print NF-1 " " $0 }' | sort -n | tail
-```
-
-Here’s the breakdown:
-
-* history prints a list of my historical shell commands.
-* cut -d' ' -f2- removes the first field from each line, using space as a delimiter. history numbers every command, and this removes the number.
-* awk -F'|' '{ print NF-1 " " $0 }' tells awk to use | as the field delimiter for each line, and print each line prefixed with the number of fields. This prints every line of my history, prefixed with the number of times the pipe operator appears in that line.
-* sort -n numerically sorts this list.
-* tail prints the last 10 items.
-
-This command, written in the moment, finds, characterizes, filters, and sorts my shell history by command complexity. Here are a couple of the cool shell commands I found:
-
-Play the 50 newest videos in a directory with mpv:
-
-```
-$ ls -tc | head -n50 | tr '\n' '\0' | xargs -0 mpv
-```
-
-I use this command all the time. If I want to watch a video later, I will touch the file so it appears at the top of this list. Another command transmits a tarball of a patched version of Celeste to a friend using netcat, minus the (large) game assets, with a progress display via pv:
-
-```
-$ find . ! -path './Content/*' | xargs tar -cv | pv | zstd | nc 204:fbf5:... 12345
-```
-
-=> pv http://www.ivarch.com/programs/pv.shtml
-
-And on my friend’s end:
-
-```
-$ nc -vll :: 12345 | zstdcat | pv | tar -xv
-```
-
-tar, by the way, is an underrated tool for moving multiple files through a pipeline. It can read and write tarballs to stdin and stdout!
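-
-For example, here is a minimal sketch of that trick, streaming a directory tree to another machine with tar on both ends of a pipe (the host and paths here are hypothetical):
-
-```
-$ tar -cf - ./src | ssh backup.example.org 'tar -xf - -C /srv/backups'
-```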
-
-I hope that this has given you a tantalizing taste of the power of the Unix shell. If you want to learn more about the shell, I can recommend shellhaters.org as a great jumping-off point into various shell-related parts of the POSIX specification. Don’t be afraid of the spec — it’s concise, comprehensive, comprehensible, and full of examples. I would also definitely recommend taking some time to learn awk in particular.
-
-=> http://shellhaters.org/ Shell haters
-=> https://ferd.ca/awk-in-20-minutes.html A brief awk tutorial
diff --git a/content/blog/Shell-literacy.md b/content/blog/Shell-literacy.md
@@ -1,7 +1,6 @@
---
date: 2020-12-12
title: Become shell literate
-outputs: [html, gemtext]
---
Shell literacy is one of the most important skills you ought to possess as a
diff --git a/content/blog/Spamtoberfest.gmi b/content/blog/Spamtoberfest.gmi
@@ -1,83 +0,0 @@
-As I’ve written before, the best contributors to a FOSS project are intrinsically motivated to solve problems in your software. This sort of contribution usually fixes an important problem, and places a smaller burden on maintainers, who need not spend as much of their time working with the contributor. I’ve previously contrasted this with the “I want to help out!” contributions, where a person just has a vague desire to help out. Those contributions are, generally, less valuable and place a greater burden on the maintainer. Now, DigitalOcean has lowered the bar even further with Hacktoberfest.
-
-=> https://drewdevault.com/2020/08/10/How-to-contribute-to-FOSS.html Previously: I want to contribute to your project, how do I start?
-
-Disclaimer: I am the founder of a FOSS project hosting company similar to GitHub.
-
-As I write this, a Digital Ocean-sponsored and GitHub-enabled Distributed Denial of Service (DDoS) attack is ongoing, wasting the time of thousands of free software maintainers with an onslaught of meaningless spam. Bots are spamming tens of thousands of pull requests like this:
-
-```
-Subject: [PATCH] Update README.md
-
----
- README.md | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/README.md b/README.md
-index cafbfd0..327c4d4 100644
---- a/README.md
-+++ b/README.md
-@@ -1,4 +1,4 @@
--# Hundred Rabbits
-+# Hundred Rabbits #
-```
-
-=> https://github.com/search?q=amazing+project+is:pr&type=Issues Search "amazing project" on GitHub for more
-
-The official response from both Digital Ocean and GitHub appears to be passing the buck. Digital Ocean addresses spam in their FAQ, putting the burden of dealing with it entirely on the maintainers:
-
-> Spammy pull requests can be given a label that contains the word “invalid” or “spam” to discount them. Maintainers are faced with the majority of spam that occurs during Hacktoberfest, and we dislike spam just as much as you. If you’re a maintainer, please label any spammy pull requests submitted to the repositories you maintain as “invalid” or “spam”, and close them. Pull requests with this label won’t count toward Hacktoberfest.
-
-=> https://hacktoberfest.digitalocean.com/details via the Hacktoberfest FAQ
-
-Here’s GitHub’s response:
-
-> The content and activity you are reporting appears to be related to Hacktoberfest. Please keep in mind that GitHub Staff is not enforcing Hacktoberfest rules; we will, however, enforce our own Acceptable Use Policies. According to the Hacktoberfest FAQ… [same quote as given above]
-
-=> https://twitter.com/kyleknighted/status/1311685461828612097 via @kyleknighted on Twitter
-
-So, according to these two companies, whose responsibility is it to deal with the spam that they’ve created? The maintainers, of course! All for a T-Shirt.
-
-Let’s be honest. Hacktoberfest has never generated anything of value for open source. It’s a marketing stunt which sends a deluge of low-effort contributions to maintainers, leaving them to clean up the spam. I’ve never been impressed with Hacktoberfest contributions, even the ones which aren’t obviously written by a bot:
-
-```
-Subject: [PATCH] Improve indentation and add some comments
-
----
- styles.css | 6 +++++-
- 1 file changed, 5 insertions(+), 1 deletion(-)
-
-diff --git a/styles.css b/styles.css
-index 928298b9f2..63e4787c44 100644
---- a/styles.css
-+++ b/styles.css
-@@ -6,7 +6,7 @@
- width: auto;
-- padding: 0.5em 0.75em;
-+ padding: 0.5em 0.75em; /*top and botton: 0.5rem , right and left: 0.75rem*/
- font: small Helvetica Neue, sans-serif, Droid Sans Fallback;
-@@ -14,7 +14,9 @@
- }
- .dfnPanel * { margin: 0; padding: 0; font: inherit; text-indent: 0; }
- .dfnPanel :link, .dfnPanel :visited { color: black; cursor: pointer; }
-+
- /* Delicate specificity wars to pretend isolation from pre:hover rules elsewhere... */
-+
- .dfnPanel *, pre:hover .dfnPanel * { text-decoration: none; }
-```
-
-Hacktoberfest is, and has always been, about one thing: marketing for Digital Ocean.
-
-> In 2017, I asked them to stop sending people to one of my (joke) repos because we also got flooded with PRs. They answered they couldn't do anything on their end. In 2019, they sent us a ticket accusing us of enabling cheaters or something, but kindly "allowed" us to keep the repo.
-
-=> https://oc.todon.fr/@val/104960502585461740 via @val@todon.fr
-
-This is what we get with corporate-sponsored “social coding”, brought to you by Digital Ocean and GitHub and McDonalds, home of the Big Mac™. When you build the Facebook of coding, you get the Facebook of coding. We don’t need to give away T-Shirts to incentivize drive-by drivel from randoms who will never get any closer to open source than a +1/-1 README.md change.
-
-What would actually benefit FOSS is to enable the strong mentorship necessary to raise a new generation of software engineers under the tutelage of maintainers who can rely on a strong support system to do their work. Programs like Google Summer of Code do this better. Programs where a marketing department spends $5,000 on T-Shirts to flood maintainers with garbage and clothe people in ads are doing the opposite: hurting open source.
-
-=> https://twitter.com/shitoberfest Check out @shitoberfest on Twitter for more Hacktoberfest garbage.
-
-Update 2020-10-03: Digital Ocean has updated their rules to reduce spam, among other things by asking maintainers to opt in.
-
-=> https://hacktoberfest.digitalocean.com/hacktoberfest-update Digital Ocean: Hacktoberfest update
diff --git a/content/blog/Spamtoberfest.md b/content/blog/Spamtoberfest.md
@@ -1,7 +1,6 @@
---
title: Spamtoberfest
date: 2020-10-01
-outputs: [html, gemtext]
---
As I've [written before][0], the best contributors to a FOSS project are
diff --git a/content/blog/Spooky-code-at-a-distance.gmi b/content/blog/Spooky-code-at-a-distance.gmi
@@ -1,49 +0,0 @@
-Einstein famously characterized the strangeness of quantum mechanics as “spooky action at a distance”, which, if I had to pick one phrase about physics to be my favorite, would be a strong contender. I like to relate this to programming language design: there are some language features which are similarly spooky. Perhaps the most infamous of these is operator overloading. Consider the following:
-
-```
-x + y
-```
-
-If this were written in C, without knowing anything other than the fact that this code compiles correctly, I can tell you that x and y are numeric types, and the result is their sum. I can even make an educated guess about the CPU instructions which will be generated to perform this task. However, if this were a language with operator overloading… who knows? What if x and y are some kind of Vector class? It could compile to this:
-
-```
-Vector::operator_plus(x, y)
-```
-
-The performance characteristics, consequences for debugging, and places to look for bugs are considerably different than the code would suggest on the surface. This function call is the “spooky action” — and the distance between the “+” operator and the definition of its behavior is the “distance”.
-
-Also consider if x and y are strings: maybe “+” means concatenation? Concatenation often means allocation, which is a pretty important side-effect to consider. Are you going to thrash the garbage collector by doing this? Is there a garbage collector, or is this going to leak? Again, using C as an example, this case would be explicit:
-
-```
-char *new = malloc(strlen(x) + strlen(y) + 1);
-strcpy(new, x);
-strcat(new, y);
-```
-
-If the filename of the last file you had open in your text editor ended in .rs, you might be frothing at the mouth after reading this code. Strictly for the purpose of illustrating my point, however, consider that everything which happens here is explicit, opt-in to the writer, and obvious to the reader.
-
-That said, C doesn’t get off scot-free in this article. Consider the following code:
-
-```
-int x = 10, y = 20;
-int z = add(x, y);
-printf("%d + %d = %d\n", x, y, z);
-```
-
-You may expect this to print out 10 + 20 = 30, and you would be forgiven for your naivety.
-
-```
-$ cc -o test test.c
-$ ./test
-30 + 20 = 30
-```
-
-The savvy reader may have already figured out the catch: add is not a function.
-
-```
-#define add(x, y) x += y
-```
-
-The spooky action is the mutation of x, and the distance is between the apparent “callsite” and the macro definition. This is spooky because it betrays the reader’s expectations: it looks and smells like a function call, but it does something which breaks the contract of function calls. Some languages do this better, by giving macros an explicit syntax like name!(args...), but, personally, I still don’t like it.
-
-Language features like this are, like all others, a trade-off. But I’m of the opinion that this trade is unwise: you’re wagering readability, predictability, debuggability, and more. These features are toxic to anyone seeking stable, robust code. They certainly have no place in systems programming.
diff --git a/content/blog/Spooky-code-at-a-distance.md b/content/blog/Spooky-code-at-a-distance.md
@@ -1,7 +1,6 @@
---
title: Spooky action at a distance
date: 2021-01-19
-outputs: [html, gemtext]
---
Einstein famously characterized the strangeness of quantum mechanics as "spooky
diff --git a/content/blog/Spooky.gmi b/content/blog/Spooky.gmi
@@ -1,17 +0,0 @@
-The fleeting full moon peeks through a gap in dense clouds, illuminating the derelict mansion before you. A wolf sounds a howl in the distance, and a flash of lightning precedes a tremendous thunderclap. You've struggled through this terrible storm for a full two days, and seek shelter within this remote and ancient house. The moon recedes, and a fierce wind bites at the trees, rattling the autumn leaves and slamming the door shut behind you. The angry shout of the door, driven into its frame with great force, drives you deeper into the building.
-
-Before you lies the great hall, spacious and opulent, though the walls press in despite the grandeur. Each step on its cold tile floors echoes back as if returned from an empty room twice its size. In the hearth, the last embers of a recent fire reach futile red fingers into deep wells of darkness throughout the room, scantly lighting scores of gold, silver, and porcelain fixtures and house-things, neglected beneath a deep layer of dust and cobwebs. All the while, the rain batters at the window panes, and the wind pesters the shutters, and lightning casts brief and creeping silhouettes through the windows, and as each strike wanes, low thunder rumbles in alongside the returning darkness.
-
-You take a step towards the fire, intending to restrengthen it with the wood lying near the hearth, and freeze — as from deep within, the bellows of an organ sound forth, coming at once from the heart of the building, but seemingly also from the very walls and floors. Expert fingers play a grand and eerie concerto, and though it must be without accompaniment, you hear, in your head alone, the ghostly sounds of an unseen orchestra playing its part. The thunder crashes along, its storming percussions sounding in perfect time. And as the music reaches its crescendo, the rain and winds and thunder retreat, and the world seems to hold its breath in silence. Your very heartbeat stills, afraid to break the spell.
-
-A deep, old, coarse voice pours out from all around you.
-
-“In this place,
-it's very scary;
-and what lies within,
-is old and damp,
-and very hairy.”
-
-The last of the fire burns out, the air chills and a moist breath upon your nape sends a creep up your spine, and something right behind you whispers into your ear…
-
-“Burma Shave”
diff --git a/content/blog/Spooky.md b/content/blog/Spooky.md
@@ -1,5 +0,0 @@
----
-title: On a dark and stormy night...
-date: 2021-10-31
-outputs: [gemtext]
----
diff --git a/content/blog/Starship.gmi b/content/blog/Starship.gmi
@@ -1,48 +0,0 @@
-I worked briefly at SpaceX in the early 2010s. It didn't end up being the right job for me, but I have maintained a lifelong interest in space and in the development of humanity's relationship with it that persists to this day. Though I have always been excited about new developments in space exploration during my lifetime -- Mars Curiosity, New Horizons, the Falcon 9 program, and so on -- nothing has excited me more than the Starship program. Let me explain why.
-
-What is happening today which will be taught in schools 1,000 years from now? If I were to come up with a list, it might look like the following:
-
-* Climate change and the Holocene mass extinction
-* US decline and the rise of China
-* The origins of the Internet
-* COVID-19
-
-Your list may look a bit different, but I think this is the crux of it. This is what's going on today which will be immortalized in the history books, and which could have consequences felt by the students who learn about them.
-
-Oh, and one more thing:
-
-* The SpaceX Starship program
-
-If Starship works, it will be taught in schools not just here on Earth, but in classrooms around the solar system, to children whose ancestors have lived away from Earth for hundreds of years. It will be taught 1,000 years from now, 10,000 years from now, and 100,000 years from now, as one of the defining moments of human history.
-
-That may sound a bit over-dramatic, but bear with me and I might just convince you.
-
-The key figure for understanding space exploration is the cost per kilogram launched to orbit. Adjusted for inflation, the Saturn-V rocket which NASA used for the Apollo program could deliver payloads to low Earth orbit (LEO) for about $12,000 per kilogram, and could deliver a total of 118 tons to LEO, or 41 tons to the lunar surface. Europe's Ariane 5 rocket, the one that just launched the James Webb Space Telescope, costs about $9,000 per kilo up to 16 tons, and the most cost-effective, reliable, and popular rocket in service today, the SpaceX Falcon 9, costs about $2,700 per kilo up to 15 tons.
-
-The Starship, as designed, will be able to launch somewhere between 100 and 150 tons in a single launch for about $20 per kilogram. What would you put in space if you could do it for $20/kg?
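-
-To put that in perspective, some rough arithmetic with the figures above:
-
-```
-Starship: 150,000 kg × $20/kg    ≈  $3,000,000 per launch
-Falcon 9:  15,000 kg × $2,700/kg ≈ $40,500,000 per launch
-```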
-
-Another important figure for understanding space exploration is the "Delta-V" budget, which refers to the amount of "change in velocity" required to move from one place to another. It is governed by fuel usage, engine efficiency, and the famous rocket equation (often called "the tyrannical rocket equation"), which accounts for the fact that you have to use some of your fuel to push the rest of your fuel around. Almost all of your Delta-V is spent getting from Earth's surface to Earth orbit, fighting your way through the atmosphere; relatively little fuel is needed for the remaining maneuvers to reach your destination.
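-
-For reference, this is the rocket equation (Tsiolkovsky's), which relates the achievable change in velocity to engine efficiency and the ratio of fueled to empty mass:
-
-```
-Δv = ve × ln(m0 / m1)
-
-ve: effective exhaust velocity
-m0: initial (wet) mass, propellant included
-m1: final (dry) mass
-```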
-
-After the Starship makes orbit, additional Starships full of fuel also launch, meet it, and refuel it. This sidesteps the tyranny of the rocket equation: the tanks are refilled in orbit, after the most expensive leg of the journey is already behind you.
-
-In effect, this means that a Starship doesn't just bring its 150 ton payload to Earth orbit: it brings a 150 ton payload to anywhere in the solar system.
-
-NASA's Artemis program selected the Starship for their upcoming manned lunar missions, which means that SpaceX will essentially land a 15-story building on the Moon with a payload mass equal to 50 adult elephants, or, perhaps more aptly, seven Caterpillar D5 Medium Bulldozers. The internal pressurized volume of the Starship is greater than the pressurized volume of the International Space Station, which required over 40 launches to assemble in space. Oh, and the latest Starship booster prototype was apparently built in about 6 weeks.
-
-This is the first time in human history that there has been a promising program to develop a spacecraft which not only has the potential to bring humans to other planets, but to bring heavy equipment with which to build a permanent human presence there. It will do so for 1% of the cost of the most cost-effective rocket in service today -- another SpaceX rocket which is 3× cheaper than the competition and has already revolutionized the space industry.
-
-That's why I'm excited about Starship.
-
-Alas, the program is not without its problems. The greatest challenge it faces now is that it must be done within a regulatory framework which is concerned with its impact on the environment near the launch site. I'm on the books as being staunchly against the "move fast and break laws" approach favored by Uber et al, so I hope very much that SpaceX is able to pull it off within the constraints of the law. The FAA just announced a delay in their verdict on the environmental impact of the program to the end of February, which will delay the first test launches until March at the earliest, but could also, given a negative review, delay it for years or longer. I really hope that it works out for SpaceX, because I want to live to see humans on other planets, and we need this program to start launching soon for that to happen.
-
-It's also worth addressing the weird relationship that the Internet has with SpaceX. I think this has a lot to do with the asshole who acts as the public face of the company: Elon Musk. I find this really unfortunate, because I believe that this program is profoundly important regardless of Musk's involvement, and Musk has very little to do with what's going on at SpaceX, despite appearances. The company is chiefly led by its COO, Gwynne Shotwell, and the actual work is undertaken by thousands of extremely talented and passionate engineers -- not by Musk. It's also unfortunate that, because of Musk's apparent position at the helm, much of the enthusiasm for the project is tied up, to some extent, with the same cult-like Musk fanboys who fawn over every tweet and try to scam each other into buying some cryptocurrency garbage.
-
-A lot of negativity also probably rubs off from Jeff Bozos, Blue Origin, and the dick rocket, or Richard Branson's severely out-of-touch Virgin Galactic, both of which are vanity projects of rich assholes and not serious space projects. Neither of their vehicles is capable of what I consider to be the minimum threshold for being taken seriously as a space company, which is making it to LEO. Space is just 100km straight up -- it's not that hard to get there. What's difficult is not just going 100km straight up, but also going several kilometers per second sideways, so that when the Earth pulls you back down you fall over the horizon and miss. Neither of these two rich space clowns has accomplished this feat, but SpaceX did it 26 times in 2020 -- fully half of all orbital launches globally in that year -- and did it for a fraction of the cost of anyone else. Not only is SpaceX a serious player in the space industry, they are the 500-pound gorilla in the space industry. Blue Origin and Virgin Galactic are a joke.
-
-I think that what bothers me the most about the SpaceX rhetoric is seeing Musk and Bezos damned in the same breath as Amazon and SpaceX. You have to understand, I'm not just a space enthusiast, but I'm also a card-carrying socialist -- that is, I literally have a DSA card in my wallet. Companies like Amazon are deliberately dismantling our society in the name of profit in a way that SpaceX really isn't. Amazon is a hydra which puts its fingers into hundreds of pies and starts shoving. Consolidation, diversification, regulatory capture, market manipulation, predatory acquisitions, these are the bread and butter of evil capitalist enterprise, and as far as I can tell none of it is going on at SpaceX.
-
-SpaceX has a singular mission, which is to make life multi-planetary. They have made a small number of on-topic acquisitions to facilitate this, but limit themselves to the very specific markets which are relevant to this goal. Oh, they have problems -- Musk's spats with the SEC and his posse of cryptocurrency idiots, union busting behavior, workplace safety issues at Tesla, plus overwork and turnover issues at SpaceX -- these are all important criticisms of SpaceX. But is this really at the same level as the companies who are hacking society to create nation-state-tier levels of wealth through rampant unregulated abuse? I don't think so, not at all.
-
-In my opinion, Musk is nothing but a liability for SpaceX. I honestly find the idea of him as one of the world's richest people pretty laughable, too, given how much of his money exists only on paper, much of which comes from the grossly inflated Tesla stock price. I wish people were able to see past the clown he is to the company behind him, a company which, for all of its faults, is full of passionate people doing amazing work which could profoundly impact the human condition forever, and which is not busy destroying the world in the same sense that the biggest criminals of capitalism are hard at work doing.
-
-It's my sincere hope that you will join me in seeing past all of the dick-rocket memes and Musk's ridiculous cult following for the true potential of the Starship program. This is a profound effort which offers the only chance we've ever seen to undertake what might be one of the most important changes in human history: the development of a space-faring, multi-planetary society.
diff --git a/content/blog/Starship.md b/content/blog/Starship.md
@@ -1,5 +0,0 @@
----
-title: Thoughts on the SpaceX Starship program
-date: 2021-12-28
-outputs: [gemtext]
----
diff --git a/content/blog/Status-update-April-2021.gmi b/content/blog/Status-update-April-2021.gmi
@@ -1,40 +0,0 @@
-Another month goes by! I’m afraid that I have very little to share this month. You can check out the sourcehut “what’s cooking” post for sourcehut news, but outside of that I have focused almost entirely on the programming language project this month, for which the details are kept private.
-
-The post calling for contributors led to a lot of answers and we’ve brought several new people on board — thanks for answering the call! I’d like to narrow the range of problems we still need help with. If you’re interested in (and experienced in) the following problems, we need your help:
-
-* Cryptography
-* Date/time support
-* Networking (DNS is up next)
-
-Shoot me an email (sir@cmpwn.com) if you want to help. We don’t have the bandwidth to mentor inexperienced programmers right now, so please only reach out if you have an established background in systems programming.
-
-Here’s a teaser of one of the stdlib APIs written by our new contributors, unix::passwd:
-
-```
-// A Unix-like group file entry.
-export type grent = struct {
- // Name of the group
- name: str,
- // Optional encrypted password
- password: str,
- // Numerical group ID
- gid: uint,
- // List of usernames that are members of this group, comma separated
- userlist: str,
-};
-
-// Reads a Unix-like group entry from a stream. The caller must free the result
-// using [grent_finish].
-export fn nextgr(stream: *io::stream) (grent | io::EOF | io::error | invalid);
-
-// Frees resources associated with [grent].
-export fn grent_finish(ent: grent) void;
-
-// Looks up a group by name in a Unix-like group file. It expects such a file at
-// /etc/group. Aborts if that file doesn't exist or is not properly formatted.
-//
-// See [nextgr] for low-level parsing API.
-export fn getgroup(name: str) (grent | void);
-```
-
-That’s all for now. These updates might be light on details for a while as we work on this project. See you next time!
diff --git a/content/blog/Status-update-April-2021.md b/content/blog/Status-update-April-2021.md
@@ -1,7 +1,6 @@
---
title: Status update, April 2021
date: 2021-04-15
-outputs: [html, gemtext]
---
Another month goes by! I'm afraid that I have very little to share this month.
diff --git a/content/blog/Status-update-December-2020.gmi b/content/blog/Status-update-December-2020.gmi
@@ -1,17 +0,0 @@
-Happy holidays! I hope everyone’s having a great time staying at home and not spending any time with your families. It’s time for another summary of the month’s advances in FOSS development. Let’s get to it!
-
-One of my main focuses has been on sourcehut’s API 2.0 planning. This month, the meta.sr.ht and git.sr.ht GraphQL APIs have shipped feature parity with the REST APIs, and the RFC 6749 compatible OAuth 2.0 implementation has shipped. I’ve broken ground on the todo.sr.ht GraphQL API — it’ll be next. Check out the GraphQL docs on man.sr.ht if you want to kick the tires.
-
-=> https://man.sr.ht/graphql.md GraphQL docs on man.sr.ht
-
-I also wrote a little tool this month called mkproof, after brainstorming some ways to allow sourcehut signups over Tor without enabling abuse. The idea is that you can generate a challenge (mkchallenge), give it to a user who generates a proof for that challenge (mkproof), and then verify their proof is correct. Generating the proof is computationally expensive and resistant to highly parallel attacks (e.g. GPUs), and takes tens of minutes of work — making it impractical for spammers to register accounts in bulk, while still allowing Tor users to register with their anonymity intact.
-
-=> https://git.sr.ht/~sircmpwn/mkproof mkproof on sourcehut
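-
-A hypothetical session (this invocation is a sketch for illustration; the tool's actual interface may differ):
-
-```
-$ mkchallenge > challenge      # operator generates a challenge
-$ mkproof < challenge > proof  # user spends tens of minutes of CPU time
-# the operator then verifies the proof against the challenge
-```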
-
-On the Gemini front, patches from Mark Dain, William Casarin, and Eyal Sawady have improved gmnisrv in several respects — mainly bugfixes — and gmnlm has grown the “<n>|” command, which pipes the Nth link into a shell command. Thanks are due to Alexey Yerin as well, who sent a little bugfix for redirect handling.
-
-The second draft of the BARE specification was submitted to the IETF this month. I will revisit it in several weeks. John Mulligan has also sent several patches improving go-bare — thanks!
-
-scdoc 1.11.0 was released this month, with only minor bug fixes.
-
-That’s all for now! I’ll see you in a month.
diff --git a/content/blog/Status-update-December-2020.md b/content/blog/Status-update-December-2020.md
@@ -1,7 +1,6 @@
---
title: Status update, December 2020
date: 2020-12-15
-outputs: [html, gemtext]
---
Happy holidays! I hope everyone's having a great time staying at home and not
diff --git a/content/blog/Status-update-December-2021.gmi b/content/blog/Status-update-December-2021.gmi
@@ -1,13 +0,0 @@
-Greetings! It has been a cold and wet month here in Amsterdam, much like the rest of them, as another period of FOSS progress rolls on by. I have been taking it a little bit easier this month, and may continue to take some time off in the coming weeks, so I can have a bit of a rest for the holidays. However, I do have some progress to report, so let’s get to it.
-
-In programming language progress, we’ve continued to see improvement in cryptography, with more AES cipher modes and initial work on AES-NI support for Intel processors, as well as support for HMAC and blake2b. Improved support for linking with C libraries has also landed, which is the basis of a few third-party libraries which are starting to appear, such as bindings to libui. I have also started working on bindings to SDL2, which I am using to make a little tetromino game.
-
-I am developing this to flesh out the SDL wrapper and get a feel for game development in the new language, but I also intend to take it on as a serious project to make a game which is fun to play. I also started working on an IRC protocol library for our language, though this one does not link to any C libraries.
-
-Also, the reflection support introduced a few months ago has been removed.
-
-My other main focus has been SourceHut, where I have been working on todo.sr.ht’s GraphQL API. This one ended up being a lot of work. I expect to require another week or two to finish it.
-
-visurf also enjoyed a handful of improvements this month, thanks to some contributors, the most prolific of whom was Pranjal Kole. Thanks Pranjal! Improvements landed this month include tab rearranging, next and previous page navigation, and an improvement to all of the new-tab logic, along with many bug fixes and smaller improvements. I also did some of the initial work on command completions, but there is a lot left to do in this respect.
-
-That’s all for today. Thanks for your continued support! Until next time.
diff --git a/content/blog/Status-update-December-2021.md b/content/blog/Status-update-December-2021.md
@@ -1,7 +1,6 @@
---
title: Status update, December 2021
date: 2021-12-15
-outputs: [html, gemtext]
---
Greetings! It has been a cold and wet month here in Amsterdam, much like the
diff --git a/content/blog/Status-update-February-2021.gmi b/content/blog/Status-update-February-2021.gmi
@@ -1,111 +0,0 @@
-Salutations! It's officially a year of pandemic life. I hear the vaccine distribution is going well, so hopefully there won't be another year of this. In the meanwhile, I've been working hard on free software, what with having little else to do. However, I'm afraid I cannot tell you about most of it!
-
-I've been working on todo.sr.ht's GraphQL API, and it's going quite well. I hope to ship a working read-only version later this month. There have been a number of bug fixes and rote maintenance work on sr.ht as well, but nothing particularly exciting. We did upgrade everything for Alpine 3.13, which went off without a hitch. Anyway, I'll go over the minor details in the sr.ht "what's cooking" post later today.
-
-The rest of the progress was made in secret. Secret! You will have to live in ignorance for now. Sorry!
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-Here's a peek at our progress:
-
-```
-use fmt;
-use io;
-use os;
-
-export fn main() void = {
- if (len(os::args) == 1) match (io::copy(os::stdout, os::stdin)) {
- err: io::error => fmt::fatal("Error copying <stdin>: {}",
- io::errstr(err)),
- size => return,
- };
-
- for (let i = 1z; i < len(os::args); i += 1) {
- let f = match (os::open(os::args[i], io::mode::RDONLY)) {
- s: *io::stream => s,
- err: io::error => fmt::fatal("Error opening {}: {}",
- os::args[i], io::errstr(err)),
- };
- defer io::close(f);
-
- match (io::copy(os::stdout, f)) {
- err: io::error => fmt::fatal("Error copying {}: {}",
- os::args[i], io::errstr(err)),
- size => void,
- };
- };
-};
-```
-
-I'm looking for a few volunteers to get involved and help flesh out the standard library. If you are interested, please email sir@cmpwn.com to express your interest, along with your sr.ht username and a few words about your systems programming experience — languages you're comfortable with, projects you've worked on, platforms you grok, etc.
diff --git a/content/blog/Status-update-February-2021.html b/content/blog/Status-update-February-2021.html
@@ -1,7 +1,6 @@
---
title: Status update, February 2021
date: 2021-02-15
-outputs: [html, gemtext]
---
<p>
diff --git a/content/blog/Status-update-February-2022.md b/content/blog/Status-update-February-2022.md
@@ -1,7 +1,6 @@
---
title: Status update, February 2022
date: 2022-02-15
-#outputs: [html, gemtext]
---
Hello once again! Another month of free software development goes by with lots
diff --git a/content/blog/Status-update-January-2021.gmi b/content/blog/Status-update-January-2021.gmi
@@ -1,12 +0,0 @@
-Hello from the future! My previous status update was last year, but it feels like it was only a month ago. I hope you didn't miss my crappy jokes too much during the long wait.
-
-One of the advancements that I would like to mention this month is the general availability of godocs.io, which is a replacement for the soon-to-be-obsolete godoc.org, based on a fork of their original codebase. Our fork has already attracted interest from many contributors who wanted to work on godoc.org, but found the Google CLA distasteful. We've been hard at work excising lots of Google crap, rewriting the indexer to use PostgreSQL instead of GCP, and making the little JavaScript bits more optional & more conservative in their implementation. We also plan to update it with first-class support for Go modules, which was never added to the upstream gddo codebase. Beyond this, we do not plan on making any large-scale changes: we just want godoc.org to keep being a thing. Enjoy!
-
-=> https://godocs.io godocs.io
-=> https://sr.ht/~sircmpwn/godocs.io Our fork of gddo
-
-On SourceHut, the first point of note is the new dark theme, which is automatically enabled when your user-agent configures prefers-color-scheme: dark. It has gone through a couple of iterations of refinement, and I have a few more changes queued up for my next round of improvements. Please let me know if you notice anything unusual! Additionally, I broke ground on the todo.sr.ht API 2.0 implementation this month. It required some minor changes to our underlying GraphQL approach, but in general it should be fairly straightforward — albeit time consuming — to implement. Ludovic has also started working on an API 2.0 branch for hg.sr.ht, which I plan on reviewing shortly.
-
-Small projects have enjoyed some improvements as well. mkproof grew multi-processor support and had its default difficulty tweaked accordingly — thanks, Tom! Zach DeCook and Nolan Prescott also sent some bugfixes for gmnisrv, and René Wagner and Giuseppe Lumia both helped fix some issues with gmni as well. Jason Phan sent an improvement for dowork which adds random jitter to the exponential backoff calculation. Thanks to all of these folks for their help!
-
-That's all for today. Thanks again for your support and attention, and I'll see you again soon!
diff --git a/content/blog/Status-update-January-2021.html b/content/blog/Status-update-January-2021.html
@@ -1,7 +1,6 @@
---
title: Status update, January 2021
date: 2021-01-15
-outputs: [html, gemtext]
---
<p>
diff --git a/content/blog/Status-update-January-2022.gmi b/content/blog/Status-update-January-2022.gmi
@@ -1,19 +0,0 @@
-Happy New Year! I had a lovely time in Amsterdam. No one had prepared me for the (apparently infamous) fireworks culture of the Netherlands. I thought it was really cool.
-
-Our programming language continues to improve apace. Our cryptography suite now includes Argon2, Salsa20/XSalsa20, ChaCha20/XChaCha20, and Poly1305, and based on these functions we have added libsodium-style high-level cryptographic utilities for AEAD and key derivation, with stream encryption, message signing and verification, and key exchange coming soon. We have also laid out the priorities for future crypto work towards supporting TLS, and along the way we expect to have ed25519/x25519 and Diffie-Hellman added soon. Perhaps enough to implement an SSH client?
-
-I also implemented an efficient path manipulation module for the standard library (something I would really have liked to have in C!), and progress continues on date/time support. We also have a new MIME module (just for Media Types, not all of MIME) and I expect a patch implementing net::uri to arrive in my inbox soon. I also finished up cmsg support (for sendmsg and recvmsg), which is necessary for the Wayland implementation I’m working on (and was a major pain in the ass). I helped another collaborator who is developing a RISC-V kernel in our language as well, implementing a serial driver for the SiFive UART, plus improving the device tree loader and UEFI support.
-
-One of the standard library contributors also wrote a side-project to implement Ray Tracing in One Weekend in our language.
-
-=> https://raytracing.github.io/ Ray Tracing in One Weekend
-=> https://git.sr.ht/~turminal/raytracing/blob/master/example.png A sample render
-
-In other words, language development has been very busy in the past few weeks. Another note: I have prepared a lightning talk for FOSDEM about the backend we’re using: qbe. Check it out!
-
-=> https://fosdem.org/2022/schedule/event/lg_qbe/ My lightning talk on the FOSDEM schedule
-=> https://c9x.me/compile More about qbe
-
-In SourceHut news, we have brought on a new full-time contributor, Adnan Maolood, thanks to a generous grant from NLNet. We also have another full-time software engineer starting on February 1st (on our own dime), so I’m very much looking forward to that. Adnan will be helping us with the GraphQL work, and the new engineer will be working similarly to Simon and I on FOSS projects generally (and, hopefully, with GraphQL et al as well). Speaking of GraphQL, I’m putting the finishing touches on the todo.sr.ht writable API this week: legacy webhooks. These are nearly done, and following this we need to do the security review and acceptance testing, then we can ship. Adnan has been hard at work on adding GraphQL-native webhooks to git.sr.ht, which should also ship pretty soon.
-
-That’s all for today. Thanks for reading! I’ll see you again in another month.
diff --git a/content/blog/Status-update-January-2022.md b/content/blog/Status-update-January-2022.md
@@ -1,7 +1,6 @@
---
title: Status update, January 2022
date: 2022-01-17
-outputs: [html, gemtext]
---
Happy New Year! I had a lovely time in Amsterdam. No one had prepared me for the
diff --git a/content/blog/Status-update-July-2021.gmi b/content/blog/Status-update-July-2021.gmi
@@ -1,42 +0,0 @@
-Hallo uit Nederland! I’m writing to you from a temporary workstation in Amsterdam, pending the installation of a better one that I’ll put together after I visit a furniture store today. I’ve had to slow a few things down somewhat while I prepare for this move, and I’ll continue to be slower for some time following it, but things are moving along regardless.
-
-One point of note is that the maintainer for aerc, Reto Brunner, has stepped down from his role. I’m looking for someone new to fill his shoes; please let me know if you are interested.
-
-As far as the language project is concerned, there has been some significant progress. We’ve broken ground on the codegen rewrite, and it’s looking much better than its predecessor. I expect progress on this front to be fairly quick. In the meanwhile, a new contributor has come onboard to help with floating-point math operations, and I merged their first patch this morning — adding math::abs, math::copysign, etc. Another contributor has been working in a similar space, and sent in an f32-to-string function last week. I implemented DNS resolution and a “dial” function as well, which you can read about in my previous post about a finger client.
-
-=> gemini://drewdevault.com/2021/06/24/finger-client.gmi Previously: A finger client
-
-I also started writing some POSIX utilities in the new language for fun:
-
-```
-use fmt;
-use fs;
-use getopt;
-use io;
-use main;
-use os;
-
-export fn utilmain() (io::error | fs::error | void) = {
- const cmd = getopt::parse(os::args);
- defer getopt::finish(&cmd);
-
- if (len(cmd.args) == 0) {
- io::copy(os::stdout, os::stdin)?;
- return;
- };
-
- for (let i = 0z; i < len(cmd.args); i += 1z) {
- const file = match (os::open(cmd.args[i])) {
- err: fs::error => fmt::fatal("Error opening '{}': {}",
- cmd.args[i], fs::strerror(err)),
- file: *io::stream => file,
- };
- defer io::close(file);
- io::copy(os::stdout, file)?;
- };
-};
-```
-
-We’re still looking for someone to contribute in cryptography, and in date/time support — please let me know if you want to help.
-
-In SourceHut news, I have mostly been focused on writing the GraphQL API for lists.sr.ht. I have made substantial progress, and I had hoped to ship the first version before publishing today’s status updates, but I was delayed due to concerns with the move abroad. I hope to also have sr.ht available for Alpine 3.14 in the near future.
diff --git a/content/blog/Status-update-July-2021.md b/content/blog/Status-update-July-2021.md
@@ -1,7 +1,6 @@
---
title: Status update, July 2021
date: 2021-07-15
-outputs: [html, gemtext]
---
Hallo uit Nederland! I'm writing to you from a temporary workstation in
diff --git a/content/blog/Status-update-March-2021.gmi b/content/blog/Status-update-March-2021.gmi
@@ -1,99 +0,0 @@
-After the brief illusion of spring, this morning meets us with a cold apartment indoors and fierce winds outdoors. Today concludes a productive month, mainly for the secret project and for sourcehut, but also marked by progress in some smaller projects as well. I’ll start with those smaller projects.
-
-I have written a feed reader for Gemini, which is [1] free software, and [2] available as a free hosted service. Big thanks to adnano, the author of the go-gemini library, which has been very helpful for many of my Gemini-related exploits, and who has been a great collaborator. I also used it to provide Gemini support for the new pages.sr.ht, which offers static web and gemini hosting for sr.ht users. I also updated gmni to use BearSSL instead of OpenSSL this month.
-
-=> https://sr.ht/~sircmpwn/gemreader [1]
-=> gemini://feeds.drewdevault.com [2]
-
-godocs.io has been enjoying continued improvements, mainly thanks again to adnano. Heaps of obsolete interfaces and cruft have been excised, not only making it lighter for godocs.io, but also making our gddo fork much easier for you to run yourself. Adnan hopes to have first-class support for Go modules working soon, which will bring us up to feature parity with pkg.go.dev.
-
-There’s some sourcehut news as well, but I’ll leave that for the “What’s cooking” later today. Until next time!
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-Progress on the secret project has been phenomenal. In the last month, the standard library has doubled in size, and this weekend, we finished the self-hosted build driver. We are about 1,000 lines of code shy of having more code written in xxxx than in C. Here’s the build driver compiling and running itself several times:
-
-```
-$ •••• run ./cmd/•••• run ./cmd/•••• run -h
-run: compiles and runs •••• programs
-
-Usage: run [-v]
- [-D <ident:type=value>]
- [-j <jobs>]
- [-l <name>]
- [-T <tags...>]
- [-X <tags...>]
- path args...
-
--v: print executed commands
--D <ident:type=value>: define a constant
--j <jobs>: set parallelism for build
--l <name>: link with a system library
--T <tags...>: set build tags
--X <tags...>: unset build tags
-```
-
-The call for help last month was swiftly answered, and we have 7 or 8 new people working on the project now. We’ve completed enough work to unblock many workstreams, which will allow these new contributors to work in parallel on different areas of interest, which should substantially speed up progress.
diff --git a/content/blog/Status-update-March-2021.md b/content/blog/Status-update-March-2021.md
@@ -1,7 +1,6 @@
---
title: Status update, March 2021
date: 2021-03-15
-outputs: [html, gemtext]
---
After the brief illusion of spring, this morning meets us with a cold apartment
diff --git a/content/blog/Status-update-May-2021.gmi b/content/blog/Status-update-May-2021.gmi
@@ -1,45 +0,0 @@
-Hello! This update is a bit late. I was travelling all day yesterday without internet, so I could not prepare these. After my sister and I got vaccinated, I took a trip to visit her at her home in beautiful Hawaii — it felt great after a year of being trapped within these same four walls. I hope you get that vaccine and things start to improve for you, too!
-
-In SourceHut news, I’ve completed and shipped the first version of the builds.sr.ht GraphQL API. Another update, implementing the write functionality, will be shipping shortly, once the code review is complete. The next one up for a GraphQL API will probably be lists.sr.ht. After that it’s just man.sr.ht, paste.sr.ht, and dispatch.sr.ht — all three of which are pretty small. Then we’ll implement a few extra features like GraphQL-native webhooks and we’ll be done!
-
-Adnan Maolood has also been hard at work improving godocs.io, including the now-available gemini version. I wrote a post just about godocs.io earlier this month.
-
-=> gemini://godocs.io godocs.io via Gemini
-
-Here’s some secret project code I’ve been working on recently:
-
-```
-use errors;
-use fmt;
-use linux::io_uring::{setup_flags};
-use linux::io_uring;
-use strings;
-
-export fn main() void = {
- let params = io_uring::params { ... };
-	let ring = match (io_uring::setup(32, &params)) {
- err: io_uring::error => fmt::fatal(io_uring::strerror(err)),
- ring: io_uring::io_uring => ring,
- };
- defer io_uring::finish(&ring);
-
- let sqe = match (io_uring::get_sqe(&ring)) {
- null => abort(),
- sqe: *io_uring::sqe => sqe,
- };
- let buf = strings::toutf8("Hello world!\n");
- io_uring::write(sqe, 1, buf: *[*]u8, len(buf));
- io_uring::submit_wait(&ring, 1)!;
- let cqe = match (io_uring::get_cqe(&ring, 0, 0)) {
- err: errors::opaque =>
- fmt::fatal("Error: {}", errors::strerror(err)),
- cqe: nullable *io_uring::cqe => {
- assert(cqe != null);
- cqe: *io_uring::cqe;
- },
- };
- fmt::errorfln("result: {}", cqe.res)!;
-};
-```
-
-The API here is a bit of a WIP, and it won’t be available to users, anyway — the low-level io_uring API will be wrapped by a portable event loop interface (tentatively named “iobus”) in the standard library. I’m planning on using this to write a finger server.
diff --git a/content/blog/Status-update-May-2021.md b/content/blog/Status-update-May-2021.md
@@ -1,7 +1,6 @@
---
title: Status update, May 2021
date: 2021-05-16
-outputs: [html, gemtext]
---
Hello! This update is a bit late. I was travelling all day yesterday without
diff --git a/content/blog/Status-update-November-2020.gmi b/content/blog/Status-update-November-2020.gmi
@@ -1,100 +0,0 @@
-Greetings, humanoids! Our fleshy vessels have aged by 2.678×10⁶ seconds, and you know what that means: time for another status update! Pour a cup of your favorite beverage stimulant and gather 'round for some news.
-
-First off, today is the second anniversary of SourceHut's alpha being opened to the public, and as such, I've prepared a special blog post for you to read. I'll leave the sr.ht details out of this post and just send you off to read about it there.
-
-=> https://sourcehut.org/blog/2020-11-15-sourcehut-2-year-alpha/ SourceHut's second year in alpha
-
-What else is new? Well, a few things. For one, I've been working more on Gemini. I added CGI support to gmnisrv and wrote a few CGI scripts to do neato Gemini things with. I've also added regexp routing and URL rewriting support. We can probably ship gmnisrv 1.0 as soon as the last few bugs are flushed out and a couple of minor features are added; we might switch to another SSL implementation as well. Thanks to the many contributors who've helped out: William Casarin, Tom Lebreux, Kenny Levinsen, Eyal Sawady, René Wagner, dbandstra, and mbays.
-
-=> https://git.sr.ht/~sircmpwn/cgi-scripts CGI scripts
-
-In BARE news: Elm, Erlang, Java, and Ruby implementations have appeared, and I have submitted a draft RFC to the IETF for standardization.
-
-=> https://datatracker.ietf.org/doc/draft-devault-bare/ IETF: draft-devault-bare
-
-Finally, I wrote a new Wayland server for you. Its only dependencies are a POSIX system and a C11 compiler — and it works with Nvidia GPUs, or even systems without OpenGL support at all. Here's the code:
-
-```
-#include <poll.h>
-#include <signal.h>
-#include <stdint.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <string.h>
-#include <sys/ioctl.h>
-#include <sys/mman.h>
-#include <sys/socket.h>
-#include <sys/un.h>
-#include <time.h>
-#include <unistd.h>
-
- typedef int16_t i16; typedef int32_t i32; typedef uint16_t u16;
- typedef uint32_t u32; typedef uint8_t u8; typedef int I; typedef size_t S;
- typedef struct{int b;u32 a,c,d;char*e;i32 x,y,w,h,s,fmt;}O;
- typedef struct{char*a;u32 b;I(*c)(I,I,u32,u16);}G;
-
- struct pollfd fds[33];char a[0xFFFF];I b=0,c=1,d=-1;u32 e=0;
- O f[128][128];G g[];
-
-#define AR I z,I y,u32 x,u16 c
-#define SN(n) (((n)%4)==0?(n):(n)+(4-((n)%4)))
-
-void u8w(FILE*f,u32 ch){char b[4];for (I i=2;i>0;--i){b[i]=(ch&0x3f)|0x80;ch>>=6
-;}b[0]=ch|0xE0;fwrite(b,1,3,f);}void xrgb(u32*data,i32 w,i32 h,i32 s){struct
-winsize sz;ioctl(0,TIOCGWINSZ,&sz);--sz.ws_row;printf("\x1b[H\x1b[2J\x1b[3J");
-for(I y=0;y<sz.ws_row;++y){for(I x=0;x<sz.ws_col;++x){I i=0;u32 c = 0x2800;const
-I f[]={0,3,1,4,2,5,6,7};for(I my=0;my<4;++my)for(I mx=0;mx<2;++mx){u32 p=data[((
-y*4+my)*h)/(sz.ws_row*4)*(s/4)+((x*2+mx)*w)/(sz.ws_col*2)];u8 avg=((p&0xFF)+((p
->>8)&0xFF)+((p>>16)&0xFF))/3;if(avg>0x80)c|=1<<f[i++];}u8w(stdout,c);}putchar(
-'\n');}fflush(stdout);}O*ao(I z,u32 y,I x){for(S i=0;i<128;++i)if(f[z][i].a==0){
-f[z][i].a=y;f[z][i].b=x;return &f[z][i];}return 0;}O*go(I z,u32 y){for(S i=0;i<
-128;++i)if(f[z][i].a==y)return &f[z][i];return 0;}void wh(I z,i32 y,i16 x, i16 w
-){write(z,&y,4);i32 u=((w+8)<<16)|x;write(z,&u,4);}void ws(I z,char*y){i32 l=
-strlen(y)+1;write(z,&l,4);l=SN(l);write(z,y,l);}I rs(I z){u32 l;read(z,&l,4);l=
-SN(l);read(z,a,l);return l+4;}I ga(I z,I y,u32 x,u16 w){u32 b,u,t,s=0;read(y,&u,
-4);I sz=rs(y)+12;read(y,&b,4);read(y,&t,4);--u;ao(z,t,u);switch(u){case 1:++s;wh
-(y,t,0,4);write(y,&s,4);--s;break;case 6:s=0;break;default:return sz;}wh(y,t,0,4
-);write(y,&s,4);return sz;}I gb(AR){u32 w,u;read(y,&w,4);I t=d;d=-1;read(y,&u,4);
-O *o=ao(z,w,15);o->e=mmap(0,u,PROT_READ,MAP_PRIVATE,t,0);return 8;}I gc(AR){u32
-w;read(y,&w,4);ao(z,w,8+c);return 4;}I gd(AR){O*o,*r,*t;i32 u,w;switch(c){case 1
-:read(y,&u,4);read(y,&w,4);read(y,&w,4);go(z,x)->d=u;if(!u==0){O *b=go(z,u);xrgb
-((u32*)b->e,b->w,b->h,b->s);}wh(y,u,0,0);return 12;case 2:read(y,&u,4);read(y, &
-u, 4);read(y, &u, 4);read(y, &u, 4);return 16;case 3:read(y,&w,4);struct timespec
-ts={.tv_sec=0,.tv_nsec=1.6e6};nanosleep(&ts,0);wh(y,w,0,4);write(y,&w,4);return
-4;case 6:o=go(z,x);r=go(z,o->c);if(r&&r->b==12){t=go(z,r->c);if(t->b==13){u32
-s=0; wh(y,t->a,0,12);write(y,&s,4);write(y,&s,4);write(y,&s,4);}wh(y,r->a,0,4);
-write(y,&e,4);++e;}break;}return 0;}I ge(AR){u32 w,u;if(c==2){read(y,&w,4);read(
-y,&u,4);O*o=ao(z,w,12);o->d=u;go(z,u)->c=w;return 8;}return 0;}I gf(AR){u32 w,ae
-;switch(c){case 1:read(y,&w,4);O*obj=ao(z,w,13);obj->d=x;go(z,x)->c=w;return 4;
-case 4:read(y,&ae,4);return 4;}return 0;}I gg(AR){I sz;switch(c){case 2:sz=rs(y)
-;return sz;}return 0;}I gh(AR){i32 w,u;switch(c){case 0:read(y,&w,4);read(y,&u,4
-);O*b=ao(z,w,10);b->e=&go(z,x)->e[u];read(y,&b->w,4);read(y,&b->h,4);read(y,&b->
- s,4);read(y,&b->fmt,4);return 24;}return 0;}
-
-G g[]={{0,0,ga},{"wl_shm",1,gb},{"wl_compositor",1,gc},{"wl_subcompositor",1,0},
-{"wl_data_device_manager",3,0},{"wl_output",3,0},{"wl_seat",7,0},{"xdg_wm_base",
-2,ge},{0,0,gd},{0,0,0},{0,0,0},{0,0,0},{0,0,gf},{0,0,gg},{0,0,0},{0,0,gh}};
-
-void gi(AR){u32 w;switch(c){case 0:read(y,&w,4);wh(y,w,0,4);write(y,&w,4);break;
-case 1:read(y,&w,4);ao(z,w,0);for(S i=0;i<sizeof(g)/sizeof(g[0]);){G*z=&g[i++];
-if(!z->b)continue;I gl=strlen(z->a)+1;wh(y,w,0,4+SN(gl)+4+4);write(y,&i,4);ws(y,
- z->a);write(y,&z->b,4);}break;}}void si(){c=0;}
-
-I main(I _1,char**_2){I z=socket(AF_UNIX,SOCK_STREAM,0);struct sockaddr_un y={.
-sun_family=AF_UNIX};char *x=getenv("XDG_RUNTIME_DIR");if(!x)x="/tmp";do{sprintf(
- y.sun_path,"%s/wayland-%d",x,c++);}while(access(y.sun_path,F_OK)==0);bind(z, (
-struct sockaddr *)&y,sizeof(y));listen(z,3);for(S i=0;i<sizeof(fds);++i){fds[i].
-events=POLLIN|POLLHUP;fds[i].revents=0;fds[i].fd=0;}fds[b++].fd=z;signal(SIGINT,
-si);memset(&f,0,sizeof(f));while(poll(fds,b,-1)!=-1&&c){if(fds[0].revents){I u=
-accept(z,0,0);fds[b++].fd=u;}for(I i=1;i<b;++i){if(fds[i].revents&POLLHUP){
-memmove(&fds[i],&fds[i+1],32-i);memset(f[i-1],0,128*sizeof(**f));--b;continue;}
-else if(!fds[i].revents){continue;}I u=i-1;I t=fds[i].fd;u32 s,r;char q[
-CMSG_SPACE(8)];struct cmsghdr *p;struct iovec n={.iov_base=&s,.iov_len=4};struct
-msghdr m={0};m.msg_iov=&n;m.msg_iovlen=1;m.msg_control=q;m.msg_controllen=sizeof
-(q);recvmsg(t,&m,0);p=CMSG_FIRSTHDR(&m);if(p){d=*(I *)CMSG_DATA(p);}read(t,&r,4)
-;u16 o=((r>>16)&0xFFFF)-8,c=r&0xFFFF;if(s==1){gi(u,t,s,c);}else{for(S j=0;j<128;
-++j){if(f[u][j].a==s&&g[f[u][j].b].c){o-=g[f[u][j].b].c(u,t,s,c);break;}}if(o>0)
- {read(t,a,o);}}}}unlink(y.sun_path);}
-```
-
-You're welcome!
diff --git a/content/blog/Status-update-November-2020.md b/content/blog/Status-update-November-2020.md
@@ -1,7 +1,6 @@
---
title: Status update, November 2020
date: 2020-11-15
-outputs: [html, gemtext]
---
Greetings, humanoids! Our fleshy vessels have aged by 2.678×10⁶ seconds, and you
diff --git a/content/blog/Status-update-November-2021.gmi b/content/blog/Status-update-November-2021.gmi
@@ -1,11 +0,0 @@
-Hello again! Following a spooky month, we find ourselves again considering the progress of our eternal march towards FOSS world domination.
-
-I’ll first address SourceHut briefly: today is the third anniversary of the opening of the public alpha! I have written a longer post for sourcehut.org which I encourage you to read for all of the details.
-
-=> https://sourcehut.org/blog/2021-11-15-sourcehuts-third-year/ SourceHut's third year
-
-In other news, we have decided to delay the release of our new programming language, perhaps by as much as a year. We were aiming for February ’22, but slow progress on some key areas such as cryptography and the self-hosting compiler, plus the looming necessity of full-scale acceptance testing of the whole language and standard library, combine to make us unsure about meeting the original release plans. However, while progress is slow, it is steady. We have incorporated the first parts of AES support in our cryptography library, and ported the language to FreeBSD. A good start on date/time support is under development, and I’m pretty optimistic about the API design we’ve come up with. Things are looking good, but it will take longer than expected.
-
-visurf has enjoyed quite a bit of progress this month, thanks in large part to the help of a few new contributors. Nice work, everyone! We could still use more help, so please swing by the #netsurf channel on Libera Chat if you’re interested in participating. Improvements which landed this month include configuration options, URL filtering via awk scripts, searching through pages, and copying links on the page with the link following tool.
-
-Projects which received minor updates this month include scdoc, gmni, kineto, and godocs.io. That’s it for today! My focus for the next month will be much the same as this month: SourceHut GraphQL work and programming language work. See you in another month!
diff --git a/content/blog/Status-update-November-2021.md b/content/blog/Status-update-November-2021.md
@@ -1,7 +1,6 @@
---
title: Status update, November 2021
date: 2021-11-15
-outputs: [html, gemtext]
---
Hello again! Following a spooky month, we find ourselves again considering the
diff --git a/content/blog/Status-update-October-2020.gmi b/content/blog/Status-update-October-2020.gmi
@@ -1,28 +0,0 @@
-I'm writing this month's status update from a brand-new desktop workstation (well, I re-used the GPU), my first new workstation in about 10 years. I hope this new one lasts for another decade! I aimed for something smaller and lightweight this time — it's a Mini-ITX build. I've only been running this for a few days, so let me tell you about the last few accomplishments attributable to my venerable workstation's final days of life.
-
-First, there's been a ton of important work completed for SourceHut's API 2.0 plans. All of the main blockers for the first version of meta.sr.ht's writable GraphQL API are resolved, and after implementing a few more resolvers it should be in a shippable state. This included riggings for database transactions, simplification of the mini-"ORM" I built, and support for asynchronous work like delivering webhooks. The latter called for a new library, dowork, which you're free to reuse to bring asynchronous work processing to your Go programs.
-
-=> https://sr.ht/~sircmpwn/dowork/ dowork
-
-I also built a new general-purpose daemon for SourceHut called chartsrv, which can be used to generate graphs from Prometheus data.
-
-=> https://sr.ht/~sircmpwn/chartsrv/ chartsrv
-=> https://prometheus.io/ prometheus
-
-I've been getting more into Gemini this month, and have completed three (or four?) whole projects for it:
-
-=> https://sr.ht/~sircmpwn/gmni/ gmni and gmnlm: a client implementation and line-mode browser
-=> https://sr.ht/~sircmpwn/gmnisrv/ gmnisrv: a server implementation
-=> https://sr.ht/~sircmpwn/kineto/ kineto: an HTTP->Gemini portal
-
-The (arguably) fourth project is the completion of a Gemini version of this blog, which is available at gemini://drewdevault.com, or via the kineto portal at https://portal.drewdevault.com. I'll be posting some content exclusively on Gemini (and I already have!), so get yourself a client if you want to tune in.
-
-I have also invested some effort into himitsu, a project I shelved for so long that you probably don't remember it. Worry not, I have rewritten the README.md to give you a better introduction to it.
-
-=> https://git.sr.ht/~sircmpwn/himitsu himitsu
-
-Bonus update: two new BARE implementations have appeared: OCaml and Java.
-
-=> https://baremessages.org BARE
-
-That's all for now! I'll see you for the next update soon. Thanks for your support!
diff --git a/content/blog/Status-update-October-2020.md b/content/blog/Status-update-October-2020.md
@@ -1,7 +1,6 @@
---
title: Status update, October 2020
date: 2020-10-15
-outputs: [html, gemtext]
---
I'm writing this month's status update from a brand-new desktop workstation
diff --git a/content/blog/Status-update-October-2021.gmi b/content/blog/Status-update-October-2021.gmi
@@ -1,19 +0,0 @@
-On this dreary morning here in Amsterdam, I’ve made my cup of coffee and snuggled my cat, and so I’m pleased to share some FOSS news with you. Some cool news today! We’re preparing for a new core product launch at sr.ht, cool updates for our secret programming language, plus news for visurf.
-
-Simon Ser has been hard at work on expanding his soju and gamja projects for the purpose of creating a new core sourcehut product: chat.sr.ht. We’re rolling this out in a private beta at first, to seek a fuller understanding of the system’s performance characteristics, to make sure everything is well-tested and reliable, and to make plans for scaling, maintenance, and general availability. In short, chat.sr.ht is a hosted IRC bouncer which is being made available to all paid sr.ht users, and a kind of webchat gateway which will be offered to unpaid and anonymous users. I’m pretty excited about it, and looking forward to posting a more detailed announcement in a couple of weeks. In other sourcehut news, work on GraphQL continues, with paste.sr.ht landing and todo.sr.ht’s writable API in progress.
-
-Our programming language project grew some interesting features this month as well, the most notable of which is probably reflection. I wrote an earlier blog post which goes over this in some detail. There’s also ongoing work to develop the standard library’s time and date support, riscv64 support is essentially done, and we’ve overhauled the grammar for switch and match statements to reduce a level of indentation for typical code. In the coming weeks, I hope to see date/time support and reflection fleshed out much more, and to see some more development on the self-hosted compiler.
-
-=> gemini://drewdevault.com/2021/10/05/Reflection.gmi Earlier: How reflection works in ****
-
-Work has also continued apace on visurf, which is a project I would love to have your help with — drop me a note on #netsurf on libera.chat if you’re interested. Since we last spoke, visurf has gained support for readline-esque keybindings on the exline, a “follow” mode for keyboard navigation, Wayland clipboard support, and a few other features besides. Please help! This project will need a lot of work to complete, and much of that work is very accessible to programmers of any skill level.
-
-Also on the subject of Netsurf and Netsurf-adjacent work, I broke ground on antiweb this month. The goal of this project is to provide a conservative CSS toolkit which allows you to build web interfaces which are compatible with marginalized browsers like Netsurf and Lynx. I should be able to migrate my blog to this framework in the foreseeable future, and ultimately the sourcehut frontend will be overhauled with this framework.
-
-And a collection of minor updates:
-
-* I have been working on Alpine Linux for RISC-V again, and have upstreamed the necessary patches to get U-Boot to bootstrap UEFI into GRUB for a reasonably sane boot experience. Next up will be getting this installed onto the onboard SPI flash so that it works more like native firmware.
-* I have tagged versions 1.0 of gmnisrv and gmni.
-* Adnan Maolood has been hard at work on godocs.io and we should soon expect a 1.0 of our gddo fork as well, which should make it more or less plug-and-play to get a working godocs instance on localhost from your local Go module cache.
-
-That’s all for today! Take care, and thank you as always for your continued support. I’ll see you next month!
diff --git a/content/blog/Status-update-October-2021.md b/content/blog/Status-update-October-2021.md
@@ -1,7 +1,6 @@
---
title: Status update, October 2021
date: 2021-10-15
-outputs: [html, gemtext]
---
On this dreary morning here in Amsterdam, I've made my cup of coffee and
diff --git a/content/blog/Status-update-September-2021.gmi b/content/blog/Status-update-September-2021.gmi
@@ -1,16 +0,0 @@
-It’s a quiet, foggy morning here in Amsterdam, and here with my fresh mug of coffee and a cuddly cat in my lap, I’d like to share the latest news on my FOSS efforts with you. Grab yourself a warm drink and a cat of your own and let’s get started.
-
-First, a new project: visurf. I announced this a few days ago, but the short of it is that I am building a minimal Wayland-only frontend for the NetSurf web browser which uses vi-inspired keybindings. Since the announcement there has been some good progress: touch support, nsvirc, tabs, key repeat, and so on. Some notable medium-to-large efforts ahead of us include a context menu on right click, command completion and history, kinetic scrolling via touch, pinch-to-zoom, clipboard support, and a readability mode. Please help! It’s pretty easy to get involved: join the IRC channel at #netsurf on libera.chat and ask for something to do.
-
-=> https://sr.ht/~sircmpwn/visurf visurf project information
-=> /2021/09/11/visurf-announcement.gmi Project announcement
-
-The programming language is also doing well. Following the codegen rewrite we have completed some long-pending refactoring to parts of the language design, which we intend to keep working on with further refinements in the coming weeks and months. We also developed a new frontend for reading the documentation in your terminal:
-
-=> https://asciinema.org/a/q53ZaG138sp89gKYqo1fui9Qj Asciinema recording demonstrating the new documentation tool
-
-Other improvements include the addition of parametric format modifiers (fmt::printfln("{%}", 10, &fmt::modifiers { base = strconv::base::HEX, ... })), fnmatch, and (WIP) design improvements to file I/O, the latter relying on new struct subtyping semantics. I’m hoping that we’ll have improvements to the grammar and semantics of match expressions and tagged unions in the near future, and we are also looking into some experiments with reflection.
-
-Many improvements have landed for SourceHut. lists.sr.ht now has a writable GraphQL API, along with the first implementation of GraphQL-native webhooks. Thanks to a few contributors, you can also now apply custom sorts to your search results on todo.sr.ht, and builds.sr.ht has grown Rocky Linux support. More details to follow in the “What’s cooking” post for the SourceHut blog.
-
-That’s all for today! Thanks for tuning in for this update, and thanks for continuing to support our efforts. Have a great day!
diff --git a/content/blog/Status-update-September-2021.md b/content/blog/Status-update-September-2021.md
@@ -1,7 +1,6 @@
---
title: Status update, September 2021
date: 2021-09-15
-outputs: [html, gemtext]
---
It's a quiet, foggy morning here in Amsterdam, and here with my fresh mug of
diff --git a/content/blog/Stepping-away-from-Gemini.gmi b/content/blog/Stepping-away-from-Gemini.gmi
@@ -1,13 +0,0 @@
-I'm going to wind down my gemlog. I'm not going to take it offline, but I am going to stop writing content for it.
-
-I have enjoyed writing on my gemlog, especially about things which aren't necessarily a good fit for my HTTP blog. I definitely got over the idea of dual-publishing; I prefer to keep the two mediums mostly separate now. Things like short stories, anime reviews, and other off-color content have been a good fit for my Gemini posts. It's nice to have a space for that, and I will miss it.
-
-The reason I'm going to stop posting is that I don't find myself reading things in Geminispace anymore. The kind of content which is usually posted here just tends not to align with my interest areas. I check Antenna fairly often, but it's been months since I saw a post there which really resonated with me. That's not a problem with what everyone is writing about, but has more to do with what I want to be reading.
-
-I like to read medium- to long-form technical content. All of my favorite authors of such content post only on HTTP, and I read them through a web-based RSS reader. Most of the technical content on Gemini is focused on Gemini itself, and I think we could have closed the book on Gemini-the-technology a long time ago. Gemini is supposed to be small and simple, and at some point "done", which means we can't milk that content cow forever.
-
-If I don't want to read Gemini, I don't want to write Gemini either. I don't think that's fair to the Gemini community. It would be better to participate from both ends, and if I only find myself enjoying the writing side of it, then I don't think that forms a good relationship with the community.
-
-I still think Gemini is a cool technology, though much like everyone else I have my own list of complaints about it (mainly having to do with TLS). I like writing Gemini-related technology; gmnisrv, gmni, and kineto are all projects I enjoyed writing, and I'm glad that they've proven useful to the Gemini community. I hope they continue to do so. There are just shy of 400 capsules on pages.sr.ht as well, which I am pleased to provide hosting for. Content like godocs.io and the future SourceHut Gemini frontend are still on the table, and I also intend to make a capsule for the new language I'm working on. The only thing that's stopping is the gemlog.
-
-I may return as Gemini continues to grow and a greater variety of content is available to peruse. In the meantime, thanks for reading! Maybe we'll meet again.
diff --git a/content/blog/Stepping-away-from-Gemini.md b/content/blog/Stepping-away-from-Gemini.md
@@ -1,5 +0,0 @@
----
-title: Stepping away from Gemini
-date: 2022-04-16
-outputs: [gemtext]
----
diff --git a/content/blog/Sustainable-creativity-post-copyright.gmi b/content/blog/Sustainable-creativity-post-copyright.gmi
@@ -1,24 +0,0 @@
-I don’t believe in copyright. I argue that we need to get rid of copyright, or at least dramatically reform it. The public domain has been stolen from us, and I want it back. Everyone reading this post has grown up in a creative world defined by capitalism, in which adapting and remixing works — a fundamental part of the creative process — is illegal. The commons is dead, and we suffer for it. But, this is all we’ve ever known. It can be difficult to imagine a world without copyright.
-
-When I present my arguments on the subject, the most frequent argument I hear in response is something like the following: “artists have to eat, too”. The answer to this argument is so mind-bogglingly obvious that, in the absence of understanding, it starkly illuminates just how successful capitalism has been in corrupting a broad human understanding of empathy. So, I will spell the answer out: why do we have a system which will, for any reason, deny someone access to food? How unbelievably cruel is a system which will let someone starve because they cannot be productive within the terms of capitalism?
-
-My argument is built on the more fundamental understanding that access to fundamental human rights such as food, shelter, security, and healthcare must not be contingent on one’s ability to be productive under the terms of capitalism. And I emphasize the “terms of capitalism” here deliberately: how much creativity is stifled because it cannot be expressed profitably? The system is not just cruel, but it also limits the potential of human expression, which is literally the only thing that creative endeavours are concerned with.
-
-The fact that the “starving artist” is such a common trope suggests to us that artists aren’t putting food on the table under the copyright regime, either. Like in many industries under capitalism, artists are often not the owners of the products of their labor. Copyright protects the rights holder, not the author. The obscene copyright rules in the United States, for example, do the artist little good when the term ends 70 years after their death. Modern copyright law was bought, paid for, and written by corporate copyright owners, not artists. What use is the public domain to anyone when something published today cannot be legally remixed by even our great-great-grandchildren?
-
-Assume that we address both of these problems: we create an empathetic system which never denies a human being of their fundamental right to live, and we eliminate copyright. Creativity will thrive under these conditions. How?
-
-Artists are free to spend their time at their discretion under the new copyright-free regime. They can devote themselves to their work without concern for whether or not it will sell, opening up richer and more experimental forms of expression. Their peers will be working on similar terms, freeing them to more frequent collaborations of greater depth. They will build upon each other’s work to create a rich commons of works and derivative works.
-
-There’s no escaping the fact that derivation and remixing is a fundamental part of the creative process, and that copyright interferes with this process. Every artist remixes the works of other artists: this is how art is made. Under the current copyright regime, this practice ranges from grey-area to illegal, and because money makes right, rich and powerful artists aggressively defend their work, extracting rent from derivative works, while shamelessly ripping off works from less powerful artists who cannot afford to fight them in court. Eliminating copyright rids us of this mess and acknowledges that remixing is part of the creative process, freeing artists to build on each other’s work.
-
-This is not a scenario in which artists stop making money, or in which the world grinds to a halt because no one is incentivized to work anymore. The right to have your fundamental needs met does not imply that we must provide everyone with a luxurious lifestyle. If you want a nicer house, more expensive food, to go out to restaurants and buy fancy clothes — you need to work for it. If you want to commercialize your art, you can sell CDs and books, prints or originals, tickets to performances, and so on. You can seek donations from your audience through crowdfunding platforms, court wealthy patrons of the arts, or take on professional work making artistic works like buildings and art installations for the public and private sectors. You could even get a side job flipping burgers or take on odd jobs to cover the costs of materials like paint or musical instruments — but not your dinner or apartment. The money you earn stretches further, not being eaten away by health insurance or rent or electricity bills. You invest your earnings into your art, not into your livelihood.
-
-Copyright is an absurd system. Ideas do not have intrinsic value. Labor has value, and goods have value. Ideas are not scarce. By making them artificially so, we sabotage the very process by which ideas are made. Copyright is illegitimate, and we can, and ought to, get rid of it.
-
---
-
-Aside: I came across a couple of videos recently that I thought were pretty interesting and relevant to this topic. Check them out:
-
-=> https://yewtu.be/watch?v=MZ2GuvUWaP8 Everything is a Remix Part 1 (2021), by Kirby Ferguson
-=> https://yewtu.be/watch?v=ZZ3F3zWiEmc The Art Market is a Scam (And Rich People Run It)
diff --git a/content/blog/Sustainable-creativity-post-copyright.md b/content/blog/Sustainable-creativity-post-copyright.md
@@ -1,7 +1,6 @@
---
title: Sustainable creativity in a world without copyright
date: 2021-12-23
-outputs: [html, gemtext]
---
I don't believe in copyright. I argue that we need to get rid of copyright, or
diff --git a/content/blog/Terminal-emulation-legacy.gmi b/content/blog/Terminal-emulation-legacy.gmi
@@ -1,105 +0,0 @@
-Modern terminal emulators are the tip of an iceberg of legacy which may form the longest-lived legacy system in active use. As anyone who has worked with legacy systems before might predict, the accumulated baggage of an ancient legacy system makes programming terminals a pretty miserable experience today 😅 How far back does it go, and how have the decisions of our ancestors affected the system as it appears today?
-
-Prepare yourself.
-
-=> https://en.wikipedia.org/wiki/House_of_Leaves See also: House of Leaves - Mark Z. Danielewski
-
-On Unix, the terminal emulator manages a resource called a pty, or "pseudoterminal". Check out "man pty" for the comprehensive details. These pseudoterminals enable communication between a master and slave process (the terminal emulator and the programs running in it, respectively). Two APIs emerged for dealing with ptys on Unix: System V (or "UNIX 98") and BSD. Only the former is recommended for modern applications.
-
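-For the curious, here is a minimal sketch of the recommended (UNIX 98) flavor: just the allocation handshake, without the fork/exec dance that a real terminal emulator performs next.
-
-```
-#define _XOPEN_SOURCE 600
-#include <fcntl.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <unistd.h>
-
-int main(void) {
-	/* ask the kernel for an unused pseudoterminal master */
-	int master = posix_openpt(O_RDWR | O_NOCTTY);
-	if (master < 0) {
-		perror("posix_openpt");
-		return 1;
-	}
-	/* set up permissions and unlock the slave end */
-	if (grantpt(master) < 0 || unlockpt(master) < 0) {
-		perror("grantpt");
-		return 1;
-	}
-	/* the slave end is a device node which a child process would
-	 * open and use as its controlling terminal */
-	printf("slave pty: %s\n", ptsname(master));
-	close(master);
-}
-```
-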
-These Unix flavors came up with the pseudoterminal to provide a pseudo- version of the real thing: a terminal. Yes, the terminal emulator, as the name might suggest, is but an emulator of a very real object, in the same sense that a GameBoy emulator emulates the behavior of a real system that money can buy. Terminals, now obsolete, were dedicated devices with a screen which connected to a minicomputer, mainframe, or modem, usually via a serial cable, and provided text-based input and output capabilities much like what we see in our terminal emulators today. Your terminal emulator is usually emulating, at a bare minimum, the DEC VT100 terminal, which looked like this:
-
-=> /DEC_VT100_terminal.jpg Picture of a DEC VT100. Photo credit: Jason Scott
-
-Your Unix system today still includes a state machine which aims to reproduce the behavior of this device, in concert with your terminal emulator. The TTY subsystem provides for the configuration of traits like baud rate, parity, and other signalling concerns. Yes, your terminal emulator has a baud rate. You can find out what it is like so:
-
-```
-#include <assert.h>
-#include <fcntl.h>
-#include <stdio.h>
-#include <sys/stat.h>
-#include <termios.h>
-#include <unistd.h>
-
-int main(void) {
- int fd = open("/dev/tty", O_RDWR);
- assert(fd >= 0);
- struct termios t;
- int r = tcgetattr(fd, &t);
- assert(r >= 0);
- int rate = 0;
- switch (cfgetospeed(&t)) {
- case B0: rate = 0; break;
- case B50: rate = 50; break;
- case B75: rate = 75; break;
- case B110: rate = 110; break;
- case B134: rate = 134; break;
- case B150: rate = 150; break;
- case B200: rate = 200; break;
- case B300: rate = 300; break;
- case B600: rate = 600; break;
- case B1200: rate = 1200; break;
- case B1800: rate = 1800; break;
- case B2400: rate = 2400; break;
- case B4800: rate = 4800; break;
- case B9600: rate = 9600; break;
- case B19200: rate = 19200; break;
- case B38400: rate = 38400; break;
- default: rate = -1; break;
- }
- printf("baud rate: %d\n", rate);
- close(fd);
-}
-```
-
-There's a lot of other state that's not really being used, but is nevertheless part of the system today. On Linux, "man ioctl_tty" will satisfy the curious reader, complete with an explanation of the differences between SVr4, UnixWare, Solaris, DG/UX, AIX, HP-UX, and Tru64. Pop over to "man termios" for more, like the configurations specific to terminals connected to modems, the readline-like behavior which is built in to the Linux TTY subsystem, or the fun considerations every new development like O_NONBLOCK had to make for this system. Fun fact: early versions of io_uring caused kernel lock-ups when writing to a TTY device.
-
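-That readline-like behavior is the line discipline's "canonical mode". As a rough sketch, here is how a TUI program might switch it off to take input one keypress at a time rather than one line at a time:
-
-```
-#include <stdio.h>
-#include <termios.h>
-#include <unistd.h>
-
-int main(void) {
-	struct termios saved, raw;
-	tcgetattr(STDIN_FILENO, &saved);
-	raw = saved;
-	/* ICANON buffers input line-by-line with basic editing;
-	 * ECHO prints each keypress back; disable both */
-	raw.c_lflag &= ~(ICANON | ECHO);
-	tcsetattr(STDIN_FILENO, TCSANOW, &raw);
-
-	char c;
-	read(STDIN_FILENO, &c, 1); /* returns after one keypress */
-	printf("you pressed: %c\n", c);
-
-	tcsetattr(STDIN_FILENO, TCSANOW, &saved); /* restore */
-}
-```
-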
-SVR4 brings us to 1988, so our legacy counter is at 33 years. Can we push it further? We could follow the Unix lineage back, or go straight to the serial communications these configuration options are manipulating - and we will go there - but I'd like to take a detour. All of the complexity thus far serves to manage the connection between the running program and the terminal displaying it. However, the data carried over this connection also features in-band signalling to control terminal features, in the form of ANSI escape codes. You have probably at least seen a snippet like this:
-
-```
-printf '\e[31mRed text\e[m\n'
-```
-
-The \e character here is the "escape" character in ASCII, which has codepoint 1B (or 033). This signals that a sequence of characters follows which signals the terminal to change its behavior. A variety of commands are available, but the [ character following ESC indicates a Control Sequence Introducer, which is the most common case. These sequences can, as in this example, control the text color, but can also move the cursor around the screen to print text in a non-linear fashion. These are the building blocks of "TUI", or Text User Interface, applications such as vi. This standard was established in 1976 as ECMA-48, adding another 12 years to our dive through history (now 45 years).
-
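-The same mechanism handles cursor addressing, which a short sketch can demonstrate on any VT100-compatible terminal:
-
-```
-#include <stdio.h>
-
-int main(void) {
-	printf("\x1b[2J");    /* CSI 2 J: erase the entire screen */
-	printf("\x1b[5;10H"); /* CSI 5;10 H: move to row 5, column 10 */
-	printf("\x1b[1;31mhello\x1b[m\n"); /* bold red text, then reset */
-}
-```
-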
-This standard is itself derivative of earlier works. The ANSI standard was established to address the growing capabilities of so-called video terminals, but these terminals themselves were recent innovations in their time, and included backwards compatibility with earlier technology: the teletype, also called... wait for it... a TTY. I could just tell you when these were introduced and wrap this article up now, but I'd like to draw a direct connection from these machines to their living legacy in your terminal emulator today.
-
-The latest models of teletypes were essentially typewriters connected to a spool of paper, onto which they could autonomously print type based on incoming electrical signals. This enforced certain limitations, one in particular: once printed, the ink could not be erased. However, interactive programs were still made under these conditions, including the famous ed, which is the standard Unix edi
-?
-?
-i
-tor.
-.
-Unix was written on one such device, using a similar editor. Also note that these interfaces saw the first computer games - that is, distinct from "video" games - which were primarily text-based adventure games, the most famous of which is aptly called "Adventure".
-
-Programs like these embraced their medium with as much enthusiasm as later programs did for later mediums. The medium even offered some advantages which video terminals lost, such as the ability to tear off the page you were working on and send it to your mate, or to write notes directly on the terminal output with a pencil. Like the ANSI escape codes that came later, the electronic signals the teletypes used also provided in-band signalling functionality to allow programs to perform complex output operations and fully utilize the capabilities of these devices, primarily through a standard called ASCII.
-
-Like the control sequences of video terminals, ASCII provides its own set of control sequences, some of which you have probably used. The "Line Feed" character, or LF (you may know it as '\n'), literally fed a new line into the teletype by engaging the spool motor. Carriage Return, or CR (or '\r'), literally returned the teletype's mechanical type carriage to its starting position on the line. Other characters in ASCII are assigned to signalling purposes, such as EOT, or end of transmission, which is assigned to character 4. Your terminal today supports all of these same features, and many programs take advantage of them - the most common modern example is perhaps using \r to move the cursor back to the start of a line:
-
-```
-#!/bin/sh
-i=0
-while [ $i -lt 10 ]
-do
- printf '\rProgress: %d%%' "$((i*10))"
- i=$((i+1))
- sleep 1
-done
-printf '\rProgress: 100%%\n'
-printf 'Done.\n'
-```
-
-The ASCII character set was designed to facilitate computer communications, and was established in 1963 for this purpose: 58 years ago. This essentially brings us to the dawn of the computer age, just 4 years after the development of the MOSFET transistor, which is the single most important invention for the explosion of computers into general use.
-
-Early computers prior to this point did not establish much in the way of legacy standards that are still a part of modern computing, so our story ends here. The programmers of this time lived in a world where communications were governed by telephones, and, before that, the telegraph.
-
-Hang on, though. These ASCII computers repurposed teletypes, an existing technology, so, naturally, their communication model was based on whatever those devices understood. Can we go back even further? The primary protocol for electronic signalling at this time was standardized as the International Telegraph Alphabet No. 2 (ITA2). If we look inside, we find all of the English letters, and also... null, carriage return, line feed, and bell.
-
-All of these characters were added to ASCII for backwards compatibility with ITA2, which was introduced in 1924, and are implemented by your terminal emulator today. 97 years of backwards compatibility. But, look carefully: why the "2" in ITA2?
-
-ITA2 is derived from the so-called "Murray Code", developed by Donald Murray in 1901. It introduced the first control characters, carriage return, line feed, and bell, which "dings" an audible bell upon receipt by the remote teleprinter, and might ding your terminal emulator if you run "printf '\a'". Our legacy now extends through an entire century, and beyond living memory. The Murray Code is not ITA1.
-
-ITA1 is the name which was ultimately given to the Baudot code, originally patented by Émile Baudot in 1872. It defines fewer control sequences: just "delete", which lives on in your terminal as ASCII character DEL (0x7F). However, it does provide us with one additional important link to the present: Baudot's name gave us the "baud" rate that our little C program printed out earlier. One continuous connection from past to present: 149 years of legacy code.
-
----
-
-P.S. A fun fact I learned which does not have any discernible connection to modern terminal emulators is that the fax machine was invented in 1846 and first made commercially available in 1865: 11 years before the invention of the telephone.
diff --git a/content/blog/Terminal-emulation-legacy.md b/content/blog/Terminal-emulation-legacy.md
@@ -1,5 +0,0 @@
----
-title: The distant legacy of terminal emulators
-date: 2021-10-05
-outputs: [gemtext]
----
diff --git a/content/blog/The-Netherlands.gmi b/content/blog/The-Netherlands.gmi
@@ -1,25 +0,0 @@
-I had been planning a move to the Netherlands for a while, at least until a large COVID-shaped wrench was thrown into the gears. However, I was fully vaccinated by early April, and there are signs of the border opening up now, so my plans have been slowly getting back on track. I sent off my visa application today, and assuming I can navigate the pandemic-modified procedures, I should be able to make my move fairly soon. It’s a little bit intimidating, but I am looking forward to it!
-
-> Quick note: I am looking for temporary housing in NL; somewhere I can stay for 6-12 months with a permanent mailing address for receiving immigration-related documents. I would prefer to rent out a room than to use some kind of commercial accommodation, to be certain that I can receive mail from the immigration services for the duration of the process. Please shoot me an email if you’ve got a lead! I’d rather meet someone through the FOSS community than dig through Craigslist^WMarktplaats from overseas.
-
-=> mailto:sir@cmpwn.com Email me
-
-I have felt a kind of dissonance with my home country of the United States for a long time now, and I have found it very difficult to resolve. I am not of one mind with my peers in this country on many issues; social, economic, and political. Even limiting this inquiry to matters related to FOSS, it’s quite clear that the FOSS community in Europe is much stronger than in America. In the United States, capitalism is the secular religion, and my values, in FOSS and otherwise, are incompatible with the American ethos.¹
-
-Leaving the US is a selfish choice. I could stay here to get involved in solving these problems, but I chose to leave for a place which has already made much more progress on them. Ultimately, this is the only life I’m gonna get, and I have decided not to spend it on politics. I’ll spare you from the rest of the details. I’ll also acknowledge that I’m very privileged to even have this choice at all. Because I know how difficult it is to leave, for reasons unique to each person’s own situation, I don’t hold anyone who stays behind accountable for their country’s cruelties.²
-
-So, why the Netherlands? I considered many options. For instance, I am fluent in Japanese, have an existing social network there, and understand their immigration laws and process. However, as much as I love to visit, I’m not on their cultural wavelength. Integration would pose a challenge. That said, I have also spent a lot of time in the EU, which is a hot spot for the FOSS ecosystem. Access to any EU country with a path to citizenship opens up access to the rest of the EU, making it a very strong choice with lots of second choices easily available.
-
-The Netherlands is an attractive place in these respects. It is relatively easy for me to obtain a visa there, for one, but it also ranks very highly in numerous respects: social, economic, political, and basic happiness. I have many friends in Europe and I won’t have to worry too much about establishing a new social network there.
-
-There are also some risks. Housing is expensive and only getting more so. Also, like the rest of the world, how NL will emerge from the crises of the pandemic remains to be seen, and many countries are likely to suffer from long-term consequences in all aspects of life. They are also already dealing with an influx of immigrants, and it’s quite possible that I will face some social and legal challenges in the future.
-
-Despite these and other risks, I am optimistic about this change. The path to citizenship takes only five years, and after many careful inquiries into the process, I believe my plans for getting there are on very solid footing. I have been studying Dutch throughout much of the pandemic, and I’m not having much trouble with it — I intend to achieve fluency. Integration is well within my grasp. I expect to look back on this transition with confidence in a decision well-made.
-
-Leuk jullie te ontmoeten, Nederland!
-
-Oh, and yes: I will be re-locating SourceHut, the incorporated entity, to the Netherlands, and gradually moving our infrastructure over the pond. Details regarding these plans will eventually appear on the ops wiki. Users can expect little to no disruption from the change.
-
-¹ Bonus: as I'm writing this I can literally hear gunfire a couple of neighborhoods away. There have been an average of 1½ gun-related homicides per day in Philadelphia since January 1st, 2021.
-
-² To be clear, I’m under no illusions that the Netherlands is some kind of utopian place. I am well versed in the social, economic, and political issues they face. I’m also aware of the declining state of democracy and political unity throughout the world, which affects the EU as well. But, by my reckoning, it’s a hell of a lot better than the US, and will remain so for the foreseeable future. At least I’ll be able to sleep at night if the world goes tits up.
diff --git a/content/blog/The-Netherlands.md b/content/blog/The-Netherlands.md
@@ -1,7 +1,6 @@
---
title: I will be moving to the Netherlands
date: 2021-06-07
-outputs: [html, gemtext]
---
I had been planning a move to the Netherlands for a while, at least until a
diff --git a/content/blog/The-next-YAML.gmi b/content/blog/The-next-YAML.gmi
@@ -1,49 +0,0 @@
-YAML is both universally used, and universally reviled. It has a lot of problems, but it also is so useful in solving specific tasks that it’s hard to replace. Some new kids on the block (such as TOML) have successfully taken over a portion of its market share, but it remains in force in places where those alternatives show their weaknesses.
-
-I think it’s clear to most that YAML is in dire need of replacement, which is why many have tried. But many have also failed. So what are the key features of YAML which demonstrate its strengths, and key weaknesses that could be improved upon?
-
-Let’s start with some things that YAML does well, which will have to be preserved.
-
-* Hierarchical relationships emphasized with whitespace. There is no better way of representing a hierarchical data structure than by actually organizing your information visually. Note that semantically meaningful whitespace is not actually required — the use of tokens like { is acceptable — so long as, by convention, hierarchies are visually apparent.
-* Defined independently of its implementation. There should not be a canonical implementation of the format (though a reference implementation is, perhaps, acceptable). It should not be defined as “a config library for $language”. Interoperability is key. It must have a specification.
-* Easily embeds documents written in other formats. This is the chief reason that YAML still dominates in CI configuration: the ability to trivially write scripts directly into the config file, without escaping anything or otherwise molesting the script.
-
-```
-tasks:
-- configure: |
-    jit_flags=""
-    if [ "$(uname -m)" != "x86_64" ]
-    then
-        jit_flags=--without-jit
-    fi
-    ./configure \
-        --prefix=/usr \
-        $jit_flags
-- build: |
-    make
-- test: |
-    make check
-```
-
-* Both machine- and human-editable. It’s very useful for both humans and machines to collaborate on a YAML file. For instance, humans write build manifests for their git.sr.ht repos, and then the project hub adds steps to download and apply patches from mailing lists before submitting them to the build driver. For the human’s part, the ability to easily embed scripts (see above) and write other config parameters conveniently is very helpful — everyone hates config.json.
-* Not a programming language. YAML entities are a problem, but we’ll talk about that separately. In general, YAML files are not programs. They’re just data. This is a good thing. If you want, you can use a separate pre-processor, like jsonnet.
-
-What needs to be improved upon?
-
-* A much simpler grammar. No more billion laughs, please. Besides this, 90% of YAML’s features go unused, which increases the complexity of implementations, not to mention their attack surface, for little reason.
-* A means of defining a schema, which can influence the interpretation of the input. YAML does this poorly.
-
-Consider the following YAML list:
-
-```
-items:
-- hello
-- 24
-- world
-```
-
-Two of these are strings, and one is a number. Representing numbers and strings plainly like this makes it easier for humans to write, though requiring humans to write their values in a format which provides an unambiguous type is not so inconvenient as to save this trait from the cutting room floor. Leaving the ambiguity in place, without any redress, is a major source of bugs in programs that consume YAML.
-
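-To be fair, YAML does offer a redress in the form of explicit tags, but the syntax is noisy enough that almost nobody uses it:
-
-```
-items:
-- hello
-- !!str 24 # a string, by explicit tag
-- 24 # an integer, per the default resolver
-- world
-```
-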
-* I don’t care about JSON interoperability. Being a superset of JSON is mildly useful, but not so much so as to compromise any other features or design. I’m prepared to yeet it at the first sign of code smells.
-
-Someday I may design something like this myself, but I’m really hoping that someone else does it instead. Good luck!
diff --git a/content/blog/The-next-YAML.md b/content/blog/The-next-YAML.md
@@ -1,7 +1,6 @@
---
title: My wish-list for the next YAML
date: 2021-07-28
-outputs: [html, gemtext]
---
[YAML](http://yaml.org) is both universally used, and universally reviled. It
diff --git a/content/blog/The-next-chat-app.gmi b/content/blog/The-next-chat-app.gmi
@@ -1,23 +0,0 @@
-As you’re surely aware, Signal has officially jumped the shark with the introduction of cryptocurrency to their chat app. Back in 2018, I wrote about my concerns with Signal, and those concerns were unfortunately validated by this week’s announcement. Moxie’s insistence on centralized ownership, governance, and servers for Signal puts him in a position of power which is easily, and inevitably, abused. In that 2018 article, and in articles since, I have spoken about the importance of federation to address these problems. In addition to federation, what else does a chat app need?
-
-Well, first, the next chat app should be a protocol, not just an app. A lush ecosystem of client and server implementations, along with bots and other integrations, adds a tremendous amount of value and longevity to a system. A chat app which has only one implementation and a private protocol can only ever meet the needs that its developers (1) foresee, (2) care about, and (3) have the capacity to address; thus, such a protocol cannot be ubiquitous. I would also recommend that this protocol is not needlessly stapled to the beached whale that is the web: maybe JSON can come, but if it’s served with HTTP polling to appease our Android overlords I will be very cross with you. JSON also offers convenient extensibility, and a protocol designer who limits extensibility is a wise one.
-
-Crucially, that protocol must be federated. This is Signal’s largest failure. We simply cannot trust a single entity, even you, dear reader, to have such a large degree of influence over the ecosystem. I do not trust you not to add some crypto Ponzi scheme of your own 5 years from now. A federated system allows multiple independent server operators to stand up their own servers, which can communicate with each other and exchange messages on behalf of their respective users; this distributes ownership, responsibility, and governance within the community at large, making the system less vulnerable to all kinds of issues. You need to be prepared to relinquish control to the community. Signal wasn’t, and has had problems ranging from 502 Server Gone errors to 404 Ethics Not Found errors, both of which are solved by federation.
-
-The next chat app also needs end-to-end encryption. This should be fairly obvious, but it’s worth re-iterating because this will occupy a majority of the design work that goes into the app. There are complex semantics involved in encrypting user-to-user chats, group chats (which could add or remove users at any time), perfect forward secrecy, or multiple devices under one account; many of these issues have implications for the user experience. This is complicated further by the concerns of a federated design, and if you want to support voice or video chat (please don’t), that’ll complicate things even more. You’ll spend the bulk of your time solving these problems. I would advise, however, that you let users dial down the privacy (after explaining to them the trade-offs) in exchange for convenience. For instance, to replace IRC you would need to support channels which anyone can join at any time and which might make chat logs available to the public.
-
-A new chat app also needs anonymity. None of this nonsense where users have to install your app and give you their phone number to register. In fact, you should know next to nothing about each user, given that the most secure data is the data you don’t have. This is made more difficult when you consider that you’ll also strive to provide an authentic identity for users to establish between themselves — but not with you. Users should also be able to establish a pseudonymous identity, or wear multiple identities. You need to provide (1) a strong guarantee of consistent identity from session to session, (2) without sharing that guarantee with your servers, and (3) the ability to change to a new identity at will. The full implications of anonymity are a complex issue which is out of scope for this article, but for now it suffices to say that you should at least refrain from asking for the user’s phone number.
-
-Finally, it needs to be robust, reliable, and performant. Focus on the basics: delivering messages quickly and reliably. The first thing you need to produce is a reliable messenger which works in a variety of situations, on a variety of platforms, in various network conditions, and so on, with the underlying concerns of federation, end-to-end encryption, protocol standardization, group and individual chats, multi-device support, and so on, in place and working. You can try to deliver this in a moderately attractive interface, but sinking a lot of time into fancy animations, stickers, GIF lookups, typing notifications and read receipts — all of this is a distraction until you get the real work done. You can have all of these things, but if you don’t have a reliable system underlying them, the result is worthless.
-
-I would also recommend leaving a lot of those features at the door, anyway. Typing notifications and read receipts are pretty toxic, if you examine them critically. A lot of chat apps have a problem with cargo-culting bad ideas from each other. Try to resist that. Anyway, you have a lot of work to do, so I’ll leave you to it. Let me know what you’re working on when you’ve got something to show for it.
-
-And don't put a fucking cryptocurrency in it.
-
-Regarding Matrix, IRC, etc:
-
-Let's quickly address the present state of the ecosystem. Matrix rates well in most of these respects, much better than others. However, their software is way too complicated. They are federated, but the software is far from reliable or robust, so the ecosystem tends to be centralized because Matrix.org are the only ones who have the knowledge and bandwidth to keep it up and running. The performance sucks, client and server both, and their UX for E2EE is confusing and difficult to use.
-
-It's a good attempt, but too complex and brittle. Also, their bridge is a major nuisance to IRC, which biases me against them. Please don't integrate your next chat app with IRC; just leave us alone, thanks.
-
-Speaking of IRC, it is still my main chat program, and has been for 15+ years. The lack of E2EE, which would be unacceptable in any new protocol, is not important enough to get me to switch away from IRC until something presents a compelling alternative.
diff --git a/content/blog/The-next-chat-app.md b/content/blog/The-next-chat-app.md
@@ -1,7 +1,6 @@
---
title: What should the next chat app look like?
date: 2021-04-07
-outputs: [html, gemtext]
---
As you're surely aware, Signal has officially jumped the shark with the
diff --git a/content/blog/The-worlds-dumbest-IRC-bot.gmi b/content/blog/The-worlds-dumbest-IRC-bot.gmi
@@ -1,97 +0,0 @@
-I’m an IRC power user, having been hanging out in 200+ channels on 10+ networks 24/7 for the past 10 years or so. Because IRC is standardized and simple, a common pastime for IRC enthusiasts is the creation of bots. In one of the social channels I hang out in, we’ve spent the past 6 years gradually building the world’s stupidest IRC bot: wormy.
-
-For a start, wormy is highly schizophrenic. Though it presents itself as a single bot, it is in fact a bouncer which combines the connections of 7 independent bots. At one point, this number was higher — as many as 11 — but some bots were consolidated.
-
-```
-<@sircmpwn> .bots
-<wormy> Serving text/html since 2017, yours truly ["ps"] For a list of commands, try `.help`
-<wormy> minus' parcel tracking bot r10.b563abc (built on 2020-06-06T12:02:13Z, https://git.sr.ht/~minus/parcel-tracking-bot)
-<wormy> minus' dice bot r16.498a0b8 (built on 2020-02-04T20:16:14Z, https://git.sr.ht/~minus/dice-irc-bot)
-<wormy> Featuring arbitrary code execution by design and buffer overflows by mistake, jsbot checking in
-<wormy> Radiobot coming to you live from The Internet, taking listener requests at 1-800-GUD-SONGS
-<wormy> urlbot: live streaming moe directly to your eyeballs
-<wormy> o/ SirCmpwn made me so he wouldn't forget shit so much
-```
-
-These bots provide a variety of features for channel members, such as checking tracking numbers for parcels out for delivery, requesting songs for our private internet radio, reading out the mimetypes and titles of URLs mentioned in the channel, or feeding queries into Wolfram Alpha.
-
-```
-<wormy> Now playing: 8369492 小さき者への贖罪の為のソナタ by ALI PROJECT from 禁書 (4m42s FLAC)
-<wormy> Now playing: 1045361 アキノサクラ by Wakana from magic moment (5m0s FLAC) #live ♥ minus
-<wormy> Now playing: d0b1cb3 Forevermore by F from Cafe de Touhou 3 (4m9s FLAC) ♥ hummer12007
-<wormy> Now playing: 0911e90 Moeru San Shimai by Iwasaki Taku from Tengen Toppa Gurren Lagann Original Soundtrack - CD01 (3m3s FLAC)
-<wormy> Now playing: ac1a17e rebellion anthem by Yousei teikoku from rebellion anthem (5m15s MP3) ♥ minus
-<wormy> Now playing: a5ab39a Desirable Dream by GET IN THE RING from Aki-秋- (4m38s FLAC) ♥ minus
-```
-
-Things really took off with the introduction of a truly stupid bot last year: jsbot. This bot adds a .js command which executes arbitrary JavaScript expressions (using Fabrice Bellard’s quickjs) and sends their stringified result to the channel.
-
-```
-<@sircmpwn> .js Array(16).join("wat" - 1) + " Batman!"
-<wormy> => NaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaN Batman!
-```
-
-We soon realized, however, that what we had effectively created was a persistent JavaScript environment which was connected to IRC. This has made it possible to write even more IRC bots in the least practical manner imaginable: by writing JavaScript statements, one line at a time, into IRC messages, and hoping it works.
-
-This has not been an entirely smart move.
-
-One “feature”, inspired by Bryan Cantrill, records every time the word “fuck” is used in the channel. Then, whenever anyone says “wtf”, the bot helpfully offers up an example of the usage of the word “fuck” by printing one of the recorded messages. Here’s how it was made:
-
-```
-<sircmpwn> .js let wtf = [];
-<wormy> => undefined
-<sircmpwn> .js on(/fuck/, msg => wtf.push(msg.text))
-<wormy> => 25
-<sircmpwn> .js on(/^what the fuck$/, msg => msg.reply(wtf[Math.floor(Math.random() * wtf.length)]))
-<wormy> => 26
-```
-
-Here’s one which records whenever someone says “foo++” or “foo--” and keeps track of scores:
-
-```
-.js let scores = {};
-.js on(/^([a-zA-Z0-9_]+)(\+\+|--)$/, (msg, thing, op) => { if (typeof scores[thing] === "undefined") scores[thing] = 0; scores[thing] += op === "++" ? 1 : -1; msg.reply(`${thing}: ${scores[thing]}`) });
-.js on(/\.score (.*)/, (msg, item) => msg.reply(scores[item]));
-.js let worst = () => Object.entries(scores).sort((a, b) => a[1] - b[1]).slice(0, 5).map(s => `${s[0]}: ${s[1]}`).join(", ");
-.js let best = () => Object.entries(scores).sort((a, b) => b[1] - a[1]).slice(0, 5).map(s => `${s[0]}: ${s[1]}`).join(", ");
-.js on(/^.worst$/, msg => msg.reply(worst()));
-.js on(/^.best$/, msg => msg.reply(best()));
-```
-
-Other “features” written in horrible one-liners include SI unit conversions, rewriting undesirable URLs (e.g. m.wikipedia.org => en.wikipedia.org), answering “wormy you piece of shit” with “¯\_(ツ)_/¯”, and giving the obvious response to “make me a sandwich”.
-
-Eventually it occurred to us that we had two dozen stupid IRC bots storing not only their state, but their code, in a single long-lived process on some server. For a while, the answer to this was adding “don’t reboot this server kthx” to the MotD, but eventually we did some magic nonsense to make certain variables persistent:
-
-```
-let persistent = {};
-function writePersistent() {
- let fd = std.open("persist.json", "w");
- fd.puts(JSON.stringify(persistent));
- fd.close();
-}
-
-let persist_handler = {
-	set: (obj, prop, val) => {
-		obj[prop] = val;
-		writePersistent();
-		// set traps must return true, or strict-mode assignments throw
-		return true;
-	},
-};
-
-let p = std.loadFile("persist.json");
-if (p !== null) {
- persistent = JSON.parse(p);
- Object.keys(persistent).map(key => {
- let proxy = new Proxy(persistent[key], persist_handler);
- persistent[key] = proxy;
- exports[key] = proxy;
- });
-}
-
-exports.persist = (name, obj) => {
- let proxy = new Proxy(obj, persist_handler);
- persistent[name] = proxy;
- writePersistent();
- return proxy;
-};
-```
-
-Anyway, there’s no moral to this story. We just have a silly IRC bot and I thought I’d share that with you. If you want a stupid IRC bot for your own channel, jsbot is available on sourcehut. I highly disrecommend it and disavow any responsibility for the consequences.
diff --git a/content/blog/The-worlds-dumbest-IRC-bot.md b/content/blog/The-worlds-dumbest-IRC-bot.md
@@ -1,7 +1,6 @@
---
title: The world's stupidest IRC bot
date: 2021-03-29
-outputs: [html, gemtext]
---
I'm an [IRC](https://en.wikipedia.org/wiki/Internet_Relay_Chat) power user,
diff --git a/content/blog/To-make-money-in-FOSS-build-a-business.gmi b/content/blog/To-make-money-in-FOSS-build-a-business.gmi
@@ -1,13 +0,0 @@
-I’ve written about making money in free and open source software before, but it’s a deep topic that merits additional discussion. Previously I focused on what an individual can do in order to build a career in FOSS, but today I want to talk about how you can build a sustainable business in FOSS.
-
-=> gemini://drewdevault.com/2020/11/20/A-few-ways-to-make-money-in-FOSS.gmi Previously: A few ways to make money in FOSS
-
-It’s a common mistake to do this the wrong way around: build the software, then the business. Because FOSS requires you to surrender your sole monetization rights, building the software first and worrying about the money later puts you at a huge risk of losing your first-mover advantage. If you’re just making a project which is useful to you and you don’t want the overhead of running a business, then that may be totally okay — you can just build the software without sweating the business issues. If you choose this path, however, be aware that the promise of free and open source software entitles anyone else to build that business without you. If you lapse in your business-building efforts and your software project starts making someone else money, then they’re not at fault for “taking your work” — you gave it to them.
-
-=> gemini://drewdevault.com/2021/01/20/FOSS-is-to-surrender-your-monopoly.gmi Previously: Open source means surrendering your monopoly over commercial exploitation
-
-I’ve often said that you can make money in FOSS, but not usually by accident. Don’t just build your project and wait for the big bucks to start rolling in. You need to take the business-building seriously from the start. What is the organization of your company? Who will you work with? What kind of clients or customers will you court? Do you know how to reach them? How much are they willing to pay? What will you sell? Do you have a budget? If you want to make money from your project, sit down and answer these questions seriously.
-
-Different kinds of software projects make money in different ways. Some projects with enterprise-oriented software may be able to sell support contracts. Some can sell consulting for integration and feature development. Maybe you can write books about your software, or teach courses on it. Perhaps your software, like the kind my company builds, is well-suited to being sold as a service. Some projects simply solicit donations, but this is the most difficult approach.
-
-Whatever you choose to do, you need to choose it deliberately. You need to incorporate your business, hire an accountant, and do a lot of boring stuff which has nothing to do with the software you want to write. And if you skip this step, someone else is entitled to do all of this boring work, then stick your software on top of it and make a killing without you.
diff --git a/content/blog/To-make-money-in-FOSS-build-a-business.md b/content/blog/To-make-money-in-FOSS-build-a-business.md
@@ -1,7 +1,6 @@
---
title: To make money in FOSS, build a business first
date: 2021-03-03
-outputs: [html, gemtext]
---
I've [written about][0] making money in free and open source software before,
diff --git a/content/blog/Use-me-as-a-resource.gmi b/content/blog/Use-me-as-a-resource.gmi
@@ -1,14 +0,0 @@
-I write a lot of blog posts about my ideas, some of which are even good ideas. Some of these ideas stick, and many readers have attempted to put them into practice, taking on challenges like starting a business in FOSS or stepping up to be leaders in their communities. It makes me proud to see the difference you’re making, and I’m honored to have inspired many of you.
-
-I’m sitting here on my soapbox shouting into the void, but I also want to work with you one-on-one. Here are some things people have reached out to me for:
-
-* Pitching their project/business ideas for feedback
-* Cc’ing me in mailing list discussions, GitHub/GitLab threads, etc, for input
-* Clarifying finer points in my blog posts
-* Asking for feedback on drafts of their own blog posts
-* Offering philosophical arguments about FOSS
-* Asking for advice on dealing with a problem in their community
-
-I have my own confidants that I rely on for these same problems. None of us goes it alone, and for this great FOSS experiment to succeed, we need to rely on each other.
-
-I want to be personally available to you. My email address is sir@cmpwn.com. I read every email I receive, and try to respond to most of them, though it can sometimes take a while. Please consider me a resource for your work in FOSS. I hope I can help!
diff --git a/content/blog/Use-me-as-a-resource.md b/content/blog/Use-me-as-a-resource.md
@@ -1,7 +1,6 @@
---
title: Please use me as a resource
date: 2021-12-25
-outputs: [html, gemtext]
---
I write a lot of blog posts about my ideas,[^1] some of which are even good
diff --git a/content/blog/Use-open-platforms-or-else.gmi b/content/blog/Use-open-platforms-or-else.gmi
@@ -1,13 +0,0 @@
-The ongoing events around /r/wallstreetbets teach us, once again, about the value of open platforms, and the tremendous risk involved in using proprietary platforms. The economic elites who control those proprietary platforms, backed by their venture capital interests, will shut us down if we threaten them. We’re taking serious risk by casting our lot with them.
-
-Discord, a proprietary instant messaging and VoIP platform, kicked out the /r/WSB community yesterday. They claimed it was due to spam and abuse from bots. These are convenient excuses when considered in the broader context of Discord’s conflict of interest, between its retail investor users and its wall-street investor backers. However, even if we take their explanation at face value, we can easily question Discord’s draconian policies about its proprietary chat protocol. They have a history of cracking down on third-party bots and clients with the same excuses of preventing spam and abuse. If Discord accepts responsibility for preventing spam and abuse, then why are they deplatforming users when they, Discord, failed to prevent it?
-
-It’s all a lie. They use a proprietary protocol and crack down on third-party implementations because they demand total control over their users. They deplatformed /r/WSB because they were financially threatened by them. Discord acts in their own interests, including when they are against the interests of their users. In the words of Rohan Kumar, they’re trying to domesticate their users. It’s the same with every corporate-operated platform. Betting that Reddit will ultimately shut down /r/WSB is probably a stronger bet than buying GME!
-
-=> gemini://seirdy.one/2021/01/27/whatsapp-and-the-domestication-of-users.gmi Rohan Kumar: WhatsApp and the domestication of users
-
-But there is another way: free and open platforms, protocols, and standards. Instead of Discord, I could recommend Matrix, IRC, or Mumble. These are not based on central corporate ownership, but instead on publicly available standards that anyone can build on top of. The ownership of these platforms is distributed between its users, and thus aligned with their incentives.
-
-Federation is also a really compelling solution. Unlike Discord and Reddit, which are centrally owned and operated, federated software calls for many independent server operators to run instances which are responsible for tens or hundreds of users each. Each of these servers then uses standardized protocols to communicate with the others, forming one cohesive, distributed social network. Matrix and IRC are both federated protocols, for example. Others include Mastodon, which is similar to Twitter in function; PeerTube, for hosting videos and live streams; and Lemmy, which is a federated equivalent of Reddit.
-
-These are the alternatives. These platforms lack that crucial conflict of interest which is getting us kicked off of the corporate-owned platforms. These are the facts: open platforms are the only ones that align with the interests of their users, and closed platforms exploit their users. Once you recognize this, you should jump ship before you’re deplatformed, or else you’re risking your ability to organize yourselves to move to another platform. Use open platforms — or else. Do it today.
diff --git a/content/blog/Use-open-platforms-or-else.md b/content/blog/Use-open-platforms-or-else.md
@@ -1,7 +1,6 @@
---
title: Use open platforms — or else
date: 2021-01-28
-outputs: [html, gemtext]
---
The ongoing events around [/r/wallstreetbets][0] teach us, once again, about
diff --git a/content/blog/Utility-vs-usability.gmi b/content/blog/Utility-vs-usability.gmi
@@ -1,17 +0,0 @@
-In many fields, professional-grade tooling requires a high degree of knowledge and training to use properly, usually more than is available to the amateur. The typical mechanic's tool chest makes my (rather well-stocked, in my opinion) tool bag look quite silly. A racecar driver is using a vehicle which is much more complex than, say, the soccer mom's mini-van. Professional-grade tools are, necessarily, more complex and require skill to use.
-
-There are two attributes to consider when classifying these tools: utility and usability. These are not the same thing. Some tools have both high utility and high usability, such as a pencil. Some are highly usable, but of low utility, such as a child's tricycle. Tools of both low-utility and low-usability are uncommon, but I'm sure you can think of a few examples from your own experiences :)
-
-When designing tools, it is important to consider both of these attributes, and it helps to keep the intended audience in mind. I think that many programmers today are overly concerned with usability, and insufficiently concerned with utility. Some programmers (although this sort prefers "developer") go so far as to fetishize usability at the expense of utility.
-
-In some cases, sacrificing utility in favor of usability is an acceptable trade-off. In the earlier example's case, it's unlikely that anyone would argue that the soccer mom should be loading the tots into an F1 racecar. However, it's equally absurd to suppose that the F1 driver should bring a mini-van to the race track. In the realm of programming, this metaphor speaks most strongly to me in the design of programming tools.
-
-I argue that most programmers are professionals who are going to invest several years into learning the craft. This is the audience for whom I design my tools. What trouble is it to spend an extra hour learning a somewhat less intuitive code review tool when the programming language whose code you're reviewing required months to learn and years to master?
-
-=> https://xkcd.com/1205/ xkcd: Is It Worth the Time?
-
-I write tools to maximize the productivity of professional programmers. Ideally, we can achieve both usability and utility, and often we do just that. But, sometimes, these tools require a steeper learning curve. If they are more useful in spite of that, they will usually save heaps of time in the long run.
-
-Instead of focusing on dumbing down our tools, maximizing usability at the expense of utility, we should focus on making powerful tools and fostering a culture of mentorship. Senior engineers should be helping their juniors learn and grow to embrace and build a new generation of more and more productive tooling, considering usability all the while but never at the expense of utility.
-
-I'll address mentorship in more detail in future posts. For now, I'll just state that mentorship is the praxis of my tooling philosophy. We can build better, more powerful, and more productive tools, even if they require a steeper learning curve, so long as we're prepared to teach people how to use them, and they're prepared to learn.
diff --git a/content/blog/Utility-vs-usability.md b/content/blog/Utility-vs-usability.md
@@ -1,7 +1,6 @@
---
title: Utility vs usability
date: 2020-11-06
-outputs: [html, gemtext]
---
In many fields, professional-grade tooling requires a high degree of knowledge
diff --git a/content/blog/What-is-Gemini-anyway.gmi b/content/blog/What-is-Gemini-anyway.gmi
@@ -1,69 +0,0 @@
-I've been writing about some specific topics in the realm of Gemini on my blog over the past two months or so, but I still haven't written a broader introduction to Gemini, what I'm doing with it, and why you should be excited about it, too. Let's do that today!
-
-Gemini is a network protocol for exchanging hypertext documents — “hypertext” in the general sense of the word, not with respect to the hypertext markup language (HTML) that web browsers understand. It’s a simple network protocol which allows clients to request hypertext documents (in its own document format, gemtext). It is, in some respects, an evolution of Gopher, but more modernized and streamlined.
-
-=> https://en.wikipedia.org/wiki/Gopher_(protocol) Gopher
-
-Gemini is very simple. The protocol uses TLS to establish an encrypted connection (using self-signed certificates and TOFU rather than certificate authorities), and performs a very simple exchange: the client sends the URL it wants to retrieve, terminated with CRLF. The server responds with an informative line, consisting of a numeric status code and some additional information (such as the document's mimetype), then writes the document and closes the connection. Authentication, if desired, is done with client certificates. User input, if desired, is done with a response code which conveys a prompt string and a request for user input, followed by a second request with the user's response filled into the URL's query string. And that's pretty much it!
-
-```
-$ openssl s_client -quiet -crlf \
- -servername drewdevault.com \
- -connect drewdevault.com:1965 \
- | awk '{ print "<= " $0 }'
-gemini://drewdevault.com
-<= 20 text/gemini
-<= ```ASCII art of a rocket next to "Drew DeVault" in a stylized font
-<= /\
-<= || ________ ________ ____ ____ .__ __
-<= || \______ \_______ ______ _ __ \______ \ ___\ \ / /____ __ __| |_/ |_
-<= /||\ | | \_ __ \_/ __ \ \/ \/ / | | \_/ __ \ Y /\__ \ | | \ |\ __\
-<= /:||:\ | ` \ | \/\ ___/\ / | ` \ ___/\ / / __ \| | / |_| |
-<= |:||:| /_______ /__| \___ >\/\_/ /_______ /\___ >\___/ (____ /____/|____/__|
-<= |/||\| \/ \/ \/ \/ \/
-<= **
-<= **
-<= ```
-[...]
-```
-
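-To make this concrete, here is a minimal sketch of the same request in C with OpenSSL's BIO API. It is in the spirit of gmni, but not taken from it; certificate verification (TOFU pinning) and most error handling are omitted:
-
-```
-#include <openssl/ssl.h>
-#include <stdio.h>
-
-int main(void) {
-	SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
-	BIO *bio = BIO_new_ssl_connect(ctx);
-	SSL *ssl = NULL;
-	BIO_get_ssl(bio, &ssl);
-	SSL_set_tlsext_host_name(ssl, "drewdevault.com");
-	BIO_set_conn_hostname(bio, "drewdevault.com:1965");
-	if (BIO_do_connect(bio) <= 0)
-		return 1;
-
-	/* The entire request: the URL, terminated with CRLF */
-	BIO_puts(bio, "gemini://drewdevault.com\r\n");
-
-	/* The response: one status line, then the document, then EOF */
-	char buf[4096];
-	int n;
-	while ((n = BIO_read(bio, buf, sizeof(buf))) > 0)
-		fwrite(buf, 1, n, stdout);
-
-	BIO_free_all(bio);
-	SSL_CTX_free(ctx);
-	return 0;
-}
-```
-
-Build with -lssl -lcrypto; a real client would also verify or pin the server's certificate on first use.
-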
-So why am I excited about it?
-
-My disdain for web browsers is well documented¹. Web browsers are extraordinarily complex, and any attempt to build a new one would be a Sisyphean task. Successfully completing that implementation, if even possible, would necessarily produce a Lovecraftian mess: unmaintainable, full of security vulnerabilities, with gigabytes in RAM use and hours in compile times. And given that all of the contemporary web browsers that implement a sufficiently useful subset of web standards are ass and getting assier, what should we do?
-
-The problem is unsolvable. We cannot have the “web” without all of these problems. But what we can have is something different, like Gemini. Gemini does not solve all of the web’s problems, but it addresses a subset of its use-cases better than the web does, and that excites me. I want to discard the parts of the web that Gemini does better, and explore other solutions for anything that’s left of the web which is worth keeping (hint: much of it is not).
-
-There are some aspects of Gemini which I approve of immensely:
-
-* It's dead simple. A client or server implementation can be written from scratch by a single person in the space of an afternoon or two. A new web browser could take hundreds of engineers millions of hours to complete.
-* It's not extensible. Gemini is designed to be difficult to extend without breaking backwards compatibility, and almost all proposals for expansion on the mailing list are ultimately shot down. This is a good thing: extensibility is generally a bad idea. Extensions ultimately lead to more complexity, and Gemini might have suffered the same fate as the web were it not for its disdain for extensions.
-* It's opinionated about document formatting. There are no inline links (every link goes on its own line), no formatting, and no inline images. Gemini strictly separates the responsibility of content and presentation. Providing the content is the exclusive role of the server, and providing the presentation is the exclusive role of the client. There are no stylesheets and authors have very little say in how their content is presented. It's still possible for authors to express themselves within these constraints — as with any other constraints — but it allows clients to be simpler and act more as user agents than vendor agents.
-
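-To illustrate, the whole line-oriented grammar looks roughly like this (a hypothetical sample):
-
-```
-# A heading
-A paragraph of plain text, presented however the client sees fit.
-=> gemini://example.org/page.gmi A link, on a line of its own
-* A list item
-> A quote
-```
-
-Preformatted blocks are toggled with ``` lines, and that is essentially the entire format.
-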
-Some people argue that what we should have is “the web, but less of it”, i.e. a “sane” subset of web standards. I don’t agree (for one, I don’t think there is a “sane” subset of those standards), but I’ll save that for another blog post. Gemini is a new medium, and it’s different from the web. Anyone checking it out should be prepared for that and open to working within its constraints. Limitations breed creativity!
-
-For my part, I have been working on a number of Gemini projects. For one, this blog is now available on Gemini, and I have started writing some Gemini-exclusive content for it. I've also written some software you're welcome to use:
-
-=> /gmni.gmi libgmni, gmni, and gmnlm
-
-libgmni, gmni, and gmnlm are my suite of Gemini client software, all written in C11 and only depending on a POSIX-like system and OpenSSL. libgmni is a general-purpose Gemini client library with a simple interface. gmni is a cURL-like command line tool for performing Gemini requests. Finally, gmnlm is a line-mode browser with a rich feature-set. Together these tools weigh just under 4,000 lines of code, of which about 1,600 are the URL parser from cURL vendored in.
-
-=> /gmnisrv.gmi gmnisrv
-
-gmnisrv is a high-performance Gemini server, also written in C11 for POSIX systems with OpenSSL. It supports zero-configuration TLS, CGI scripting, auto-indexing, regex routing and URL rewrites, and I have a couple more things planned for 1.0. It clocks in at about 6,700 lines, of which the same 1,600 are vendored from cURL, and an additional 2,800 lines are vendored from Fabrice Bellard's quickjs² regex implementation.
-
-=> /kineto.gmi kineto
-
-kineto is an HTTP-to-Gemini gateway, implemented as a single Go file (under 500 lines) with the assistance of ~adnano's go-gemini³ library. My Gemini blog is available through this portal⁴ if you would like to browse it.
-
-So dive in and explore! Install gmnisrv on your server and set up a Gemini space for yourself. Read the feeds from CAPCOM. Write some software of your own!
-
-=> gemini://gemini.circumlunar.space/capcom/ CAPCOM
-
-### References
-
-=> https://drewdevault.com/2020/08/13/Web-browsers-need-to-stop.html ¹ Exhibit A
-=> https://drewdevault.com/2020/03/18/Reckless-limitless-scope.html ¹ Exhibit B
-=> https://cmpwn.com/@sir/104894723861368333 ¹ Exhibit C
-=> https://bellard.org/quickjs/ ² quickjs
-=> https://sr.ht/~adnano/go-gemini/ ³ go-gemini
-=> https://portal.drewdevault.com ⁴ portal.drewdevault.com
diff --git a/content/blog/What-is-Gemini-anyway.md b/content/blog/What-is-Gemini-anyway.md
@@ -1,7 +1,6 @@
---
title: What is this Gemini thing anyway, and why am I excited about it?
date: 2020-11-01
-outputs: [html, gemtext]
---
I've been writing about some specific topics in the realm of Gemini on my blog
diff --git a/content/blog/Why-am-I-working-in-private.gmi b/content/blog/Why-am-I-working-in-private.gmi
@@ -1,30 +0,0 @@
-As many readers are aware, I have been working on designing and implementing a systems programming language. This weekend, I’ve been writing a PNG file decoder in it, and over the past week, I have been working on a simple kernel with it as well. I’m very pleased with our progress so far — I recently remarked that this language feels like the language I always wanted, and that’s mission accomplished by any definition I care to consider.
-
-I started the project on December 27th, 2019, just over two years ago, and I have kept it in a semi-private state since. Though I have not given its name in public, the git repos, mailing lists, and bug trackers use sourcehut’s “unlisted” state, so anyone who knows the URL can see them. The website is also public, though its domain name is also undisclosed, and it is full of documentation, tutorials, and resources for developers. People can find the language if they want to, though at this stage the community only welcomes contributors, not users or onlookers. News of the project nominally spreads by word of mouth and with calls-to-action on this blog, and to date a total of 30 people have worked on it over the course of 3,029 commits. It is a major, large-scale project, secret though it may be.
-
-And, though we’ve invested a ton of work into this project together, it remains as yet unfinished. There is no major software written in our language, though several efforts are underway. Several of our key goals have yet to be merged upstream, such as date/time support, TLS, and regular expressions, though, again, these efforts are well underway. Until we have major useful projects written in our language, we cannot be confident in our design, and efforts in these respects do a great deal to inform us regarding any changes which might be necessary. And some changes are already in the pipeline: we have plans to make several major revisions to the language and standard library design, which are certain to require changes in downstream software.
-
-When our community is small and private, these changes are fairly easy to reckon with. Almost everyone who is developing a project based on our language is also someone who has worked on the compiler or standard library. Often, the person who implements a breaking change will also send patches to various downstreams updating them to be compatible with this change, for every extant software project written in the language. This is a task which can be undertaken by one person. We all understand the need for these changes, participate in the discussions and review the implementations, and have the expertise necessary to make the appropriate changes to our projects.
-
-Moreover, all of these people are also understanding of the in-development nature of the project. All users of our language are equipped with the knowledge that they are expected to help fix the bugs they identify, and with the skills and expertise necessary to follow up on this fact. We don’t have to think about users who stumble upon the project, spend a few hours trying to use it, then encounter an under-developed part of the language and run out of enthusiasm. We still lack DWARF support, so debugging is a chore. Sometimes the compiler segfaults or aborts without printing a useful error message. It’s a work-in-progress, after all. These kinds of problems can discourage new learners very fast, and often require the developers to offer some of their precious bandwidth to provide expert assistance. With the semi-private model, there are, at any given time, a very small number of people involved who are new to the language and require more hands-on support to help them through their problems.
-
-A new programming language is a major undertaking. We’re building one with an explicit emphasis on simplicity and we’re still not done after two years. When most people hear about the project for the first time, I don’t want them to find a half-completed language which they will fail to apply to their problem because it’s not fleshed out for their use-case. The initial release will have comprehensive documentation, a detailed specification, and stability guarantees, so it can be picked up and used in production by curious users on day one. I want to fast-forward to the phase where people study it to learn how to apply it to their problems, rather than to learn if they can apply it to their problems.
-
-Even though it is under development in private, this project is both “free software” and “open source”, according to my strict understanding of those terms as defined by the FSF and OSI. “Open source” does not mean that the project has a public face. The compiler is GPL 3.0 licensed, the standard library is MPL 2.0, and the specification is CC-BY-ND (the latter is notably less free, albeit for good reasons); these details are what matter. Every person who has worked on the project, and every person who stumbles upon it, possesses the right to lift the veil of secrecy and share it with the world. The reason they don’t is that I asked them not to, and we maintain a mutual understanding regarding the need for privacy.
-
-On a few occasions, someone has discovered the project and taken it upon themselves to share it in public places, including Hacker News, Lemmy, and 4chan. While this is well within your rights, I ask you to respect our wishes and allow us to develop this project in peace. I know that many readers are excited to try it out, but please give us some time and space to ensure that you are presented with a robust product. At the moment, we anticipate going public early next year. Thank you for your patience.
-
-Thank you for taking the time to read my thoughts as well. I welcome your thoughts and opinions on the subject: my inbox is always open. If you disagree, I would appreciate it if you reached out to me to discuss it before posting about the project online. And, if you want to get involved, here is a list of things we could use help with — email me to volunteer if you have both the time and expertise necessary:
-
-* Cryptography
-* Ports for new architectures or operating systems
-* Image & pixel formats/conversions
-* SQL database adapters
-* Signal handling
-* JSON parsing & encoding
-* Compression and decompression
-* Archive formats
-
-=> mailto:sir@cmpwn.com Send me an email to volunteer or open a discussion
-
-If you definitely don't want to wait for the language to go public, volunteering in one of our focus areas is the best way to get involved. Get in touch! If not, then the release will come around sooner than you think. We're depending on your patience and trust.
diff --git a/content/blog/Why-am-I-working-in-private.md b/content/blog/Why-am-I-working-in-private.md
@@ -1,7 +1,6 @@
---
title: Why am I building a programming language in private?
date: 2022-03-13
-outputs: [html, gemtext]
---
As many readers are aware, I have been working on designing and implementing a
diff --git a/content/blog/awk-is-the-coolest-tool-you-dont-know.gmi b/content/blog/awk-is-the-coolest-tool-you-dont-know.gmi
@@ -1,166 +0,0 @@
-awk, named for its authors Aho, Weinberger, and Kernighan, is a very cool little tool that you know exists and is installed on your system, but you have never bothered to learn how to use. I’m here to tell you that you really ought to!
-
-If I stop for a moment to ponder the question, “what is the coolest tool in Unix?”, the immediate answer is awk. If I insist on pondering it for longer, giving each tool a moment for fair evaluation, the answer is still awk.¹ There are few tools as perfectly suited to their problem as awk is.
-
-I’m not going to tell you what awk is, because there are already plenty of other resources for that. If you are totally unfamiliar with awk, then here’s a very brief summary:
-
-> awk reads a plaintext file as a list of newline-separated records of whitespace-separated columns. It then matches each line to a set of rules (defined by regular expressions) and then performs the actions listed by each matching rule (such as summing or averaging a column, reformatting the output, or doing any other number of things).
->
-> In short, awk is a domain-specific language which reads the kind of files you probably have a lot of already, then mutates them or computes something interesting from them.
-
-=> https://pubs.opengroup.org/onlinepubs/9699919799/utilities/awk.html "awk" in the POSIX specification
-
-Instead of teaching you how to use it, I’m just going to tell you about some things I’ve used awk for, in an effort to convince you that it’s cool.
-
-Please hold while I hastily run history | grep awk. Wait, no, it would be more ironic to run history | awk '/awk/ { print $0 }' instead. One sec…
-
-## Rewriting references in comments
-
-I had some code like this:
-
-```
-// Duplicates a [dirent] object. Call [dirent_free] to get rid of it later.
-export fn dirent_dup(e: *dirent) dirent = { // ...
-```
-
-It needed to become this:
-
-```
-// Duplicates a [[dirent]] object. Call [[dirent_free]] to get rid of it later.
-export fn dirent_dup(e: *dirent) dirent = { // ...
-```
-
-So the following awk command looks for lines which are a comment, then globally substitutes a regex for another string:
-
-```
-awk '/^\/\/.*/ { gsub(/\[[a-zA-Z:]+\]/, "[&]") } { print $0 }' < input
-```
-
-In this manner I used it as a kind of modified sed which only operates on certain lines.
-
-## Extracting data from one script to use in another script
-
-I have a script which includes something like this:
-
-```
-modules="ascii
-bufio
-bytes
-compress_flate
-compress_zlib
-crypto_blake2b
-crypto_math
-crypto_random
-crypto_md5
-crypto_sha1
-crypto_sha256
-crypto_sha512
-# ...
-uuid"
-```
-
-I wanted to extract this list of names from the script, and replace _ with :: for all of them. awk to the rescue!
-
-```
-# Yes, I am entirely aware that this is a hack
-modules=$(awk '
-/^modules="/ { sub(/.*=\"/, ""); gsub(/_/, "::"); print $1; mods = 1 }
-/^[a-z][a-z0-9_]+$/ { if (mods == 1) { gsub(/_/, "::"); print $1 } }
-/^[a-z][a-z0-9_]+"$/ { if (mods == 1) { mods = 0; sub(/"/, ""); gsub(/_/, "::"); print $1 } }
-' < scripts/gen-stdlib)
-```
-
-## Adding syntax highlighting to patches
-
-My email client, aerc, lets you pipe emails into an arbitrary command to format them nicely for displaying in your terminal. One kind of email I get often is a patch, with a diff dropped directly into the email. I wrote this awk script to add ANSI colors to such an email:
-
-```
-BEGIN {
- bright = "\x1B[1m"
- red = "\x1B[31m"
- green = "\x1B[32m"
- cyan = "\x1B[36m"
- reset = "\x1B[0m"
-
- hit_diff = 0
-}
-{
- if (hit_diff == 0) {
- # Strip carriage returns from line
- gsub(/\r/, "", $0)
-
- if ($0 ~ /^diff /) {
- hit_diff = 1;
- print bright $0 reset
- } else if ($0 ~ /^.*\|.*(\+|-)/) {
-			left = substr($0, 1, index($0, "|")-1)
- right = substr($0, index($0, "|"))
- gsub(/-+/, red "&" reset, right)
- gsub(/\++/, green "&" reset, right)
- print left right
- } else {
- print $0
- }
- } else {
- # Strip carriage returns from line
- gsub(/\r/, "", $0)
-
- if ($0 ~ /^-/) {
- print red $0 reset
- } else if ($0 ~ /^\+/) {
- print green $0 reset
- } else if ($0 ~ /^ /) {
- print $0
- } else if ($0 ~ /^@@ (-[0-9]+,[0-9]+ \+[0-9]+,[0-9]+) @@.*/) {
- sub(/^@@ (-[0-9]+,[0-9]+ \+[0-9]+,[0-9]+) @@/, cyan "&" reset)
- print $0
- } else {
- print bright $0 reset
- }
- }
-}
-```
-
-## Pulling a specific column out of another command
-
-This is the most basic use of awk. git ls-tree returns something like this:
-
-```
-100644 blob aa61a6c84fa215178b560e2bddcdcb18bf62ccc7 .build.yml
-100644 blob 73ab8769f93bbbd5c4b69d33c2fa86329d05bc85 .gitignore
-100644 blob 65d4d3ae9206f664e72c49ffed1489414852e637 LICENSE
-100644 blob 6fd05a7d17471026df258d9931309a19ac286c5f README.md
-040000 tree dfdc471efbd87a131fd7fe41706debdb48411ebe assets
-100644 blob ede1a81e5b44031d95e315985ed7e7831067d609 config.toml
-040000 tree a8ab3f0c2db376725d480c673e289d654d289acc content
-040000 tree 7b1a4966c4143bcea991e9f77620cb4fda887d66 layouts
-040000 tree e4dfc4f7500e111652aa7880002252b47239a2d0 static
-100644 blob f31711885de4cd43571ee633b553016b766d3ec1 webring-in.template
-```
-
-Recently I was looking for comments on the first line of any file in my git repository, so:
-
-```
-git ls-tree -r HEAD | awk '{ print $4 }' | xargs -n1 sed 1q | grep '//' | less
-```
-
-I include this to demonstrate some restraint. The xargs, sed, and grep commands in this pipeline could all have been incorporated into awk, but it’s simpler not to.
-
-## Numbering lines from stdin
-
-Sometimes I have a file and I want it to have line numbers. So, I wrote a little shell one-liner that does the job:
-
-```
-$ cat ~/bin/lineno
-exec awk '{ print NR "\t" $0 }'
-$ lineno < /etc/os-release
-1 NAME="Alpine Linux"
-2 ID=alpine
-3 VERSION_ID=3.14.0_alpha20210212
-4 PRETTY_NAME="Alpine Linux edge"
-5 HOME_URL="https://alpinelinux.org/"
-6 BUG_REPORT_URL="https://bugs.alpinelinux.org/"
-```
-
-## In conclusion
-
-You’re doing yourself a disservice if you don’t know how to use awk. Awk is only applicable to a certain kind of problem, but it’s a problem you’ll encounter more often than you think. Plus, once you get thinking in awk terms, you’ll find yourself subtly formatting your data in awk-friendly ways :) Learn it!
-
-¹ Though special mention goes to ar (I dunno), cut (it’s useful), dd (for being a silly wart), ed (for not being installed on anyone’s system by default anymore, which pisses me off), and fort77 (for being specified by POSIX for some reason).
diff --git a/content/blog/awk-is-the-coolest-tool-you-dont-know.md b/content/blog/awk-is-the-coolest-tool-you-dont-know.md
@@ -1,5 +0,0 @@
----
-title: awk is the coolest tool you don't know
-date: 2021-05-03
-outputs: [gemtext]
----
diff --git a/content/blog/finger-client.gmi b/content/blog/finger-client.gmi
@@ -1,49 +0,0 @@
-This is a short follow-up to the io_uring finger server article posted about a month ago. In the time since, we have expanded our language with a more complete networking stack, most importantly by adding a DNS resolver. I have used these improvements to write a small client implementation of the finger protocol.
-
-=> gemini://drewdevault.com/2021/05/24/io_uring-finger-server.gmi Previously: Using io_uring to make a high-performance... finger server
-=> gemini://gemini.bortzmeyer.org/rfc-mirror/rfc1288.txt See also: RFC 1288
-
-```
-use fmt;
-use io;
-use net::dial;
-use os;
-use strings;
-
-@init fn registersvc() void = dial::registersvc("tcp", "finger", [], 79);
-@noreturn fn usage() void = fmt::fatal("Usage: {} <user>[@<host>]", os::args[0]);
-
-export fn main() void = {
- if (len(os::args) != 2) usage();
-
- const items = strings::split(os::args[1], "@");
- defer free(items);
- if (len(items) == 0) usage();
-
- const user = items[0];
- const host = if (len(items) == 1) "localhost"
- else if (len(items) == 2) items[1]
- else usage();
-
- match (execute(user, host)) {
- err: dial::error => fmt::fatal(dial::strerror(err)),
- err: io::error => fmt::fatal(io::strerror(err)),
- void => void,
- };
-};
-
-fn execute(user: str, host: str) (void | dial::error | io::error) = {
- const conn = dial::dial("tcp", host, "finger")?;
- defer io::close(conn);
- fmt::fprintf(conn, "{}\r\n", user)?;
- io::copy(os::stdout, conn)?;
-};
-```
-
-Technically, we could do more, but I chose to just address the most common use-case for finger servers in active use today: querying a specific user. Expanding this with full support for all finger requests would probably only grow this code by 2 or 3 times.
-
-Our language now provides a net::dial module, inspired by Go’s net.Dial and the Plan 9 dial function Go is derived from. Our dial actually comes a bit closer to Plan 9 by re-introducing the service parameter — Plan 9’s “tcp!example.org!http” becomes net::dial(“tcp”, “example.org”, “http”) in our language — which we use to find the port (unless you specify it in the address). The service parameter is tested against a small internal list of known services, and against /etc/services. We also automatically perform an SRV lookup for “_finger._tcp.example.org”, so most programs written in our language will support SRV records with no additional effort.
-
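-For comparison, the closest POSIX C analogue is getaddrinfo(3), which likewise resolves service names like “finger” against /etc/services (a sketch, without the SRV lookup, for which libc has no equivalent):
-
-```
-#include <netdb.h>
-#include <stdio.h>
-#include <string.h>
-#include <sys/socket.h>
-#include <unistd.h>
-
-int main(void) {
-	struct addrinfo hints = {0}, *res;
-	hints.ai_socktype = SOCK_STREAM;
-	/* "finger" is looked up in /etc/services and resolves to port 79 */
-	if (getaddrinfo("drewdevault.com", "finger", &hints, &res) != 0)
-		return 1;
-	int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
-	if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0)
-		return 1;
-	freeaddrinfo(res);
-
-	write(fd, "drew\r\n", 6);
-	char buf[1024];
-	ssize_t n;
-	while ((n = read(fd, buf, sizeof(buf))) > 0)
-		fwrite(buf, 1, (size_t)n, stdout);
-	close(fd);
-	return 0;
-}
-```
-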
-In our client code, we can see that the @init function adds “finger” to the list of known internal services. @init functions run on start-up, and this one just lets dial know about our protocol. Our network stack is open to extension in other respects, too — unlike Go, third-party libraries can define new protocol handlers for dial as well, perhaps opening it up in the future to networks like AF_BLUETOOTH, AF_AX25, and so on, complete with support for network-appropriate addresses and resolver functionality.
-
-The rest is pretty straightforward! We just parse the command line, dial the server, write the username to it, and splice the connection into stdout. Much simpler than the server. Future improvements might rewrite the CRLF to LF, but that’s not particularly important.
diff --git a/content/blog/finger-client.md b/content/blog/finger-client.md
@@ -1,7 +1,6 @@
---
title: A finger client
date: 2021-06-24
-outputs: [html, gemtext]
---
This is a short follow-up to the [io_uring finger server][0] article posted
diff --git a/content/blog/godocs.io-six-months-later.gmi b/content/blog/godocs.io-six-months-later.gmi
@@ -1,15 +0,0 @@
-We’re six months on from forking godoc.org following its upstream deprecation, and we’ve made a lot of great improvements since. For those unaware, the original godoc.org was replaced with pkg.go.dev, and a redirect was set up. The new website isn’t right for many projects — one of the most glaring issues is the narrow list of software licenses pkg.go.dev will display documentation for. To continue serving the needs of projects which preferred the old website, we forked the project and set up godocs.io.
-
-Since then, we’ve made a lot of improvements, both for the hosted version and for the open source project. Special thanks is due to Adnan Maolood, who has taken charge of a lot of these improvements, and also to a few other contributors who have helped in their own small ways. Since forking, we’ve:
-
-* Added Go modules support
-* Implemented Gemini access
-* Made most of the frontend JavaScript optional and simpler
-* Rewritten the search backend to use PostgreSQL
-
-We also substantially cleaned up the codebase, removing over 37,000 lines of code — 64% of the lines from the original code base. The third-party dependencies on Google infrastructure have been removed, and it’s much easier to run the software locally or on your intranet, too.
-
-What we have now is still the same GoDoc: the experience is very similar to the original godoc.org. However, we have substantially improved it: streamlining the codebase, making the UI more accessible, and adding a few important features, thanks to the efforts of just a small number of volunteers. We’re happy to be supporting the Go community with this tool, and we look forward to making more (conservative!) improvements in the future. Enjoy!
-
-=> gemini://godocs.io godocs.io via Gemini
-=> https://sr.ht/~sircmpwn/godocs.io/ Source code
diff --git a/content/blog/godocs.io-six-months-later.md b/content/blog/godocs.io-six-months-later.md
@@ -1,7 +1,6 @@
---
title: godocs.io six months later
date: 2021-05-07
-outputs: [html, gemtext]
---
We're six months on from [forking godoc.org][0] following its upstream
diff --git a/content/blog/godocs.io.gmi b/content/blog/godocs.io.gmi
@@ -1,21 +0,0 @@
-The inevitable forced migration to pkg.go.dev is shipping with known regressions. It’s rolling out without fixing, for example, a problem I pointed out in June. godoc.org uses special meta tags which can link up a source path and line number with a URL for a git host’s source view. pkg.go.dev instead hardcodes a list of hosted git forges. git.sr.ht is among them — getting it added was my introduction to this mess — but only the hosted instance, and no third-party git.sr.ht instances. hg.sr.ht is still missing, too. The Go team committed to not shipping with regressions, but that was (predictably) false.
-
-=> https://github.com/golang/go/issues/38986 Raising the issue in June
-
-I have previously written about the botched pkg.go.dev roll-out. It’s a microcosm of Google’s toxic engineering culture, demonstrating how Google engineers are out-of-touch with the values and needs of their community. It initially shipped as a proprietary replacement, with no intention of making it open source until the community demanded it, under the rationale that it wasn’t useful for intranets. They didn’t take the opportunity to address any of the flaws in godoc.org’s design, and instead introduced regressions and made those flaws even worse.
-
-=> https://drewdevault.com/2020/08/01/pkg-go-dev-sucks.html Previously on Drew Hates pkg.go.dev
-
-And it’s shipping, in this state, in Q1 2021.
-
-This pattern appears to be common to many Google affairs, because it’s the primary means by which Googlers get promoted.
-
-In light of this, godocs.io is now available, running the last version of godoc.org that pre-dates pkg.go.dev. It is my intention to keep this running indefinitely, and you’re welcome to use it for your own purposes or projects.
-
-=> https://godocs.io godocs.io
-
-I haven’t made any especially interesting changes, but the source code is available on git.sr.ht — feel free to send patches along to sir@cmpwn.com if you want to submit (minor) improvements or bugfixes. One change I will definitely have to make is a new solution for full-text search (probably just Postgres), because I don't want to use AppEngine. If anyone wants to help with this, I would appreciate your patch.
-
-=> https://git.sr.ht/~sircmpwn/gddo The code
-
-Update: implemented full-text search on top of Postgres in about 30 minutes with a friend's help. Aside from the dataset (which will grow naturally as the service is used), this should be for all intents and purposes equivalent to pre-pkg.go.dev godoc.org now. Enjoy!
diff --git a/content/blog/io_uring-finger-server.gmi b/content/blog/io_uring-finger-server.gmi
@@ -1,635 +0,0 @@
-I’m working on adding a wrapper for the Linux io_uring interface to my secret programming language project. To help learn more about io_uring and to test out the interface I was designing, I needed a small project whose design was well-suited for the value-add of io_uring. The Finger protocol is perfect for this! After being designed in the 70’s and then completely forgotten about for 50 years, it’s the perfect small and simple network protocol to test drive this new interface with.
-
-In short, finger will reach out to a remote server and ask for information about a user. It was used back in the day to find contact details like the user’s phone number, office address, email address, sometimes their favorite piece of ASCII art, and, later, a summary of the things they were working on at the moment. The somewhat provocative name allegedly comes from an older usage of the word to mean “a snitch” or a member of the FBI. The last useful RFC related to Finger is RFC 1288, circa 1991, which will be our reference for this server. If you want to give it a test drive, try this to ping the server we’ll be discussing today:
-
-```
-printf 'drew\r\n' | nc drewdevault.com 79
-```
-
-You might also have the finger command installed locally (try running “finger drew@drewdevault.com”), and you can try out the Castor browser by sourcehut user ~julienxx for a graphical experience.
-
-And what is io_uring? It is the latest interface for async I/O on Linux, and it’s pretty innovative and interesting. The basic idea is to set up some memory which is shared between the kernel and the userspace program, and stash a couple of ring buffers there that can be updated with atomic writes. Userspace appends submission queue entries (SQEs) to the submission queue (SQ), and the kernel processes the I/O requests they describe and then appends completion queue events (CQEs) to the completion queue (CQ). Interestingly, both sides can see this happening without entering the kernel with a syscall, which is a major performance boost. It more or less solves the async I/O problem for Linux, a problem which Linux (and Unix at large) has struggled with for a long time.
-
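-If you would rather poke at io_uring from C, liburing wraps the same submit/wait rhythm. A minimal sketch using liburing, which the code in this article otherwise does not use:
-
-```
-#include <fcntl.h>
-#include <liburing.h>
-#include <stdio.h>
-
-int main(void) {
-	struct io_uring ring;
-	io_uring_queue_init(8, &ring, 0);
-
-	int fd = open("/etc/hostname", O_RDONLY);
-	char buf[256];
-
-	/* Append an SQE describing the read to the submission queue */
-	struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
-	io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
-	io_uring_submit(&ring);
-
-	/* Reap the matching CQE from the completion queue */
-	struct io_uring_cqe *cqe;
-	io_uring_wait_cqe(&ring, &cqe);
-	printf("read %d bytes\n", cqe->res);
-	io_uring_cqe_seen(&ring, cqe);
-
-	io_uring_queue_exit(&ring);
-	return 0;
-}
-```
-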
-With that background in place, I’m going to walk you through my finger server’s code. Given that this is written in an as-of-yet unreleased programming language, I’ll do my best to help you decipher the alien code.
-
-> A quick disclaimer:
->
-> This language, the standard library, and the interface provided by linux::io_uring, are all works in progress and are subject to change. In particular, this program will become obsolete when we design a portable I/O bus interface, which on Linux will be backed by io_uring but on other systems will use kqueue, poll, etc.
->
-> As a rule of thumb, anything which uses rt:: or linux:: is likely to change or be moved behind a portable abstraction in the future.
-
-Let’s start with the basics:
-
-```
-use fmt;
-use getopt;
-use net::ip;
-use os;
-use strconv;
-use unix::passwd;
-
-def MAX_CLIENTS: uint = 128;
-
-export fn main() void = {
- let addr: ip::addr = ip::ANY_V6;
- let port = 79u16;
- let group = "finger";
-
- const cmd = getopt::parse(os::args,
- "finger server",
- ('B', "addr", "address to bind to (default: all)"),
- ('P', "port", "port to bind to (default: 79)"),
- ('g', "group", "user group enabled for finger access (default: finger)"));
- defer getopt::finish(&cmd);
-
- for (let i = 0z; i < len(cmd.opts); i += 1) {
- const opt = cmd.opts[i];
- switch (opt.0) {
- 'B' => match (ip::parse(opt.1)) {
- a: ip::addr => addr = a,
- ip::invalid => fmt::fatal("Invalid IP address"),
- },
- 'P' => match (strconv::stou16(opt.1)) {
- u: u16 => port = u,
- strconv::invalid => fmt::fatal("Invalid port"),
- strconv::overflow => fmt::fatal("Port exceeds range"),
- },
- 'g' => group = opt.1,
- };
- };
-
- const grent = match (passwd::getgroup(group)) {
- void => fmt::fatal("No '{}' group available", group),
- gr: passwd::grent => gr,
- };
- defer passwd::grent_finish(grent);
-};
-```
-
-None of this code is related to io_uring or finger; it just handles some initialization work. This is the daemon program, and it will accept some basic configuration via the command line. The getopt configuration shown here will produce the following help string:
-
-```
-$ fingerd -h
-fingerd: finger server
-
-Usage: ./fingerd [-B <addr>] [-P <port>] [-g <group>]
-
--B <addr>: address to bind to (default: all)
--P <port>: port to bind to (default: 79)
--g <group>: user group enabled for finger access (default: finger)
-```
-
-The basic idea is to make finger access opt-in for a given Unix account by adding them to the “finger” group. The “passwd::getgroup” lookup fetches that entry from /etc/group to identify the list of users for whom we should be serving finger access.
-
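-The C equivalent of that lookup is getgrnam(3), which reads the group database and exposes a NULL-terminated member list (a sketch):
-
-```
-#include <grp.h>
-#include <stdio.h>
-
-int main(void) {
-	struct group *gr = getgrnam("finger");
-	if (gr == NULL) {
-		fprintf(stderr, "No 'finger' group available\n");
-		return 1;
-	}
-	/* gr_mem lists the members who opted in to finger access */
-	for (char **m = gr->gr_mem; *m != NULL; m++)
-		printf("finger access enabled for: %s\n", *m);
-	return 0;
-}
-```
-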
-```
-let serv = match (net::listen(addr, port,
- 256: net::backlog, net::reuseport)) {
- err: io::error => fmt::fatal("listen: {}", io::strerror(err)),
- l: *net::listener => l,
-};
-defer net::shutdown(serv);
-fmt::printfln("Server running on :{}", port)!;
-```
-
-Following this, we set up a TCP listener. I went for a backlog of 256 connections (overkill for a finger server, but hey), and set reuseport so you can achieve CLOUD SCALE by running several daemons at once.
-
-Next, I set up the io_uring that we’ll be using:
-
-```
-// The ring size is 2 for the accept and sigfd read, plus 2 SQEs for
-// each of up to MAX_CLIENTS: either read/write plus a timeout, or up to
-// two close SQEs during cleanup.
-static assert(MAX_CLIENTS * 2 + 2 <= io_uring::MAX_ENTRIES);
-
-let params = io_uring::params { ... };
-let ring = match (io_uring::setup(MAX_CLIENTS * 2 + 2, &params)) {
- ring: io_uring::io_uring => ring,
- err: io_uring::error => fmt::fatal(io_uring::strerror(err)),
-};
-defer io_uring::finish(&ring);
-```
-
-If we were running this as root (and we often are, given that fingerd binds to port 79 by default), we could go syscall-free by adding io_uring::setup_flags::SQPOLL to params.flags, but this requires more testing on my part so I have not added it yet. With this configuration, we’ll need to use the io_uring_enter syscall to submit I/O requests.
-
-We also have to pick a queue size when setting up the uring. I planned this out so that we can have two SQEs in flight for every client at once — one for a read/write request and its corresponding timeout, or for the two “close” requests used when disconnecting the client — plus two extra entries, one for the “accept” call, and another to wait for signals from a signalfd.
-
-Speaking of signalfds:
-
-```
-let mask = rt::sigset { ... };
-rt::sigaddset(&mask, rt::SIGINT)!;
-rt::sigaddset(&mask, rt::SIGTERM)!;
-rt::sigprocmask(rt::SIG_BLOCK, &mask, null)!;
-let sigfd = signalfd::signalfd(-1, &mask, 0)!;
-defer rt::close(sigfd)!;
-
-const files = [net::listenerfd(serv) as int, sigfd];
-io_uring::register_files(&ring, files)!;
-
-const sqe = io_uring::must_get_sqe(&ring);
-io_uring::poll_add(sqe, 1, rt::POLLIN: uint, flags::FIXED_FILE);
-io_uring::set_user(sqe, &sigfd);
-```
-
-We haven’t implemented a high-level signal interface yet, so this is just using the syscall wrappers. I chose to use a signalfd here so I can monitor for SIGINT and SIGTERM with my primary I/O event loop, to (semi-)gracefully¹ terminate the server.
-
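-For reference, the equivalent signalfd setup in plain C, without the uring (a sketch of just the signal half):
-
-```
-#include <signal.h>
-#include <stdio.h>
-#include <sys/signalfd.h>
-#include <unistd.h>
-
-int main(void) {
-	sigset_t mask;
-	sigemptyset(&mask);
-	sigaddset(&mask, SIGINT);
-	sigaddset(&mask, SIGTERM);
-	/* Block normal delivery so the signals only arrive via the fd */
-	sigprocmask(SIG_BLOCK, &mask, NULL);
-	int sigfd = signalfd(-1, &mask, 0);
-
-	struct signalfd_siginfo si;
-	if (read(sigfd, &si, sizeof(si)) == sizeof(si))
-		printf("Caught signal %u, terminating\n", si.ssi_signo);
-	close(sigfd);
-	return 0;
-}
-```
-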
-This also happens to show off our first SQE submission. “must_get_sqe” will fetch the next SQE, asserting that there is one available, which relies on the math I explained earlier when planning for our queue size. Then, we populate this SQE with a “poll_add” operation, which polls on fixed file descriptor 1. The “register” call above adds the socket and signal file descriptors to the io_uring’s list of “fixed” file descriptors, and so with “flags::FIXED_FILE” this refers to the signalfd.
-
-We also set the user_data field of the SQE with “set_user”. This will be copied to the CQE later, and it’s necessary that we provide a unique value in order to correlate the CQE back to the SQE it refers to. We can use any value, and the address of the signalfd variable is a convenient number we can use for this purpose.
-
-There’s one more step — submitting the SQE — but that’ll wait until we set up more I/O. Next, I have set up a “context” structure which will store all of the state the server needs to work with, to be passed to functions throughout the program.
-
-```
-type context = struct {
- users: []str,
- clients: []*client,
- uring: *io_uring::io_uring,
-};
-
-// ...
-
-const ctx = context {
- users = grent.userlist,
- uring = &ring,
- ...
-};
-```
-
-The second “...” towards the end is not for illustrative purposes - it sets all of the remaining fields to their default values (in this case, clients becomes an empty slice).
-
-Finally, this brings us to the main loop:
-
-```
-let accept_waiting = false;
-for (true) {
- const peeraddr = rt::sockaddr { ... };
- const peeraddr_sz = size(rt::sockaddr): uint;
- if (!accept_waiting && len(ctx.clients) < MAX_CLIENTS) {
- const sqe = io_uring::must_get_sqe(ctx.uring);
- io_uring::accept(sqe, 0, &peeraddr, &peeraddr_sz,
- 0, flags::FIXED_FILE);
- io_uring::set_user(sqe, &peeraddr);
- accept_waiting = true;
- };
-
- io_uring::submit(&ring)!;
-
- let cqe = match (io_uring::wait(ctx.uring)) {
- err: io_uring::error => fmt::fatal("Error: {}",
- io_uring::strerror(err)),
- cqe: *io_uring::cqe => cqe,
- };
- defer io_uring::cqe_seen(&ring, cqe);
-
- const user = io_uring::get_user(cqe);
- if (user == &peeraddr) {
- accept(&ctx, cqe, &peeraddr);
- accept_waiting = false;
- } else if (user == &sigfd) {
- let si = signalfd::siginfo { ... };
- rt::read(sigfd, &si, size(signalfd::siginfo))!;
- fmt::errorln("Caught signal, terminating")!;
- break;
- } else for (let i = 0z; i < len(ctx.clients); i += 1) {
- let client = ctx.clients[i];
- if (user == client) {
- dispatch(&ctx, client, cqe);
- break;
- };
- };
-};
-```
-
-At each iteration, assuming we have room and aren’t already waiting on a new connection, we submit an “accept” SQE to fetch the next incoming client. This SQE accepts an additional parameter to write the client’s IP address to, which we provide via a pointer to our local peeraddr variable.
-
-We call “submit” at the heart of the loop to submit any SQEs we have pending (including both the signalfd poll and the accept call, but also anything our future client handling code will submit) to the io_uring, then wait for the next CQE from the kernel.
-
-When we get one, we defer a “cqe_seen”, which will execute at the end of the current scope (i.e. the end of this loop iteration) to advance our end of the completion queue, then figure out what I/O request was completed. The code earlier sets up SQEs for the accept and signalfd, which we check here. If a signal comes in, we read the details to acknowledge it and then terminate the loop. We also check if the user data was set to the address of any client state data, which we’ll use to dispatch for client-specific I/O later on. If a new connection comes in:
-
-```
-fn accept(ctx: *context, cqe: *io_uring::cqe, peeraddr: *rt::sockaddr) void = {
- const fd = match (io_uring::result(cqe)) {
- err: io_uring::error => fmt::fatal("Error: accept: {}",
- io_uring::strerror(err)),
- fd: int => fd,
- };
- const peer = net::ip::from_native(*peeraddr);
- const now = time::now(time::clock::MONOTONIC);
- const client = alloc(client {
- state = state::READ_QUERY,
- deadline = time::add(now, 10 * time::SECOND),
- addr = peer.0,
- fd = fd,
- plan_fd = -1,
- ...
- });
- append(ctx.clients, client);
- submit_read(ctx, client, client.fd, 0);
-};
-```
-
-This is fairly self-explanatory, but we do see the first example of how to determine the result from a CQE. The result field of the CQE structure the kernel fills in is set to what would normally be the return value of the equivalent syscall, and “linux::io_uring::result” is a convenience function which translates negative values (i.e. errno) into a more idiomatic result type.
-
-We choose a deadline here, 10 seconds from when the connection is established, for the entire exchange to be completed by. This helps to mitigate Slowloris attacks, though there are more mitigations we could implement.
-
-Our client state is handled by a state machine, which starts in the “READ_QUERY” state. Per the RFC, the client will be sending us a query, followed by a CRLF. Our initial state is prepared to handle this. The full client state structure is as follows:
-
-```
-type state = enum {
-	READ_QUERY,
-	OPEN_PLAN,
-	READ_PLAN,
-	WRITE_RESP,
-	WRITE_ERROR,
-};
-
-type client = struct {
-	state: state,
-	deadline: time::instant,
-	addr: ip::addr,
-	fd: int,
-	plan_fd: int,
-	plan_path: *const char,
-	xbuf: [2048]u8,
-	buf: []u8,
-};
-```
-
-Each field will be explained in due time. We add this to our list of active connections and call “submit_read”.
-
-```
-fn submit_read(ctx: *context, client: *client, fd: int, offs: size) void = {
-	const sqe = io_uring::must_get_sqe(ctx.uring);
-	const maxread = len(client.xbuf) / 2;
-	io_uring::read(sqe, fd, client.xbuf[len(client.buf)..]: *[*]u8,
-		maxread - len(client.buf), offs: u64, flags::IO_LINK);
-	io_uring::set_user(sqe, client);
-
-	let ts = rt::timespec { ... };
-	time::instant_to_timespec(client.deadline, &ts);
-	const sqe = io_uring::must_get_sqe(ctx.uring);
-	io_uring::link_timeout(sqe, &ts, timeout_flags::ABS);
-};
-```
-
-I’ve prepared two SQEs here. The first is a read, which will fill half of the client buffer with whatever they send us over the network (why half? I’ll explain later). It’s configured with “flags::IO_LINK”, which will link it to the following request: a timeout. This will cause the I/O to be cancelled if it doesn’t complete before the deadline we set earlier. “timeout_flags::ABS” specifies that the timeout is an absolute timestamp rather than a duration computed from the time of I/O submission.
-
-I set the user data to the client state pointer, which will be used the next time we have a go-around in the main event loop (feel free to scroll back up if you want to re-read that bit). The event loop will send the CQE to the dispatch function, which will choose the appropriate action based on the current client state.
-
-```
-fn dispatch(ctx: *context, client: *client, cqe: *io_uring::cqe) void = {
-	match (switch (client.state) {
-		state::READ_QUERY => client_query(ctx, client, cqe),
-		state::OPEN_PLAN => client_open_plan(ctx, client, cqe),
-		state::READ_PLAN => client_read_plan(ctx, client, cqe),
-		state::WRITE_RESP, state::WRITE_ERROR =>
-			client_write_resp(ctx, client, cqe),
-	}) {
-		err: error => disconnect_err(ctx, client, err),
-		void => void,
-	};
-};
-```
-
-> What’s the difference between match and switch? The former works with types, and switch works with values. We might attempt to merge these before the language’s release, but for now the distinction simplifies our design.
-
-I've structured the client state machine into four states based on the kind of I/O they handle, plus a special case for error handling:
-
-* Reading the query from the client
-* Opening the plan file for the requested user
-* Reading from the plan file
-* Forwarding its contents to the client
-
-=> https://l.sr.ht/p5yc.svg State machine diagram (SVG)
-
-Each circle in this diagram represents a point where we will submit some I/O to our io_uring instance and return to the event loop. If any I/O resulted in an error, we’ll follow the dotted line to the error path, which transmits the error to the user (and if an error occurs during error transmission, we’ll immediately disconnect them, but that’s not shown here).
-
-I need to give a simplified introduction to error handling in this new programming language before we move on, so let’s take a brief detour. In this language, we require the user to explicitly do something about errors. Generally speaking, there are three somethings that you will do:
-
-* Some context-appropriate response to an error condition
-* Bumping the error up to the caller to deal with
-* Asserting that the error will never happen in practice
-
-The latter two options have special operators (“?” and “!”, respectively, used as postfix operators on expressions which can fail), and the first option is handled manually in each situation as appropriate. It’s usually most convenient to use ? to pass errors up the stack, but the buck has got to stop somewhere. In the code we’ve seen so far, we’re in or near the main function — the top of the call stack — and so have to handle these errors manually, usually by terminating the program with “!”. But, when a client causes an error, we cannot terminate the program without creating a DoS vulnerability. This “dispatch” function sets up common client error handling accordingly, allowing later functions to use the “?” operator to pass errors up to it.
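-
-To make those three options concrete, here is a hedged sketch in the same language. “parse” is a hypothetical fallible function; the syntax follows the samples in this post.
-
-```
-fn parse(s: str) (int | invalid_query); // hypothetical
-
-fn caller() (void | error) = {
-	// 1. Respond to the error in a context-appropriate way:
-	const n = match (parse("1")) {
-		invalid_query => 0, // fall back to a default
-		v: int => v,
-	};
-	// 2. Bump the error up to our own caller:
-	const m = parse("2")?;
-	// 3. Assert that the error will never happen in practice:
-	const k = parse("3")!;
-};
-```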
-
-To represent the errors themselves, we use a lightweight approach to tagged unions, similar to a result type. Each error type, optionally with some extra metadata, is enumerated, along with any possible successful types, as part of a function’s return type. The only difference between an error type and a normal type is that the former is denoted with a “!” modifier — so you can store any representable state in an error type.
-
-I also wrote an “errors” file which provides uniform error handling for all of the various error conditions we can expect to occur in this program. This includes all of the error conditions that we define ourselves, as well as any errors we expect to encounter from modules we depend on. The result looks like this:
-
-```
-use fs;
-use io;
-use linux::io_uring;
-
-type unexpected_eof = !void;
-type invalid_query = !void;
-type no_such_user = !void;
-type relay_denied = !void;
-type max_query = !void;
-type error = !(
-	io::error |
-	fs::error |
-	io_uring::error |
-	unexpected_eof |
-	invalid_query |
-	no_such_user |
-	relay_denied |
-	max_query
-);
-
-fn strerror(err: error) const str = {
-	match (err) {
-		err: io::error => io::strerror(err),
-		err: fs::error => fs::strerror(err),
-		err: io_uring::error => io_uring::strerror(err),
-		unexpected_eof => "Unexpected EOF",
-		invalid_query => "Invalid query",
-		no_such_user => "No such user",
-		relay_denied => "Relay access denied",
-		max_query => "Maximum query length exceeded",
-	};
-};
-```
-
-With an understanding of error handling under our belt, let’s re-read the dispatch function, which provides the common error handling path for all client issues:
-
-```
-fn dispatch(ctx: *context, client: *client, cqe: *io_uring::cqe) void = {
-	match (switch (client.state) {
-		state::READ_QUERY => client_query(ctx, client, cqe),
-		state::OPEN_PLAN => client_open_plan(ctx, client, cqe),
-		state::READ_PLAN => client_read_plan(ctx, client, cqe),
-		state::WRITE_RESP, state::WRITE_ERROR =>
-			client_write_resp(ctx, client, cqe),
-	}) {
-		err: error => disconnect_err(ctx, client, err),
-		void => void,
-	};
-};
-```
-
-Each dispatched-to function returns a tagged union of (void | error), the latter being our common error type. If they return void, we do nothing, but if an error occurred, we call “disconnect_err”.
-
-```
-fn disconnect_err(ctx: *context, client: *client, err: error) void = {
-	fmt::errorfln("{}: Disconnecting with error: {}",
-		ip::string(client.addr), strerror(err))!;
-
-	const forward = match (err) {
-		(unexpected_eof | invalid_query | no_such_user
-			| relay_denied | max_query) => true,
-		* => false,
-	};
-	if (!forward) {
-		disconnect(ctx, client);
-		return;
-	};
-
-	client.buf = client.xbuf[..];
-	const s = fmt::bsprintf(client.buf, "Error: {}\r\n", strerror(err));
-	client.buf = client.buf[..len(s)];
-	client.state = state::WRITE_ERROR;
-	submit_write(ctx, client, client.fd);
-};
-
-fn disconnect(ctx: *context, client: *client) void = {
-	const sqe = io_uring::must_get_sqe(ctx.uring);
-	io_uring::close(sqe, client.fd);
-	if (client.plan_fd != -1) {
-		const sqe = io_uring::must_get_sqe(ctx.uring);
-		io_uring::close(sqe, client.plan_fd);
-	};
-
-	let i = 0z;
-	for (i < len(ctx.clients); i += 1) {
-		if (ctx.clients[i] == client) {
-			break;
-		};
-	};
-	delete(ctx.clients[i]);
-	free(client);
-};
-```
-
-We log the error here, and for certain kinds of errors, we “forward” them to the client by writing them to our client buffer and going into the “WRITE_ERROR” state. For other errors, we just drop the connection.
-
-The disconnect function disconnects the client immediately: it queues io_uring submissions to close the file descriptors associated with the client, then removes it from the list of clients.
-
-Let’s get back to the happy path. Remember the read SQE we submitted when the client established the connection? When the CQE comes in, the state machine directs us into this function:
-
-```
-fn client_query(ctx: *context, client: *client, cqe: *io_uring::cqe) (void | error) = {
-	const r = io_uring::result(cqe)?;
-	if (r <= 0) {
-		return unexpected_eof;
-	};
-
-	const r = r: size;
-	if (len(client.buf) + r > len(client.xbuf) / 2) {
-		return max_query;
-	};
-
-	client.buf = client.xbuf[..len(client.buf) + r];
-
-	// The RFC requires queries to use CRLF, but it is also one of the few
-	// RFCs which explicitly reminds you to, quote, "as with anything in the
-	// IP protocol suite, 'be liberal in what you accept'", so we accept LF
-	// as well.
-	let lf = match (bytes::index(client.buf, '\n')) {
-		z: size => z,
-		void => {
-			if (len(client.buf) == len(client.xbuf) / 2) {
-				return max_query;
-			};
-			submit_read(ctx, client, client.fd, 0);
-			return;
-		},
-	};
-	if (lf > 0 && client.buf[lf - 1] == '\r': u8) {
-		lf -= 1; // CRLF
-	};
-	const query = match (strings::try_fromutf8(client.buf[..lf])) {
-		* => return invalid_query,
-		q: str => q,
-	};
-
-	fmt::printfln("{}: finger {}", ip::string(client.addr), query)!;
-	const plan = process_query(ctx, query)?;
-	defer free(plan);
-
-	client.plan_path = strings::to_c(plan);
-
-	const sqe = io_uring::must_get_sqe(ctx.uring);
-	io_uring::openat(sqe, rt::AT_FDCWD, client.plan_path, rt::O_RDONLY, 0);
-	io_uring::set_user(sqe, client);
-	client.state = state::OPEN_PLAN;
-};
-```
-
-The first half of this function figures out if we’ve received a full line, including CRLF. The second half parses this line as a finger query and prepares to fulfill the enclosed request.
-
-The read operation behaves like the read(2) syscall, which returns 0 on EOF. We aren’t expecting an EOF in this state, so if we see this, we boot them out. We also have a cap on our buffer length, so we return the max_query error if it’s been exceeded. Otherwise, we look for a line feed. If there isn’t one, we submit another read to get more from the client, but if a line feed is there, we trim off a carriage return (if present) and decode the completed query as a UTF-8 string.
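-
-For reference, the wire exchange we’re parsing here is about as simple as protocols get. A hedged example session (hypothetical user and plan contents):
-
-```
-C: drew\r\n
-S: Hello from my ~/.plan file!\r\n
-S: (...further lines, until the server closes the connection)
-```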
-
-We call “process_query” (using the error propagation operator to bubble up errors), which returns the path to the requested user’s ~/.plan file. We’ll look at the guts of that function in a moment. The return value is heap allocated, so we defer a free for later.
-
-Strings in our language are not null terminated, but io_uring expects them to be. This is another case which will be addressed transparently once we build a higher-level, portable interface. For now, though, we need to call “strings::to_c” ourselves, and stash it on the client struct. It’s heap allocated, so we’ll free it in the next state when the I/O submission completes.
-
-Speaking of which, we finish this process after preparing the next I/O operation — opening the plan file — and setting the client state to the next step in the state machine.
-
-Before we move on, though, I promised that we’d talk about the process_query function. Here it is in all of its crappy glory:
-
-```
-use path;
-use strings;
-use unix::passwd;
-
-fn process_query(ctx: *context, q: str) (str | error) = {
-	if (strings::has_prefix(q, "/W") || strings::has_prefix(q, "/w")) {
-		q = strings::sub(q, 2, strings::end);
-		for (strings::has_prefix(q, " ") || strings::has_prefix(q, "\t")) {
-			q = strings::sub(q, 1, strings::end);
-		};
-	};
-	if (strings::contains(q, '@')) {
-		return relay_denied;
-	};
-
-	const user = q;
-	const pwent = match (passwd::getuser(user)) {
-		void => return no_such_user,
-		p: passwd::pwent => p,
-	};
-	defer passwd::pwent_finish(pwent);
-
-	let enabled = false;
-	for (let i = 0z; i < len(ctx.users); i += 1) {
-		if (user == ctx.users[i]) {
-			enabled = true;
-			break;
-		};
-	};
-	if (!enabled) {
-		return no_such_user;
-	};
-
-	return path::join(pwent.homedir, ".plan");
-};
-```
-
-The grammar described in RFC 1288 is pretty confusing, but most of it is to support features I’m not interested in for this simple implementation, like relaying to other finger hosts or requesting additional information. I think I’ve “parsed” most of the useful bits here, and ultimately I’m aiming to end up with a single string: the username whose details we want. I grab the user’s passwd entry and check if they’re a member of the “finger” group we populated way up there in the first code sample. If so, we pull the path to their homedir out of the passwd entry, join it with “.plan”, and send it up the chain.
-
-At this point we’ve received, validated, and parsed the client’s query, and looked up the plan file we need. The next step is to open the plan file, which is where we left off at the end of the last function. The I/O we prepared there takes us here when it completes:
-
-```
-fn client_open_plan(
-	ctx: *context,
-	client: *client,
-	cqe: *io_uring::cqe,
-) (void | error) = {
-	free(client.plan_path);
-
-	client.plan_fd = io_uring::result(cqe)?;
-	client.buf = client.xbuf[..0];
-	client.state = state::READ_PLAN;
-	submit_read(ctx, client, client.plan_fd, -1);
-};
-```
-
-By now, this should be pretty comprehensible. I will clarify what the “[..0]” syntax does here, though. This language has slices, which store a pointer to an array, a length, and a capacity. In our client state, xbuf is a fixed-length array which provides the actual storage, and “buf” is a slice of that array, which acts as a kind of cursor, telling us what portion of the buffer is valid. The result of this expression is to take a slice up to, but not including, the 0th item of that array — in other words, an empty slice. The address and capacity of the slice still reflect the traits of the underlying array, however, which is what we want.
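-
-As a hedged illustration of that cursor pattern (hypothetical byte counts; the syntax is as in the rest of this post):
-
-```
-// client.xbuf is [2048]u8; client.buf is a []u8 slice of it
-client.buf = client.xbuf[..0];	// empty: length 0, capacity 2048
-// After, say, five bytes arrive at the start of xbuf:
-client.buf = client.xbuf[..5];	// length 5, same underlying storage
-```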
-
-We’re now ready to read data out of the user’s plan file. We submit a read operation for that file descriptor, and when it completes, we’ll end up here:
-
-```
-fn client_read_plan(
-	ctx: *context,
-	client: *client,
-	cqe: *io_uring::cqe,
-) (void | error) = {
-	const r = io_uring::result(cqe)?;
-	if (r == 0) {
-		disconnect(ctx, client);
-		return;
-	};
-
-	client.buf = client.xbuf[..r];
-
-	// Convert LF to CRLF
-	//
-	// We always read a maximum of the length of xbuf over two so that we
-	// have room to insert these.
-	let seencrlf = false;
-	for (let i = 0z; i < len(client.buf); i += 1) {
-		switch (client.buf[i]) {
-			'\r' => seencrlf = true,
-			'\n' => if (!seencrlf) {
-				static insert(client.buf[i], '\r');
-				i += 1;
-			},
-			* => seencrlf = false,
-		};
-	};
-
-	client.state = state::WRITE_RESP;
-	submit_write(ctx, client, client.fd);
-};
-```
-
-Again, the read operation for io_uring behaves similarly to the read(2) syscall, so it returns the number of bytes read. If this is zero (i.e. EOF), we can terminate the state machine and disconnect the client (this is a nominal disconnect, so we don’t use disconnect_err here). If it’s nonzero, we set our buffer slice to the subset of the buffer which represents the data io_uring has read.
-
-The Finger RFC requires all data to use CRLF for line endings, and this is where we deal with it. Remember earlier when I noted that we only ever use half of the read buffer? This is why: if we read 1024 newlines from the plan file, we will need another 1024 bytes to insert carriage returns. Because we’ve planned for and measured out our memory requirements in advance, we can use “static insert” here. This built-in works like insert normally does, but it will never re-allocate the underlying array. Instead, it asserts that the insertion would not require a re-allocation, and if it turns out that you did the math wrong, it aborts the program rather than overflowing the buffer. But we did the math and it works out, so it saves us from an extra allocation.
-
-Capping this off, we submit a write to transmit this buffer to the client. “submit_write” is quite similar to submit_read:
-
-```
-fn submit_write(ctx: *context, client: *client, fd: int) void = {
-	const sqe = io_uring::must_get_sqe(ctx.uring);
-	io_uring::write(sqe, fd, client.buf: *[*]u8, len(client.buf),
-		0, flags::IO_LINK);
-	io_uring::set_user(sqe, client);
-
-	let ts = rt::timespec { ... };
-	time::instant_to_timespec(client.deadline, &ts);
-	const sqe = io_uring::must_get_sqe(ctx.uring);
-	io_uring::link_timeout(sqe, &ts, timeout_flags::ABS);
-};
-```
-
-Ideally, this should not require explanation. From here we transition to the WRITE_RESP state, so when the I/O completes we end up here:
-
-```
-fn client_write_resp(
-	ctx: *context,
-	client: *client,
-	cqe: *io_uring::cqe,
-) (void | error) = {
-	const r = io_uring::result(cqe)?: size;
-	if (r < len(client.buf)) {
-		client.buf = client.buf[r..];
-		submit_write(ctx, client, client.fd);
-		return;
-	};
-
-	if (client.state == state::WRITE_ERROR) {
-		disconnect(ctx, client);
-		return;
-	};
-
-	client.buf = client.xbuf[..0];
-	client.state = state::READ_PLAN;
-	submit_read(ctx, client, client.plan_fd, -1);
-};
-```
-
-First, we check if we need to repeat this process: if we have written less than the size of the buffer, then we advance the slice by that much and submit another write.
-
-We can arrive at the next bit for two reasons: either "client.buf" holds a fragment of a plan file which has just been transmitted to the client, which we covered above, or it holds the error message prepared by "disconnect_err", which we discussed earlier. The dispatch function brings us here for both the normal and error states, and we distinguish between them with this second if statement. If we're sending the plan file, we submit a read for the next buffer-ful of plan. Our error messages always fit into one buffer, though, so in the error case there is nothing more to send and we can simply disconnect.
-
-And that’s it! That completes our state machine, and I’m pretty sure we’ve read the entire program’s source code by this point. Pretty neat, huh? io_uring is quite interesting. I plan on using this as a little platform upon which I can further test our io_uring implementation and develop a portable async I/O abstraction. We haven’t implemented a DNS resolver for the stdlib yet, but I’ll also be writing a finger client (using synchronous I/O this time) once we do.
-
-If you really wanted to max out the performance for a CLOUD SCALE WEB 8.0 XTREME PERFORMANCE finger server, we could try a few additional improvements:
-
-* Adding an internal queue for clients until we have room for their I/O in the SQ
-* Using a shared buffer pool with the kernel, with io_uring ops like READ_FIXED
-* Batching requests for the same plan file by only answering requests for it every Nth millisecond (known to some as the “data loader” pattern)
-* More Slowloris mitigations, such as limiting open connections per IP address
-
-It would also be cool to handle SIGHUP to reload our finger group membership list without restarting the daemon. I would say “patches welcome”, but I won’t share the git repo until the language is ready. And the code is GPL’d, but not AGPL’d, so you aren’t entitled to it if you finger me!
diff --git a/content/blog/io_uring-finger-server.md b/content/blog/io_uring-finger-server.md
@@ -1,7 +1,6 @@
---
title: Using io_uring to make a high-performance... finger server
date: 2021-05-24
-outputs: [html, gemtext]
---
I'm working on adding a wrapper for the [Linux io_uring interface][0] to my
diff --git a/content/blog/skytree.gmi b/content/blog/skytree.gmi
@@ -1,29 +0,0 @@
-=> gemini://blekksprut.net/%e6%97%a5%e5%b8%b8%e9%91%91%e8%b3%9e/2020-11-29 RE: The time of year when it's getting cold
-
-Three years ago, I also visited the neighborhood of the Skytree. It wasn't a rainy day, but it was cloudy.
-
-=> /pictures/skytree.jpg The Skytree, from a distance
-
-=> /pictures/bike.jpg A bicycle
-
-=> /pictures/flower.jpg A flower
-
-=> /pictures/manhole.jpg A manhole cover
-
-=> /pictures/park.jpg A park
-
-=> /pictures/turtle.jpg A turtle
-
-=> /pictures/scientology.jpg Scientology, lol
-
-The next day, I went to Hatsune Miku's 10th anniversary concert!
-
-=> /pictures/expo.jpg The expo hall
-
-=> /pictures/senbonsakura.jpg A kid dressed for Senbonzakura
-
-=> /pictures/map.jpg Putting a sticker on America
-
-=> /pictures/venue.jpg The venue
-
-=> /pictures/live.png The live show
diff --git a/content/blog/skytree.md b/content/blog/skytree.md
@@ -1,5 +0,0 @@
----
-title: My Skytree trip
-outputs: [gemtext]
-date: 2020-11-29
----
diff --git a/content/blog/tar-is-good-actually.gmi b/content/blog/tar-is-good-actually.gmi
@@ -1,54 +0,0 @@
-Let's talk about tar, that tool that everyone loves to hate.
-
-=> https://xkcd.com/1168/ xkcd: tar
-
-For my part, I think tar is a really useful and under-rated tool. And unlike Cueball, I could defuse that bomb 😉
-
-tar is a shell utility for making archives, often called "tarballs", which store several files. tar is a Unix-native file format, and includes information about file ownership and modes. This makes it very useful for moving groups of files around on Unix systems.
-
-The general usage of tar, like most Unix programs, is the following:
-
-```
-$ tar [-options] <files...>
-```
-
-The options you specify must include at least one operation, and optionally some number of flags. The most useful operations are create (-c), extract (-x), and list (-t). The most common flags are -C (to change the working directory first), -z (to add gzip compression), and -v (verbose mode, prints out each file name it archives or extracts).
-
-To create a tar archive which contains the files "a.txt", "b.txt", and "c.txt", you can do the following:
-
-```
-$ tar -c a.txt b.txt c.txt > files.tar
-```
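-
-You can verify what went into an archive with the list operation, -t (add -v to also print ownership and modes):
-
-```
-$ tar -t < files.tar
-a.txt
-b.txt
-c.txt
-```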
-
-You can add compression in one of two ways:
-
-```
-$ tar -c a.txt b.txt c.txt | gzip > files.tar.gz
-$ tar -cz a.txt b.txt c.txt > files.tar.gz
-```
-
-Shell globbing is often useful here. To create an archive of all of the text files in documents:
-
-```
-$ tar -c documents/*.txt > documents.tar
-```
-
-Getting the path right is often important: tar uses the paths which appear on the command line. In this example, the tarball contains a directory called "documents" which is full of text files. Extracting this tarball somewhere else will create a "documents" directory and fill it with these files. If you want to store the files in the root of the tarball, you need to change to the "documents" directory first. You can do this in your shell, with cd, or you can use the -C flag:
-
-```
-$ tar -C documents/ -c text/ index.txt > documents.tar
-```
-
-This creates a tarball of the files at "documents/text/" and "documents/index.txt", but roots them in the tarball as "text/" and "index.txt".
-
-You'll note that I'm consistently using pipes and redirection in each of these examples. Another important use-case for tar is to move several files at once through a Unix pipeline. For instance, you can transfer files without scp or rsync like so:
-
-```
-$ ssh hostname tar -C sources/ -cz linux | pv | tar -xzv
-```
-
-This SSHes into "hostname", tars up "sources/linux/", and extracts it to linux/ on the client machine. The -xzv flags extract, unzip, and print each file name. Combined with pv, this gives me a nice progress indication for the transfer.
-
-=> http://www.ivarch.com/programs/pv.shtml pv: I install it on all of my machines
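-
-The same trick works in reverse, pushing local files to the remote machine:
-
-```
-$ tar -C sources/ -cz linux | pv | ssh hostname tar -xzv
-```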
-
-That's all for today. I think tar is pretty cool and gets an unfairly bad rep. You should mess around with it sometime. Cheers!
diff --git a/content/blog/tar-is-good-actually.md b/content/blog/tar-is-good-actually.md
@@ -1,5 +0,0 @@
----
-title: tar is good actually
-date: 2022-02-17
-outputs: ["gemtext"]
----
diff --git a/content/blog/visurf-announcement.gmi b/content/blog/visurf-announcement.gmi
@@ -1,34 +0,0 @@
-I’ve started a new side project that I would like to share with you: visurf. visurf, or nsvi, is a NetSurf frontend which provides vi-inspired key bindings and a lightweight Wayland UI with few dependencies. It’s still a work-in-progress, and is not ready for general use yet. I’m letting you know about it today in case you find it interesting and want to help.
-
-=> https://sr.ht/~sircmpwn/visurf visurf
-=> https://www.netsurf-browser.org NetSurf
-
-NetSurf is a project which has been on my radar for some time. It is a small web browser engine, developed in C independently of the lineage of WebKit and Gecko which defines the modern web today. It mostly supports HTML4 and CSS2, plus only a small amount of HTML5 and CSS3. Its JavaScript support, while present, is very limited. Given the epidemic of complexity in the modern web, I am pleased by the idea of a small browser, more limited in scope, which perhaps requires the cooperation of like-minded websites to support a pleasant experience.
-
-=> https://drewdevault.com/2020/03/18/Reckless-limitless-scope.html Previously: The reckless, infinite scope of web browsers
-
-I was a qutebrowser user for a long time, and I think it’s a great project given the constraints that it’s working in — namely, the modern web. But I reject the modern web, and qute is just as much a behemoth of complexity as the rest of its lot. Due to stability issues, I finally ended up abandoning it for Firefox several months ago.
-
-=> https://qutebrowser.org qutebrowser
-
-The UI paradigm of qutebrowser’s modal interface, inspired by vi, is quite nice. I tried to use Tridactyl, but it’s a fundamentally crippled experience due to the limitations of Web Extensions on Firefox. Firefox has more problems besides — it may be somewhat more stable, but it’s ultimately still an obscenely complex, monstrous codebase, owned by an organization which cares less and less about my needs with each passing day. A new solution is called for.
-
-Here’s where visurf comes in. Here’s a video of it in action:
-
-=> https://mirror.drewdevault.com/visurf.webm Video demonstrating basic visurf features
-
-I hope that this project will achieve these goals:
-
-* Create a nice new web browser
-* Drive interest in the development of NetSurf
-* Encourage more websites to build with scope-constrained browsers in mind
-
-The first goal will involve fleshing out this web browser, and I could use your help. Please join #netsurf on irc.libera.chat, browse the issue tracker, and send patches if you are able. Some features I have in mind for the future are things like interactive link selection, a built-in readability mode to simplify the HTML of articles around the web, and automatic redirects to take advantage of tools like Nitter. However, there are also more fundamental features to do, like clipboard support, command completion, even key repeat. There is much to do.
-
-=> https://todo.sr.ht/~sircmpwn/visurf Open tickets for visurf
-=> https://lists.sr.ht/~sircmpwn/visurf-devel Development mailing list for visurf
-=> https://github.com/zedeus/nitter See also: Nitter
-
-I also want to get people interested in improving NetSurf. I don’t want to see it become a “modern” web browser, and frankly I think that’s not even possible, but I would be pleased to see more people helping to improve its existing features, and expand them to include a reasonable subset of the modern web. I would also like to add Gemini support. I don’t know if visurf will ever be taken upstream, but I have been keeping in touch with the NetSurf team while working on it, and if they’re interested, it would be easy to see that through. Regardless, any improvements to visurf or to NetSurf will also improve the other.
-
-To support the third goal, I plan on overhauling sourcehut’s frontend, and in the course of that work we will be building a new HTML+CSS framework (think Bootstrap) which treats smaller browsers like NetSurf as a first-class target. The goal for this effort will be to provide a framework that allows for conservative use of newer browser features, with suitable fallbacks, with enough room for each website to express its own personality in a manner which is beautiful and useful on all manner of web browsers.
diff --git a/content/blog/visurf-announcement.md b/content/blog/visurf-announcement.md
@@ -1,7 +1,6 @@
---
title: visurf, a web browser based on NetSurf
date: 2021-09-11
-outputs: [html, gemtext]
---
I've started a new side project that I would like to share with you:
diff --git a/content/gmni.gmi b/content/gmni.gmi
@@ -1,16 +0,0 @@
-# gmni: a Gemini client
-
-gmni is a client for the Gemini protocol. Included are:
-
-* A CLI utility (like curl): gmni
-* A line-mode browser: gmnlm
-
-=> https://sr.ht/~sircmpwn/gmni Development information
-=> https://git.sr.ht/~sircmpwn/gmni Source code (git)
-
-Browser features:
-
-* Page history
-* Regex searches
-* Bookmarks
-* TOFU support
diff --git a/content/gmnisrv.gmi b/content/gmnisrv.gmi
@@ -1,34 +0,0 @@
-# gmnisrv: a Gemini server
-
-gmnisrv is a high-performance Gemini server for POSIX systems.
-
-=> https://sr.ht/~sircmpwn/gmnisrv Development information
-=> https://git.sr.ht/~sircmpwn/gmnisrv Source code (git)
-
-Features:
-
-* Zero-configuration TLS
-* CGI script support
-* Directory auto-indexing
-* Multiple domains and routing configs
-* Regex routing and URL rewriting
-
-The configuration is straightforward:
-
-```
-listen=0.0.0.0:1965 [::]:1965
-
-[:tls]
-store=/var/lib/gemini/certs
-organization=gmnisrv user
-
-[example.org]
-root=/srv/gemini/example.org
-
-[example.org:/cgi-bin]
-root=/srv/gemini/example.org/cgi-bin
-cgi=on
-
-[example.com]
-root=/srv/gemini/example.com
-```
diff --git a/content/kineto.gmi b/content/kineto.gmi
@@ -1,7 +0,0 @@
-# kineto: an HTTP to Gemini proxy
-
-Kineto is an HTTP to Gemini proxy designed to service a single domain, i.e. to make your Gemini site available over HTTP.
-
-=> https://sr.ht/~sircmpwn/kineto Development information (http)
-=> https://git.sr.ht/~sircmpwn/kineto Source code (git)
-=> https://portal.drewdevault.com Live demo (http)
diff --git a/layouts/blog/single.gmi b/layouts/blog/single.gmi
@@ -1,17 +0,0 @@
-# {{$.Title | safeHTML}}
-
-{{ trim (readFile (replace (replace $.File.Path ".md" ".gmi") ".html" ".gmi")) "\n" | safeHTML }}
-
-```An ASCII art rocket
- \ \_____
-###[==_____>
- /_/
-```
-
-“{{$.Title | safeHTML}}” was published on {{.Date.Format "January 2, 2006"}}.
-
-=> / Back to the home page{{ with .OutputFormats.Get "html" }}
-=> {{.Permalink}} View “{{$.Title | safeHTML}}” on the WWW
-{{- end }}
-
-The content for this site is CC-BY-SA. The code for this site is MIT.
diff --git a/layouts/index.gmi b/layouts/index.gmi
@@ -1,13 +0,0 @@
-{{readFile (replace (replace $.File.Path ".md" ".gmi") ".html" ".gmi") | safeHTML}}
-## Blog posts
-{{ range (where .Site.RegularPages "Section" "blog") }}
-{{- if .OutputFormats.Get "gemtext" }}
-=> {{replace .Permalink "/gemini" "" 1}} {{.Date.Format "2006-01-02"}}: {{.Title | safeHTML}}{{ end }}{{ end }}
-
-=> /feed.xml RSS feed
-
-A backlog of additional articles is available on the World Wide Web:
-
-=> https://drewdevault.com Drew DeVault's blog
-
-The content for this site is CC-BY-SA. The code for this site is MIT.