Notes
Cloudflare, the Medium, and the Message
I've been saying this for years: Cloudflare's approach to information security is miles ahead of other infrastructure providers. Their latest annual founders' letter reinforces my conviction.
Cloudflare clearly understands that the internet's operating model is undergoing a fundamental vibe-shift. They recognize that industrial-scale content production and content theft present massive challenges for creators, consumers, and infrastructure providers. But I think this isn't just a technical problem; it's also a societal one. And Cloudflare gets it.
Casually dropping keywords like "Answer Engines" and "Traffic != value" isn't innocent. While they kept the details relatively high-level, the strategic direction is clear.
After years of focusing on the medium, Cloudflare is now turning their attention to the value of the message itself. This evolution positions them uniquely as the internet continues to transform.
-
Cloudflare’s 2025 Annual Founders’ Letter
blog.cloudflare.com
moralhardcandy by Blasphemy (demo)
An era-defining demo of the pure software rendering days. Lots of alpha layers, a clean design, beautiful colors, and an incredible IDM soundtrack. It's been a favorite of mine since its release in 1999.
-
moralhardcandy by blasphemy
pouet.net
Chevy Ray on Creating Hundreds of Fonts Using Rust
Chevy Ray goes into a lot of detail on building her own tool to generate 175 (!!) pixel fonts. The post walks through the technical implementation, including converting pixel clusters into TrueType contours, calculating kerning automatically, and deploying everything to itch.io with command-line scripts. Very cool read.
-
Chevy Ray | How I Created 175 Fonts Using Rust
chevyray.dev
Making Discogs Data 13% Smaller with Parquet
Recently, I have been working with the Discogs data dumps. Discogs uploads monthly dumps of its database in a gzipped XML format, one dump each for artists, labels, masters, and releases. I was curious about converting them to the Parquet file format. Parquet is a binary columnar file format heavily used in data engineering; it allows a different compression algorithm per column, supports nested structures, and is natively supported by databases such as ClickHouse and DuckDB. I was mostly curious about the size of a Parquet file versus a compressed XML file. Would the Parquet files be smaller than the gzipped XML? If so, by how much? And what would the conversion speed be?
Implementation
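The conversion script isn't reproduced in full here, but as a minimal sketch of the approach, assuming Python with pyarrow: stream-parse the gzipped XML with iterparse and write Parquet in batches. The fields below are a simplified subset of the labels dump, and the zstd codec is just one choice, not necessarily what I used for the numbers further down.

```python
import gzip
import xml.etree.ElementTree as ET

import pyarrow as pa
import pyarrow.parquet as pq

# Assumed, simplified schema for the labels dump; the real dump has more fields.
SCHEMA = pa.schema([
    ("id", pa.int64()),
    ("name", pa.string()),
    ("profile", pa.string()),
])


def labels_to_parquet(xml_gz_path: str, parquet_path: str, batch_size: int = 100_000) -> None:
    """Stream a gzipped Discogs labels dump into a Parquet file, batch by batch."""
    writer = pq.ParquetWriter(parquet_path, SCHEMA, compression="zstd")
    rows = {"id": [], "name": [], "profile": []}

    def flush() -> None:
        # Write the accumulated rows as one Parquet row group, then reset the buffers.
        if rows["id"]:
            writer.write_table(pa.table(rows, schema=SCHEMA))
            for col in rows.values():
                col.clear()

    with gzip.open(xml_gz_path, "rb") as f:
        context = ET.iterparse(f, events=("start", "end"))
        _, root = next(context)  # the enclosing <labels> element
        for event, elem in context:
            if event == "end" and elem.tag == "label":
                rows["id"].append(int(elem.findtext("id") or 0))
                rows["name"].append(elem.findtext("name") or "")
                rows["profile"].append(elem.findtext("profile") or "")
                root.clear()  # drop processed elements so memory stays flat
                if len(rows["id"]) >= batch_size:
                    flush()
    flush()
    writer.close()


labels_to_parquet("discogs_labels.xml.gz", "labels.parquet")
```

Streaming instead of loading the whole document matters mostly for the releases dump, which is over 10 GB even compressed.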
In DuckDB, the following query will show the respective size of each column:

```sql
SELECT
    path_in_schema,
    type,
    encodings,
    compression,
    (total_compressed_size / 1024) AS compressed_size,
    (total_uncompressed_size / 1024) AS uncompressed_size
FROM parquet_metadata('file.parquet');
```
Results
Conversion speed
| Type     | Records    | Time   | Records / Second |
|----------|------------|--------|------------------|
| Labels   | 2,274,143  | 12.48s | 182,222          |
| Artists  | 9,174,834  | 63.44s | 144,713          |
| Masters  | 2,459,324  | 69.77s | 35,249           |
| Releases | 18,412,655 | 34m14s | 8,964            |
File size
| Type     | .xml.gz | Parquet | Difference |
|----------|---------|---------|------------|
| Labels   | 83M     | 72M     | -13.2%     |
| Artists  | 441M    | 397M    | -9.9%      |
| Masters  | 577M    | 537M    | -6.7%      |
| Releases | 10.74G  | 10.14G  | -5.5%      |
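And because Parquet is natively supported by DuckDB, the converted dumps can then be queried in place. A quick sketch with DuckDB's Python API, reusing the hypothetical file name from the example above:

```python
import duckdb

# Query the converted Parquet file directly; no import step required.
print(duckdb.sql("SELECT count(*) AS label_count FROM 'labels.parquet'").fetchall())
```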
0b5vr GLSL Techno Live Set - "0mix"
A 7-minute techno live set created entirely in GLSL shaders that fits in just 64KB. Yes, 64KB. This WebGL intro by 0b5vr was submitted to the Revision 2023 demoscene competition. Procedural visuals meets algorave meets extreme compression. My mind is blown.