2024-05-09

I take an irrational amount of pleasure in disabling notifications for apps that use them to send me marketing.

2024-05-08

I enjoyed reading Yuxuan’s article on whether GitHub Copilot increased their productivity. I personally don’t love Copilot but enjoy other AI-assisted tools like Cursor, which allows for use of more capable models than Copilot. It’s encouraging to see more folks adopting an unfiltered thought journal.
I read this post by Steph today and loved it. I want to try writing this concisely. I imagine it takes significant effort, but the results are beautiful, satisfying, and valuable. It’s a privilege to read a piece written by someone who values every word.
> llama 3-400B with multimodal capabilities and long context would put the nail in the coffin for OAI — anton (@abacaj) May 6, 2024

Having gotten more into using llama 7b and 30b lately, this take seems like it could hold water. Model inference still isn’t free when you scale a consumer app. Maybe I can use llama3 for all my personal use cases, but I still need infra to scale it.
I read Jason, Ivan and Charles’ blog post on Modal about fine tuning an embedding model. It’s a bit in the weeds of ML for me but I learn a bit more every time I read something new.
I played around with trying to run a Temporal worker on Modal. I didn’t do a ton of research upfront – I just kind of gave it a shot. I suspect this isn’t possible: both use Python magic to do the things they do. This is what I tried.

```python
import asyncio
import os

import modal
from temporalio import activity, workflow
from temporalio.client import Client, TLSConfig
from temporalio.worker import Worker


@activity.defn
async def my_activity(name: str) -> str:
    return f"Hello, {name}!"
```
I read this interesting article by Gajus about finetuning gpt-3.5-turbo. It was quite similar to my experience fine tuning a model to play Connections. A helpful takeaway was that after finetuning the model, you shouldn’t need to include the system prompt in future inference calls, so you can save on token cost. I also liked the suggestion to use a database to store training data; I had been wrangling jsonl files.
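The database-over-jsonl idea can be sketched with the standard library alone. This is a minimal sketch under my own assumptions (the schema and `export_jsonl` helper are hypothetical, not from the article): store each chat-format training example as a JSON row in SQLite, then export to the one-example-per-line JSONL format a finetuning job expects.

```python
import json
import sqlite3

# Hypothetical schema: one row per training example, messages stored as a JSON string.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE examples (id INTEGER PRIMARY KEY, messages TEXT)")

example = {
    "messages": [
        {"role": "system", "content": "You solve Connections puzzles."},
        {"role": "user", "content": "Group these 16 words into 4 categories."},
        {"role": "assistant", "content": "Group 1: ..."},
    ]
}
conn.execute("INSERT INTO examples (messages) VALUES (?)", (json.dumps(example),))
conn.commit()


def export_jsonl(conn: sqlite3.Connection) -> str:
    """Serialize all stored examples as JSONL: one JSON object per line."""
    rows = conn.execute("SELECT messages FROM examples ORDER BY id").fetchall()
    return "\n".join(row[0] for row in rows)
```

Querying and deduplicating rows in SQL is the payoff here; the JSONL file becomes a build artifact rather than the source of truth.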
About a month ago, I had been looking into creating a NL-to-SQL plugin for Datasette. Simon released a version of exactly that the next day, and I came across it in his article here. Hopefully I can find time to try it out in the next few days.
I did a refactor of my nix config following a pattern I learned from reading Davis’ setup. My two main uses for Nix/home-manager right now are installing and configuring programs. Some programs have Nix modules that allow their configuration to be written in Nix. Others don’t, but you can still use Nix to create a config file for the program to read. I do the latter with skhd and goku to create a karabiner.
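The two patterns look roughly like this in a home-manager config. This is a sketch with hypothetical values (my actual config differs): `programs.git` is a program with a first-class module, while `xdg.configFile` has Nix write a verbatim config file for a program (like skhd) that has no module.

```nix
{ config, pkgs, ... }:
{
  # Pattern 1: program with a home-manager module; config is written in Nix.
  programs.git = {
    enable = true;
    userName = "example";  # hypothetical value
  };

  # Pattern 2: no module; Nix just materializes the program's own config file.
  xdg.configFile."skhd/skhdrc".text = ''
    # hypothetical skhd binding
    cmd - return : open -a Terminal
  '';
}
```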
For me, invoking a language model through a playground (UI) interface is the most common approach. Occasionally, it can be helpful to use a CLI to pipe output directly into a model. For example:

```sh
git diff --staged | llm "write a commit message for these changes"
```

However, I am more often inclined to open a playground and paste in the bits and pieces of context I need. Maybe it’s that refinement and follow-ups are common enough that a CLI isn’t nearly as flexible.