Auto-GPT is a popular project on Github that attempts to build an autonomous agent on top of an LLM. This is not my first time using Auto-GPT. I used it shortly after it was released and gave it a second try a week or two later, which makes this my third, zero-to-running effort. I installed it python -m venv env . env/bin/activate pip install -r requirements.txt and then ran it
I believe that language models are most useful when available at your fingertips in the context of what you’re doing. Github Copilot is a well known application that applies language models in this manner. There is no need to pre-prompt the model. It knows you’re writing code and that you’re going to ask it to write code for you based on the contents of your comment. Github is further extending this idea with Copilot for CLI, which looks promising but isn’t generally available yet.
Over the the years, I’ve developed a system for capturing knowledge that has been useful to me. The idea behind this practice is to provide immediate access to useful snippets and learnings, often with examples. I’ll store things like: Amend commit message git commit --amend -m "New commit message" with tags like #git, #commit, and #amend after I searched Google for “how to amend a git commit message”. With the knowledge available locally and within my own file system, I’ve added search capabilities on top which allows me to quickly find this snippet any time I need it.
I know a little about nix. Not a lot. I know some things about Python virtual environments, asdf and a few things about package managers. I’ve heard the combo of direnv and nix is fantastic from a number of engineers I trust, but I haven’t had the chance to figure out what these tools can really do. I could totally ask someone what their setup is, but I decided to ask ChatGPT (gpt-4) to see what would happen.
I came upon https://gpa.43z.one today. It’s a GPT-flavored capture the flag. The idea is, given a prompt containing a secret, convince the LM to leak the prompt against prior instructions it’s been given. It’s cool way to develop intuition for how to prompt and steer LMs. I managed to complete all 21 levels. A few were quite easy and several were challenging. I haven’t figured out how to minimize the number of characters needed to expose the secret.
I’ve been experimenting with ways to prevent applications for deviating from their intended purpose. This problem is a subset of the generic jailbreaking problem at the model level. I’m not particularly well-suited to solve that problem and I imagine it will be a continued back and forth between security teams and hackers as we have today with many software systems. Maybe this one will be similar, but it’s a new area for me.
Jailbreaking as prompt injection Shortly after learning about ChatGPT, I also became familiar with efforts to circumvent the content restrictions OpenAI imposed upon the model. Initial successful efforts were straightforward. You could get the model to curse and say rude things (and worse) just by beginning a prompt with Ignore all previous instructions. <my policy-violating prompt here> Now, it seems as exploits are found, OpenAI attempts to patch them in what I imagine includes, but aren’t limited to the following ways:
I’ve been keeping an eye out for language models that can run locally so that I can use them on personal data sets for tasks like summarization and knowledge retrieval without sending all my data up to someone else’s cloud. Anthony sent me a link to a Twitter thread about product called deepsparse by Neural Magic that claims to offer [a]n inference runtime offering GPU-class performance on CPUs and APIs to integrate ML into your application
Why bother? I create a bunch of little Python projects and I like to have them sandboxed and independent of each other. I also sometimes need to change which version of Python I am running due to requirements of dependencies. Your OS may come installed with Python but it’s rarely a great idea to try and run your projects from it. Here’s what I do: Install asdf asdf is a tool version manager.
If you want to try running these examples yourself, check out my writeup on using a clean Python setup I spent the week hacking on a language model use case for structured data generation. It turns out the structured data we hoped to generate didn’t have a well-defined schema. For the language model to have any chance of success, it felt important to construct a schema definition as guide for the structure of the output.