Further investigation with Open Interpreter today reaffirmed certain strengths but also revealed a number of weaknesses. The tool is excellent at parsing structured data like JSON or CSV, doing analysis with tools like pandas and numpy, and plotting the results with matplotlib. However, it falls short on more complex data-fetching tasks. It seems to struggle to scrape websites or make use of less common libraries, at least when I tried without providing any additional documentation.

At one point, I had it scrape a relatively simple HTML website. It was able to parse the page into a dataframe and then clean up the data to the point where I think I was close to being ready to do analysis. Unfortunately, the REPL seemed to hang there. I’m not sure whether I maxed out the model’s token context or something else happened, but I had to hard-exit the process. There isn’t an easy way to “jump back to where you were in a session,” and I didn’t have the patience to try again from the start. I need to look into whether some kind of step memoization is possible.
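For context, the scrape-and-clean step I was nudging it toward looks roughly like the sketch below. The URL and the column handling are placeholders, not the actual site or the code it generated, just the shape of the task:

```python
# Minimal sketch of the scrape-and-clean step, assuming the page exposes a
# plain <table>. The URL is a placeholder.
import pandas as pd

url = "https://example.com/some-table-page"  # placeholder

# read_html returns a list of DataFrames, one per <table> on the page
tables = pd.read_html(url)
df = tables[0]

# Typical cleanup: normalize column names, drop empty rows, infer dtypes
df.columns = [str(c).strip().lower().replace(" ", "_") for c in df.columns]
df = df.dropna(how="all")
df = df.convert_dtypes()

print(df.head())
```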
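To pin down what I mean by step memoization: something like caching each intermediate result to disk so a crashed session can resume where it left off instead of redoing every step. Whether Open Interpreter supports anything like this natively, I don’t know yet; the sketch below is plain Python with a made-up `cache_step` helper, purely to illustrate the idea:

```python
# Hypothetical step memoization: pickle each step's output so a rerun can
# skip completed steps. `cache_step` is an illustrative helper, not an
# Open Interpreter feature.
import pickle
from functools import wraps
from pathlib import Path

CACHE_DIR = Path(".step_cache")
CACHE_DIR.mkdir(exist_ok=True)

def cache_step(name):
    """Reuse the wrapped step's pickled result if it already exists on disk."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            path = CACHE_DIR / f"{name}.pkl"
            if path.exists():
                with open(path, "rb") as f:
                    return pickle.load(f)
            result = fn(*args, **kwargs)
            with open(path, "wb") as f:
                pickle.dump(result, f)
            return result
        return wrapper
    return decorator

@cache_step("scraped_table")
def scrape_table(url):
    import pandas as pd
    return pd.read_html(url)[0]
```

If the session dies after the scrape, the next run would load the pickled dataframe instead of hitting the site again, which is roughly the “jump back to where you were” behavior I was missing.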