The parameters in the OpenAI playground have always been a bit of a mystery to me. I wasn't sure when to turn on the frequency penalty, decrease the "top P", or use the "best of" parameter that triples the cost.
This tool lets you experiment with the impact of various GPT-3 parameters on the generated results. This should help you get a better understanding of how to adjust the parameters for your specific needs.
I've pre-calculated completions across different parameters using four different tasks: generating startup ideas, copywriting, summarization, and brainstorming. For each set of parameters, I've generated five different completions to give you an idea of the variety of possible outcomes.
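For context, here is a minimal sketch of what such a pre-calculation loop might look like, using the legacy openai Python SDK (pre-1.0). The model name, prompts, and parameter grid are illustrative assumptions, not the exact setup behind the tool:

```python
# Illustrative sketch: pre-computing completions over a small parameter grid
# with the legacy openai Python SDK (openai<1.0). The model, prompts, and
# grid values are assumptions, not the tool's actual configuration.
import itertools
import json

import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supply your own key

prompts = {
    "startup_ideas": "Generate three startup ideas in the climate space:",
    "copywriting": "Write a short tagline for a note-taking app:",
}
temperatures = [0.0, 0.7, 1.0]
frequency_penalties = [0.0, 1.0]

results = []
for task, prompt in prompts.items():
    for temperature, frequency_penalty in itertools.product(
        temperatures, frequency_penalties
    ):
        response = openai.Completion.create(
            engine="davinci",
            prompt=prompt,
            max_tokens=64,
            temperature=temperature,
            frequency_penalty=frequency_penalty,
            n=5,  # five completions per parameter set, as in the post
        )
        results.append({
            "task": task,
            "temperature": temperature,
            "frequency_penalty": frequency_penalty,
            "completions": [choice.text for choice in response.choices],
        })

with open("completions.json", "w") as f:
    json.dump(results, f, indent=2)
```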
[Edited: the tool is currently disabled, since it got out of sync pretty quickly.]
This is a report describing the variety of Python libraries used with FastAPI in open source projects. The repositories were sourced using GitHub code search. I analyzed requirements.txt files as the most common source of dependencies. The report includes popular packages, FastAPI versions and dependency clusters.
You can use the report to explore new libraries or to choose between a few dependency options. If you want to discover how exactly a library is used with FastAPI, use GitHub code search: fastapi {package_name} filename:requirements.txt
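For reference, here is a rough sketch of the aggregation step, assuming the matching requirements.txt files have already been downloaded into a local directory. The directory name and the simplified parsing rules are assumptions, not the exact pipeline behind the report:

```python
# Rough sketch of aggregating requirements.txt files into package counts.
# Assumes the files were already collected (e.g. via GitHub code search)
# into a local directory; the parsing here is deliberately simplified.
import collections
import pathlib
import re

requirement_files = pathlib.Path("requirements_dumps").glob("*.txt")  # assumed layout
package_counts = collections.Counter()

for path in requirement_files:
    for line in path.read_text(errors="ignore").splitlines():
        line = line.strip()
        if not line or line.startswith(("#", "-")):  # skip comments and pip options
            continue
        # Keep only the package name, dropping version specifiers and extras.
        name = re.split(r"[\[<>=!~; ]", line, maxsplit=1)[0].lower()
        if name:
            package_counts[name] += 1

for name, count in package_counts.most_common(20):
    print(f"{name}: {count}")
```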
Inspired by Sophie Koonin's blog post "Everything I googled in a week as a professional software engineer", I decided to open up the entire history of my Google queries related to Python. The list includes every query containing the word "python", "django", or "flask". I'm publishing it in the hope that it will be helpful to someone else.
If I had scrolled through such a list in 2014, I would have saved a lot of time: for example, by learning about useful libraries ("arrow python", "shiny for python", "dash python", "opencv python") or about the precise names of concepts I didn't know how to search for ("infinity scrolling django", "sentiment analysis python"). Try scrolling through the list yourself and running any queries you find curious.
I exported the raw queries by running "SELECT text, time FROM google_searches WHERE text LIKE '%python%' ORDER BY time" on an SQLite database that I filled with Google Takeout data using the Bionic CLI [1].
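If you want to reproduce the export, here is a small sketch using Python's built-in sqlite3 module; the database filename and the CSV output are assumptions layered on top of the query above:

```python
# Small sketch of the export step with Python's built-in sqlite3 module.
# The database filename is an assumption; the table and query match the post.
import csv
import sqlite3

conn = sqlite3.connect("bionic.db")  # assumed name of the Bionic-built database
rows = conn.execute(
    "SELECT text, time FROM google_searches "
    "WHERE text LIKE '%python%' ORDER BY time"
).fetchall()
conn.close()

with open("python_queries.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["text", "time"])
    writer.writerows(rows)
```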
The data spans six years, from 2014 (when I was 13) to 2020 (when I was 19). I'm publishing the queries unchanged, apart from translating some early ones from Russian to English. The whole dataset is as messy and unstructured as my learning process was: driven either by random curiosity or by a desire to launch some project.
I encourage everyone to look through their full Google Search history with a keyword filter. For me, some queries triggered particularly happy old memories of projects I was hacking on. Other queries revealed a multi-year narrative of my learning process.
[1] Bionic is a tool my friend and I created to load personal data exports from various services into a single SQLite database.