@weapp
weapp / nginx.conf
Created August 4, 2014 17:21
Return common errors as JSON in Nginx
error_page 500 /500.html;
location /500.html {
    # Serve the error body as JSON instead of the default HTML error page.
    default_type application/json;
    return 500 '{"error": {"status_code": 500, "status": "Internal Server Error"}}';
}

error_page 502 /502.html;
location /502.html {
    default_type application/json;
    return 502 '{"error": {"status_code": 502, "status": "Bad Gateway"}}';
}
@kalomaze
kalomaze / llm_samplers_explained.md
Last active January 14, 2025 20:45
LLM Samplers Explained

LLM Samplers Explained

Every time a large language model makes a prediction, each of the thousands of tokens in its vocabulary is assigned some probability, from almost 0% to almost 100%. There are different ways to decide which token to pick from those predictions. This process is known as "sampling", and there are various strategies you can use, which I will cover here.
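To make that concrete, here is a minimal sketch (using NumPy and a made-up four-token vocabulary, not any particular model's API) of how raw scores are turned into probabilities and then sampled:

```python
import numpy as np

# Hypothetical raw scores (logits) for a tiny four-token vocabulary.
logits = np.array([2.0, 1.0, 0.5, -1.0])
vocab = ["the", "a", "cat", "zebra"]

# Softmax converts the logits into a probability distribution that sums to 1.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# "Sampling" is simply drawing one token according to those probabilities.
rng = np.random.default_rng(0)
chosen = rng.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", chosen)
```

Every sampler described below is, in the end, just a different way of reshaping or truncating that probability distribution before the draw.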

OpenAI Samplers

Temperature

  • Temperature is a way to control the overall confidence of the model's scores (the logits). If you use a value lower than 1.0, the relative differences between token scores become larger (more deterministic), and if you use a value larger than 1.0, the relative differences become smaller (less deterministic). A small sketch follows after this list.
  • A temperature of 1.0 leaves the scores unchanged, so it is the original distribution the model was trained to optimize for.
  • Graph demonstration with voiceover: https://files.catbox.moe/6ht56x.mp4
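
Here is a minimal sketch of temperature scaling (again with NumPy and made-up logits, not any specific library's implementation): the logits are divided by the temperature before the softmax, so values below 1.0 sharpen the distribution and values above 1.0 flatten it.

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    # Dividing the logits by the temperature rescales their differences:
    # T < 1.0 exaggerates the gaps (sharper, more deterministic),
    # T > 1.0 shrinks them (flatter, more random), T = 1.0 leaves them unchanged.
    scaled = logits / temperature
    exps = np.exp(scaled - scaled.max())
    return exps / exps.sum()

logits = np.array([2.0, 1.0, 0.5, -1.0])
for t in (0.5, 1.0, 1.5):
    print(t, softmax_with_temperature(logits, t).round(3))
```

Running this shows the most likely token's probability growing at 0.5 and shrinking at 1.5 relative to the unchanged 1.0 case.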