
@will-thompson-k
will-thompson-k / nvidia_install.sh
Created September 21, 2023 16:17
If nvidia-smi stops working, purge and reinstall the NVIDIA drivers:
# Remove all installed NVIDIA driver packages.
sudo apt-get remove --purge '^nvidia-.*'
# Reinstall the recommended drivers for the detected hardware.
sudo ubuntu-drivers autoinstall
# Reboot so the new kernel modules are loaded.
sudo reboot
@veekaybee
veekaybee / normcore-llm.md
Last active December 25, 2024 20:39
Normcore LLM Reads

Anti-hype LLM reading list

Goals: Add links that give reasonable, clear explanations of how things work. No hype and, where possible, no vendor content. Practical first-hand accounts of models in production are eagerly sought.

Foundational Concepts


Pre-Transformer Models

@will-thompson-k
will-thompson-k / pl_multigpu_infer.py
Created August 15, 2023 15:09
Multi-GPU inference via DDP for pytorch-lightning/lightning-ai models
import pytorch_lightning as pl
import torch
...
# Override this hook on the pytorch-lightning model (1.x-style hook signature that still receives `results`).
def on_predict_epoch_end(self, results):
    # Gather each device's predictions onto every device. LightningModule.all_gather
    # uses the DDP process group created by pl.Trainer, so no explicit world size or
    # device handle is needed here.
    results = self.all_gather(results[0])
    # Concatenate the gathered shards on the CPU.
    results = torch.cat([x.cpu() for x in results], dim=1)
    # NOTE: the gathered output does not preserve the original input order.
@neubig
neubig / dispatch_openai_requests.py
Last active February 19, 2024 17:55
A simple script to get results from the OpenAI Asynchronous API
# NOTE:
# You can find an updated, more robust and feature-rich implementation
# in Zeno Build
# - Zeno Build: https://github.com/zeno-ml/zeno-build/
# - Implementation: https://github.com/zeno-ml/zeno-build/blob/main/zeno_build/models/providers/openai_utils.py
import openai
import asyncio
from typing import Any
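
# The preview cuts off after the imports. Below is a minimal sketch (not the gist's
# exact code) of dispatching concurrent requests, reusing the imports above and
# assuming the pre-1.0 `openai` Python client, which exposed
# `openai.ChatCompletion.acreate` for async calls. The function name, parameters,
# and example prompts are illustrative.

async def dispatch_openai_requests(
    messages_list: list[list[dict[str, Any]]],
    model: str = "gpt-3.5-turbo",
    temperature: float = 0.0,
) -> list[Any]:
    """Send one chat-completion request per message list and await them all concurrently."""
    tasks = [
        openai.ChatCompletion.acreate(
            model=model,
            messages=messages,
            temperature=temperature,
        )
        for messages in messages_list
    ]
    # asyncio.gather runs the requests concurrently and returns responses in input order.
    return await asyncio.gather(*tasks)

if __name__ == "__main__":
    responses = asyncio.run(
        dispatch_openai_requests(
            messages_list=[
                [{"role": "user", "content": "Hello!"}],
                [{"role": "user", "content": "What is 2 + 2?"}],
            ]
        )
    )
    for r in responses:
        print(r["choices"][0]["message"]["content"])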