from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base model in bfloat16 and move it to the GPU.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Apply the yarn-art LoRA on top of the base weights.
pipeline.load_lora_weights(
    "sayakpaul/yarn_art_lora_flux", weight_name="pytorch_lora_weights.safetensors"
)

image = pipeline(
    "a puppy in a pond, yarn art style", guidance_scale=3.5, height=768
).images[0]
image.save("yarn.png")
from typing import Callable, Optional, Union

import mlx.nn as nn
from transformers import PreTrainedTokenizer
from mlx_lm.tokenizer_utils import TokenizerWrapper

def generate_speculative(
    model: nn.Module,
    draft_model: nn.Module,
    tokenizer: Union[PreTrainedTokenizer, TokenizerWrapper],
    prompt: str,
    max_tokens: int = 100,
    verbose: bool = False,
    formatter: Optional[Callable] = None,
    **kwargs,
):
    ...
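# The body is elided in the excerpt above. Below is a rough sketch of what
# a speculative decoding loop does: the draft model proposes a few greedy
# tokens, the target model verifies them in one forward pass, and the
# longest matching prefix is accepted. `num_draft_tokens`, the greedy
# accept rule, and the cache-free forward passes are illustrative
# assumptions, not mlx_lm's actual implementation.
import mlx.core as mx

def speculative_sketch(model, draft_model, tokenizer, prompt,
                       max_tokens=100, num_draft_tokens=4):
    tokens = list(tokenizer.encode(prompt))
    generated = []
    while len(generated) < max_tokens:
        ctx = tokens + generated
        # 1. Draft model proposes a short greedy continuation
        #    (recomputed without a KV cache, for clarity).
        draft = []
        for _ in range(num_draft_tokens):
            logits = draft_model(mx.array([ctx + draft]))
            draft.append(mx.argmax(logits[0, -1, :]).item())
        # 2. Target model scores context + draft in a single pass.
        logits = model(mx.array([ctx + draft]))
        # 3. Accept draft tokens while they match the target's argmax;
        #    on the first mismatch, take the target's token and restart.
        for i, tok in enumerate(draft):
            target_tok = mx.argmax(logits[0, len(ctx) + i - 1, :]).item()
            generated.append(target_tok)
            if target_tok != tok:
                break
    return tokenizer.decode(generated[:max_tokens])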
# /// script
# dependencies = [
#     "SpeechRecognition",
#     "mlx-whisper",
#     "pyaudio",
# ]
# ///
import speech_recognition as sr
import numpy as np
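# Hedged sketch of the rest of the script: capture one phrase from the
# microphone with SpeechRecognition, then transcribe it locally with
# mlx-whisper. The checkpoint name is an assumption; any mlx-community
# Whisper repo should work.
import mlx_whisper

recognizer = sr.Recognizer()
with sr.Microphone(sample_rate=16000) as source:
    print("Speak now...")
    audio = recognizer.listen(source)

# Convert the raw 16-bit PCM bytes into the float32 array Whisper expects.
samples = np.frombuffer(audio.get_raw_data(), dtype=np.int16).astype(np.float32) / 32768.0
result = mlx_whisper.transcribe(samples, path_or_hf_repo="mlx-community/whisper-turbo")
print(result["text"])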
Begin by enclosing all thoughts within <thinking> tags, exploring multiple angles and approaches.
Break down the solution into clear steps within <step> tags. Start with a 20-step budget, requesting more for complex problems if needed.
Use <count> tags after each step to show the remaining budget. Stop when reaching 0.
Continuously adjust your reasoning based on intermediate results and reflections, adapting your strategy as you progress.
Regularly evaluate progress using <reflection> tags. Be critical and honest about your reasoning process.
Assign a quality score between 0.0 and 1.0 using <reward> tags after each reflection. Use this to guide your approach:
- 0.8+: Continue current approach
- 0.5-0.7: Consider minor adjustments
- Below 0.5: Seriously consider backtracking and trying a different approach
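The block above reads as a system prompt. A minimal sketch of wiring it into a local chat model with mlx_lm follows; the model name is reused from the snippet further down, and saving the instructions to reasoning_prompt.txt is an assumption:

from mlx_lm import load, generate

SYSTEM = open("reasoning_prompt.txt").read()  # the instructions above
model, tokenizer = load("mlx-community/meta-Llama-3.1-8B-Instruct-4bit")
messages = [
    {"role": "system", "content": SYSTEM},
    {"role": "user", "content": "Prove that the square root of 2 is irrational."},
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(generate(model, tokenizer, prompt=prompt, max_tokens=1024))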
import os

import mlx.core as mx
from mlx_lm import load, generate

# Use MLX's own type stubs as in-context documentation for the model.
filename = os.path.join(os.path.dirname(mx.__file__), "core/__init__.pyi")
with open(filename, "r") as fid:
    prompt = fid.read()

prompt += "\nHow do you write a self-attention layer using the above API in MLX?"

model, tokenizer = load("mlx-community/meta-Llama-3.1-8B-Instruct-4bit")
response = generate(model, tokenizer, prompt=prompt, verbose=True)
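# For reference, a minimal answer: a single-head self-attention layer in
# MLX might look like the sketch below (no masking, no KV cache; those
# simplifications are mine, not a constraint of the MLX API).
import math
import mlx.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, dims: int):
        super().__init__()
        self.q_proj = nn.Linear(dims, dims)
        self.k_proj = nn.Linear(dims, dims)
        self.v_proj = nn.Linear(dims, dims)
        self.out_proj = nn.Linear(dims, dims)

    def __call__(self, x: mx.array) -> mx.array:
        # x: (batch, length, dims)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        scores = (q @ k.transpose(0, 2, 1)) / math.sqrt(q.shape[-1])
        return self.out_proj(mx.softmax(scores, axis=-1) @ v)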
{
  "last_node_id": 469,
  "last_link_id": 1401,
  "nodes": [
    {
      "id": 16,
      "type": "KSamplerSelect",
      "pos": [
        -280,
        20
macOS Live Text has a very good quality/speed tradeoff.
Compared to Tesseract, it has much higher quality and is up to 3x as fast.
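For scripting this from Python, the Vision framework exposes the recognizer that appears to back Live Text. A minimal pyobjc sketch (assuming pyobjc-framework-Vision is installed; the Live Text equivalence is an inference, not something Apple documents):

import Vision
from Foundation import NSURL

def ocr(path: str) -> list[str]:
    # The accurate recognition level trades a little speed for quality.
    request = Vision.VNRecognizeTextRequest.alloc().init()
    request.setRecognitionLevel_(Vision.VNRequestTextRecognitionLevelAccurate)
    handler = Vision.VNImageRequestHandler.alloc().initWithURL_options_(
        NSURL.fileURLWithPath_(path), None
    )
    handler.performRequests_error_([request], None)
    return [obs.topCandidates_(1)[0].string() for obs in request.results()]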
Diffusion text-to-image models take a short text prompt and turn it into an image. Here are some prompts I've written that worked well:
{"prompts":["scientific rendering of a black hole whose accretion disk is a spiders web, a consciousness holographically projected in 1D space from the bulk of the void", "a tesseract hypercube in an illuminated glow, a tesseract suspended above the dint of reality", "russian cosmonauts driving a rover on the lunar surface in the style of Lucien Rudaux", "symbol of the phoenix, a phoenix rising over all the sentences that have ever been written", "a yin yang symbol where each half is a black snake and a white snake devouring each others tails"]}
Your task is to write 5 more prompts in the way you infer I'd write them from these examples, but based on a combination of subject, style, and setting. For example:
I'm using backtranslation to create a synthetic dataset of bad/fallacious/disingenuous arguments, with the bad parts labeled, so I can train a classifier. I'm seeking a reliable and flexible generation method for these arguments and have settled on something like the following:
Model making an argument as a two-step process, roughly analogous to type checking followed by logic checking. In the Phil Tetlock/Daniel Kahneman paradigm this would be the choice of a reference class to get an outside view/prior, then mental modeling of the specific logical structure to predict counterfactual outcomes in various cases (a sketch follows the list):
- Reference Classes: Does this argument contradict the behavior of a working comparable system, or an agreed-upon set of norms used elsewhere in society?
- Mental Models: Does this argument imply a model that captures the behavior of X correctly?
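A minimal sketch of the two checks as prompt templates, where llm stands in for any completion function (the template wording and helper names are illustrative, not the actual pipeline):

REFERENCE_CLASS_CHECK = (
    "Does the following argument contradict the behavior of a working "
    "comparable system, or an agreed-upon set of norms used elsewhere in "
    "society? Name the reference class you compared against.\n\n{argument}"
)
MENTAL_MODEL_CHECK = (
    "Does the following argument imply a model that captures the behavior "
    "of its subject correctly? Trace the causal steps it assumes and flag "
    "any that fail.\n\n{argument}"
)

def label_argument(argument: str, llm) -> dict:
    # Run both checks and keep the raw critiques as labels for training.
    return {
        "argument": argument,
        "reference_class_check": llm(REFERENCE_CLASS_CHECK.format(argument=argument)),
        "mental_model_check": llm(MENTAL_MODEL_CHECK.format(argument=argument)),
    }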
"Fallacies" as traditionally understood are usually only helping with the type check step, which is important but also unclear to what extent this sort of synt