@celsowm
Created November 13, 2024 13:46
test_transformers.py
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name = "Qwen/Qwen2.5-0.5B-Instruct"

# Load the model and tokenizer; device_map="auto" places the model on GPU if one is available
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

messages = [
    {
        "role": "user",
        "content": "Qual a capital do Brasil?",  # "What is the capital of Brazil?"
    },
]

outputs = pipe(
    messages,
    max_new_tokens=512,
)

# generated_text holds the full chat history; the last entry is the assistant's reply
print(outputs[0]["generated_text"][-1]["content"])
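When the pipeline is called with a list of chat messages, each result's `generated_text` field contains the whole conversation, with the model's reply appended as the last message. A minimal sketch of that extraction against a mocked output (the reply text here is illustrative, not an actual model response):

```python
# Mocked pipeline output: a list with one dict whose "generated_text"
# holds the chat history, ending with the assistant's reply.
# The reply string below is illustrative, not a real model response.
mock_outputs = [
    {
        "generated_text": [
            {"role": "user", "content": "Qual a capital do Brasil?"},
            {"role": "assistant", "content": "A capital do Brasil é Brasília."},
        ]
    }
]

# Same extraction as in the script above: take the last message's content
reply = mock_outputs[0]["generated_text"][-1]["content"]
print(reply)
```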