# First, you should install flickrapi
# pip install flickrapi
import flickrapi
import urllib.request
from PIL import Image

# Flickr api access key
flickr = flickrapi.FlickrAPI('c6a2c45591d4973ff525042472446ca2', '202ffe6f387ce29b', cache=True)

keyword = 'siberian husky'

photos = flickr.walk(text=keyword,
                     tag_mode='all',
                     tags=keyword,
                     extras='url_c',
                     per_page=100,  # maybe you can try different numbers..
                     sort='relevance')

urls = []
for i, photo in enumerate(photos):
    print(i)
    url = photo.get('url_c')
    if url is not None:  # not every photo exposes the url_c size
        urls.append(url)

    # get 50 urls
    if len(urls) >= 50:
        break

print(urls)

# Download image from the url and save it to '00001.jpg'
urllib.request.urlretrieve(urls[1], '00001.jpg')

# Resize the image and overwrite it
image = Image.open('00001.jpg')
image = image.resize((256, 256), Image.LANCZOS)  # ANTIALIAS was removed in Pillow 10
image.save('00001.jpg')
Can I use this for my repo with a small adaptation?
Hi,
thanks for the code! I'm quite new to Python, and when I run the script I only get the first url downloaded. Is there a way to download the images from all of the urls, like with an array or a loop?
Also, can you say whether it's possible to change the path where images are downloaded?
Thanks a lot!
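One minimal way to sketch that loop (assuming the `urls` list collected by the script above; `out_dir` and the helper names here are just illustrative, not part of the original script):

```python
import os
import urllib.request

def target_path(out_dir, i):
    # Zero-padded file name inside the chosen folder, e.g. images/00003.jpg
    return os.path.join(out_dir, f'{i:05d}.jpg')

def download_all(urls, out_dir='images'):
    # Create the target folder if it does not exist yet
    os.makedirs(out_dir, exist_ok=True)
    saved = []
    for i, url in enumerate(urls):
        if url is None:  # walk() can yield photos without a url_c size
            continue
        path = target_path(out_dir, i)
        urllib.request.urlretrieve(url, path)
        saved.append(path)
    return saved
```

Call it as `download_all(urls, out_dir='my_dataset')`; changing `out_dir` changes the download path.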
You can check out mine, I made some changes for what I needed at the time.
https://github.com/lenoqt/PyTorch/blob/main/flickrdownloader.py
Hi,
cool stuff . . . how would I download photos from an album (a "set" in Flickr wording, I assume)? Actually I want to check a specific set for updates and download them if there are any; I'd like to run this as a cronjob.
(I'm posting this again; I assume I posted it in the wrong place the first time.)
cheers
T
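flickrapi exposes a `walk_set()` iterator for photosets, so one possible sketch (the function name and folder are illustrative, and skipping files that already exist on disk is what makes it safe to re-run from a cronjob):

```python
import os
import urllib.request

def sync_set(flickr, photoset_id, out_dir='album'):
    """Download photos from a Flickr set that aren't on disk yet.

    `flickr` is an authenticated flickrapi.FlickrAPI instance; files
    already present from a previous run are skipped.
    """
    os.makedirs(out_dir, exist_ok=True)
    new_files = []
    for photo in flickr.walk_set(photoset_id, extras='url_c'):
        url = photo.get('url_c')
        if url is None:  # no url_c size available for this photo
            continue
        path = os.path.join(out_dir, f"{photo.get('id')}.jpg")
        if os.path.exists(path):  # downloaded on a previous run
            continue
        urllib.request.urlretrieve(url, path)
        new_files.append(path)
    return new_files
```

A crontab entry could then just call a script that runs `sync_set(flickr, 'your_photoset_id')` periodically.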
If I just want to download some images at random from Flickr, without explicitly searching for a specific keyword, how can I do it?
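One way (a sketch, not tested against the live API): the `photos.getRecent` endpoint lists the newest public uploads without any search text, and with flickrapi's default etree response format the urls can be pulled out like this — the helper name is just illustrative:

```python
def recent_urls(flickr, n=100):
    # photos.getRecent needs no keyword; it returns the latest public uploads
    rsp = flickr.photos.getRecent(extras='url_c', per_page=n)
    photos = rsp.find('photos')
    # Not every photo exposes the url_c size, so filter out the misses
    return [p.get('url_c') for p in photos if p.get('url_c')]
```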
Hi, here's a fix for the AttributeError you may hit when running this script on Python 3:
https://stackoverflow.com/questions/17960942/attributeerror-module-object-has-no-attribute-urlretrieve
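For reference, that error comes from Python 3 moving `urlretrieve` into the `urllib.request` module; the fix is a one-liner:

```python
# Python 2 had urllib.urlretrieve; in Python 3 it lives in urllib.request
from urllib.request import urlretrieve

# usage is unchanged: urlretrieve(url, filename)
```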
Hi!
I've been trying to create a dataset for my final project, and your code has been the most helpful thing I've found. I understand it's older code, but would you be willing to share how one might download the results into a specific folder? Also, when I try the resize option on more than one image, using the fname, it doesn't work :( it says fname has no attribute read. I would greatly appreciate any help.
Both things are doable: for the path you can adapt it using os.path and join it with the f-string in the script; for changing dimensions or otherwise manipulating the image you can use PIL.
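A sketch of both fixes together (the folder name and helper are illustrative; note that `Image.open` wants the path string of a file that has already been fully downloaded, which is the usual cause of a "no attribute read" error when something else is passed in):

```python
import os
from PIL import Image

def save_resized(fname, out_dir='dataset', size=(256, 256)):
    # Open the downloaded file by its path string, resize, save into out_dir
    image = Image.open(fname)
    image = image.resize(size, Image.LANCZOS)
    os.makedirs(out_dir, exist_ok=True)
    out_path = os.path.join(out_dir, os.path.basename(fname))
    image.save(out_path)
    return out_path
```

Download each image first with `urllib.request.urlretrieve(url, fname)`, then call `save_resized(fname)` on the saved file.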
I think it would work for you:

# pip install flickrapi
import flickrapi
import urllib.request
from PIL import Image
from tqdm import tqdm

# Flickr api access key
flickr = flickrapi.FlickrAPI('c6a2c45591d4973ff525042472446ca2', '202ffe6f387ce29b', cache=True)

N_MAX = 60000
KEYWORD = 'cat'

photos = flickr.walk(text=KEYWORD,
                     tag_mode='all',
                     tags=KEYWORD,
                     extras='url_c',
                     per_page=100,  # maybe you can try different numbers..
                     sort='relevance')

# Collect up to N_MAX urls (skip photos without a url_c size)
urls = []
for photo in tqdm(photos):
    url = photo.get('url_c')
    if url is not None:
        urls.append(url)
    if len(urls) >= N_MAX:
        break

# Download each image from its url, resize it and overwrite it
RESIZE_OPTION = Image.LANCZOS  # or Image.NEAREST, Image.BICUBIC
for i, url in enumerate(urls):
    fname = f'img_{KEYWORD}_{i}.jpg'
    urllib.request.urlretrieve(url, fname)
    image = Image.open(fname)
    image = image.resize((256, 256), RESIZE_OPTION)
    image.save(fname)