It's in English, but I recommend the "Markdown Guide" (https://www.markdownguide.org/), which covers Markdown usage in more detail. ^^
Oh, and if you ever feel Markdown alone isn't expressive enough, you can also mix in HTML tags.
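For example, here is a collapsible section written with plain HTML tags inside Markdown (a minimal illustration of my own; `<details>` is just one common case that most Markdown renderers, including GitHub's, pass through):

```html
<!-- HTML inside Markdown: a collapsible section -->
<details>
  <summary>Click to expand</summary>
  Content that Markdown alone cannot hide.
</details>
```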
#include <iostream>
#include <fstream>
#include <vector>
#include <cstdint>
#include <memory>
#include <jpeglib.h>

using namespace std;
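The C++ fragment above gets only as far as its libjpeg includes. As a rough sketch of the same task (decode a JPEG into a raw pixel array), here is a Python stand-in using Pillow rather than libjpeg directly; the file name is a placeholder:

```python
# Decode a JPEG into an RGB numpy array (Pillow stand-in for libjpeg)
from PIL import Image
import numpy as np

img = np.asarray(Image.open("photo.jpg").convert("RGB"))  # "photo.jpg" is hypothetical
print(img.shape)  # (height, width, 3)
```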
Collection of License badges for your Project's README file.
This list includes the most common open source and open data licenses.
Easily copy and paste the code under the badges into your Markdown files.
Translations: (No guarantee that the translations are up-to-date)
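For instance, an MIT license badge in Markdown typically looks like the following (one common shields.io variant; the exact snippet for each license is in the list itself):

```markdown
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
```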
/*
 * Read video frame with FFmpeg and convert to OpenCV image
 *
 * Copyright (c) 2016 yohhoy
 */
#include <iostream>
#include <vector>
// FFmpeg
extern "C" {
#include <libavformat/avformat.h>
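The fragment is cut off before the decoding loop. As a rough illustration of the same task (not yohhoy's libav code), here is a hedged Python sketch using OpenCV's `VideoCapture`, which commonly wraps FFmpeg for decoding; the input file name is a placeholder:

```python
# Grab the first frame of a video via OpenCV (FFmpeg-backed on most builds)
import cv2

cap = cv2.VideoCapture("input.mp4")  # "input.mp4" is hypothetical
ok, frame = cap.read()               # frame is a BGR numpy array
if ok:
    cv2.imwrite("first_frame.png", frame)
cap.release()
```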
{0: 'tench, Tinca tinca',
 1: 'goldfish, Carassius auratus',
 2: 'great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias',
 3: 'tiger shark, Galeocerdo cuvieri',
 4: 'hammerhead, hammerhead shark',
 5: 'electric ray, crampfish, numbfish, torpedo',
 6: 'stingray',
 7: 'cock',
 8: 'hen',
 9: 'ostrich, Struthio camelus',
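A mapping like this is typically used to turn a model's predicted class index back into a readable label. A minimal sketch, assuming the dict above is bound to a name like `idx2label` (the name is my assumption):

```python
# Look up a human-readable label from a predicted ImageNet class index.
idx2label = {0: 'tench, Tinca tinca', 1: 'goldfish, Carassius auratus'}  # truncated copy of the dict above
pred_idx = 1  # e.g. logits.argmax() from an ImageNet classifier
print(idx2label[pred_idx])  # 'goldfish, Carassius auratus'
```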
webm
mkv
flv
vob
ogv
ogg
drc
gifv
mng
mov
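A list like this usually serves as a whitelist when scanning a directory for video files. A minimal sketch (the set mirrors the list above; the function name is my own):

```python
from pathlib import Path

VIDEO_EXTS = {"webm", "mkv", "flv", "vob", "ogv", "ogg", "drc", "gifv", "mng", "mov"}

def is_video(path: str) -> bool:
    # compare the lowercased extension, without its leading dot
    return Path(path).suffix.lstrip(".").lower() in VIDEO_EXTS
```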
#!/bin/bash
#
# script to extract ImageNet dataset
# ILSVRC2012_img_train.tar (about 138 GB)
# ILSVRC2012_img_val.tar (about 6.3 GB)
# make sure ILSVRC2012_img_train.tar & ILSVRC2012_img_val.tar are in your current directory
#
# https://github.com/facebook/fb.resnet.torch/blob/master/INSTALL.md
#
# train/
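The script body is cut off above. For orientation, here is a hedged Python sketch of the train-set step it performs (ILSVRC2012_img_train.tar holds one inner tar per class; paths, defaults, and the function name are my assumptions):

```python
# Extract ILSVRC2012_img_train.tar into train/<class>/ directories.
import tarfile
from pathlib import Path

def extract_train(tar_path="ILSVRC2012_img_train.tar", out_dir="train"):
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    with tarfile.open(tar_path) as outer:
        for member in outer:                      # one inner tar per class, e.g. n01440764.tar
            cls_dir = out / member.name.replace(".tar", "")
            cls_dir.mkdir(exist_ok=True)
            inner_fobj = outer.extractfile(member)
            with tarfile.open(fileobj=inner_fobj) as inner:
                inner.extractall(cls_dir)         # the class's JPEGs
```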
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from argparse import ArgumentParser
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, Dataset
from torch.utils.data.distributed import DistributedSampler
from transformers import BertForMaskedLM
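The script is truncated right after its imports. A minimal sketch of the DDP setup those imports point to (assuming a torchrun launch, which provides LOCAL_RANK; the helper name and batch size are mine):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

def setup(model, dataset, batch_size=32):
    # torchrun launches one process per GPU and sets LOCAL_RANK
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    model = DDP(model.cuda(local_rank), device_ids=[local_rank])
    # DistributedSampler shards the dataset across ranks
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=batch_size, sampler=sampler)
    return model, loader
```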
""" | |
Based on https://raw.githubusercontent.com/jkjung-avt/yolov4_crowdhuman/master/data/gen_txts.py | |
Inputs: | |
* nothing | |
* or folder with CrowdHuman_train01.zip, CrowdHuman_train02.zip, CrowdHuman_train03.zip, CrowdHuman_val.zip, annotation_train.odgt, annotation_val.odgt | |
python crowdhuman_to_yolo.py --dataset_path foo/bar/ | |
Outputs: |
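The .odgt annotation files mentioned above are plain text with one JSON object per line. A minimal reader sketch (the function name is my own):

```python
# Parse a CrowdHuman .odgt file: each non-empty line is a standalone JSON object.
import json

def load_odgt(path):
    with open(path, "r") as f:
        return [json.loads(line) for line in f if line.strip()]
```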
This should give an idea of the relative throughput of the models. I could not discern which would be fastest from the names alone.
This is just a speed test. Obviously, the larger models will perform better on evaluation benchmarks at the cost of speed. Find the models that meet your throughput requirements, then benchmark those for performance on the task you are doing.
Tested on an NVIDIA RTX 3090. The CPU is an AMD Ryzen 9 7950X, though that should not affect the benchmark much.
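For reference, a crude GPU throughput measurement along these lines might look like the following (a sketch, not the script used for these numbers; warmup and iteration counts are arbitrary):

```python
import time
import torch

@torch.no_grad()
def samples_per_second(model, batch, iters=50, warmup=10):
    model = model.eval().cuda()
    batch = batch.cuda()
    for _ in range(warmup):          # warm up kernels and the allocator
        model(batch)
    torch.cuda.synchronize()         # make sure warmup work has finished
    start = time.time()
    for _ in range(iters):
        model(batch)
    torch.cuda.synchronize()         # wait for all queued kernels
    return iters * batch.shape[0] / (time.time() - start)
```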