Bitcoin miners want their newly-found blocks to propagate across the network as quickly as possible, because every millisecond of delay increases the chances that another block, found at about the same time, wins the "block race."
from __future__ import division
from numpy.fft import rfft
from numpy import argmax, mean, diff, log, nonzero
from scipy.signal import blackmanharris, correlate
from time import time
import sys
try:
    import soundfile as sf
except ImportError:
    from scikits.audiolab import flacread
# By Jake VanderPlas
# License: BSD-style
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.colors import LinearSegmentedColormap

def discrete_cmap(N, base_cmap=None):
    """Create an N-bin discrete colormap from the specified input map"""
    base = plt.cm.get_cmap(base_cmap)
    color_list = base(np.linspace(0, 1, N))
    cmap_name = base.name + str(N)
    return LinearSegmentedColormap.from_list(cmap_name, color_list, N)
package com.isciurus.oauth_poc;

import java.io.IOException;
import java.text.DateFormat;
import java.util.Date;
import com.google.android.gms.auth.GoogleAuthException;
import com.google.android.gms.auth.GoogleAuthUtil;
import com.google.android.gms.auth.UserRecoverableAuthException;
import android.accounts.AccountManager;
import android.app.Activity;
The original issue was that some applications (e.g., packers) launch the JNI/native code too quickly for a person
to attach an IDA Pro instance to the process. The original solution was to wrap the JNI code in your own
"surrogate" application so you could load it more slowly.
The new approach is to launch the Android/Dalvik activity with the debugger flag:
# adb shell am start -D com.play.goo_w/com.android.netservice.MainActivity
This puts the process into "Waiting for debugger..." mode: the process is started but held at startup, giving you
time to attach IDA Pro to it for the native code.
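Once IDA Pro is attached, the Dalvik side still needs to be released from the wait. A typical follow-up looks like
the commands below (port 8700 and the <pid> placeholder are illustrative, not from the original notes):
# adb jdwp                            (lists PIDs of debuggable processes; the newly started one is yours)
# adb forward tcp:8700 jdwp:<pid>     (forwards a local TCP port to the process's JDWP transport)
# jdb -attach localhost:8700          (attaching a Java debugger dismisses "Waiting for debugger..." and resumes the VM)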
#include <android/log.h> | |
#include <jni.h> | |
#include <binder/Binder.h> | |
#include <binder/Parcel.h> | |
#include <binder/IServiceManager.h> | |
#include <dlfcn.h> | |
#include <stdio.h> | |
#include <stdlib.h> | |
#include <unistd.h> |
I was talking to a coworker recently about general techniques that almost always form the core of any effort to write very fast, down-to-the-metal hot path code on the JVM, and they pointed out that there really isn't a particularly good place to go for this information. It occurred to me that, really, I had more or less picked up all of it by word of mouth and experience, and there just aren't any good reference sources on the topic. So… here's my word of mouth.
This is by no means a comprehensive gist. It's also important to understand that the techniques I outline here are not 100% absolute either. Performance on the JVM is an incredibly complicated subject, and while there are rules that almost always hold true, the "almost" remains very salient. Also, for many or even most applications, there will be other techniques that I'm not mentioning which will have a greater impact. JMH, Java Flight Recorder, and a good profiler are your very best friends! Measure first.
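Since JMH is the first tool to reach for, here is a minimal JMH harness sketch; the package, class name, and the summing workload are made-up placeholders, and only the annotations plus the "return the result" pattern are the point:

package bench;

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class SumBench {
    // Benchmark state lives on the @State object so JMH can manage it per thread
    private final long[] values = new long[1024];

    @Benchmark
    public long sum() {
        long acc = 0;
        for (long v : values) {
            acc += v;
        }
        return acc;  // returning the result keeps the JIT from dead-code-eliminating the loop
    }
}

Run it with jmh-core and the JMH annotation processor on the classpath (the official Maven archetype sets this up); a benchmark whose result is silently discarded is the most common way microbenchmarks lie.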
import torch.nn as nn
from torchvision.models import alexnet

# Load the pretrained ImageNet weights, then adapt the architecture afterwards
# (passing num_classes=10 together with pretrained=True fails when loading the checkpoint).
model = alexnet(pretrained=True)
model.classifier[6] = nn.Linear(4096, 10)  # 10-way classifier head, randomly initialized
model.features[0] = nn.Conv2d(1, 64, kernel_size=(11, 11), stride=(4, 4), padding=(2, 2))  # single-channel (grayscale) input