For a basic discussion and demo, see https://www.postgresql.org/docs/current/explicit-locking.html#id-1.5.12.6.8.2
My own demo follows.
Create two tables with unique indexes:
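The demo code itself isn't shown above; a minimal sketch of such a setup (table and column names are assumptions, not the original demo's):

```sql
-- Two tables, each with a unique index via a UNIQUE constraint
CREATE TABLE accounts (
  id SERIAL PRIMARY KEY,
  name VARCHAR NOT NULL,
  CONSTRAINT accounts_name_unique UNIQUE (name)
);

CREATE TABLE teams (
  id SERIAL PRIMARY KEY,
  name VARCHAR NOT NULL,
  CONSTRAINT teams_name_unique UNIQUE (name)
);
```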
Based on my reading and listening, observability is the ability to answer a wide range of questions about a system's behavior based on previously captured data. Ideally, it lets you see how a system is performing for various use cases and users in real time, and watch how that changes as new code goes into production.
Observability folks like to talk about it as "testing in production", to which they add that everyone does this, like it or not, because only in production can we see the kinds of edge cases that happen with real data, real traffic, real network conditions, etc. Observability's goal is that when we test in production, we can get much more detailed information than "it works" or "it doesn't work", and thus find and fix problems much more easily.
For example, a user emails to say "doing X in the system is slow for me this morning." With poor observability, you might only be able to look at the system's overall latency or the overall CPU load of the servers, neither of which tells you much about that one user's experience.
Quote from Elixir Mix 63 - "063: Designing Elixir Systems With OTP with Bruce Tate and James Gray", starting at 01:03:13
"I've worked at a bunch of companies building web apps for a long time, and I keep seeing this same pattern, and it haunts me. In the web world, all we want is these long interactions with people, and we live in this stateless world. So what we do is, the first part of every request, we do thirty queries to re-establish the state of the world that we just forgot a few seconds ago after the last request. And then we go forward and make one tiny step forward, and then we forget everything again, so that when the next request comes in we can do thirty queries to put it all back and make one more tiny step. And I kept thinking, "there has to be a better way than this, right?"
And if you look at web advancements over the years, most of the things we're doing are
10 requests with no threads
response codes: ["200", "200", "200", "200", "200", "200", "200", "200", "200", "200"]
that took 1.787615 seconds

10 requests with threads
response codes: ["200", "200", "200", "200", "200", "200", "200", "200", "200", "200"]
that took 0.200502 seconds
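The script that produced this output isn't shown; here's a minimal sketch of the pattern. `fake_request` (which sleeps instead of making a real HTTP call) is an assumption, so the timing contrast is reproducible offline; a real version would use something like `Net::HTTP.get_response`.

```ruby
require "benchmark"

# Simulated slow request; stands in for a real HTTP call.
def fake_request
  sleep 0.1
  "200" # a real request would return the response code
end

# Make n requests one after another.
def without_threads(n)
  (1..n).map { fake_request }
end

# Start all n requests at once, then collect each thread's return value.
def with_threads(n)
  (1..n).map { Thread.new { fake_request } }.map(&:value)
end

puts "10 requests with no threads"
serial = Benchmark.realtime { p without_threads(10) }
puts "that took #{serial.round(6)} seconds"

puts "10 requests with threads"
threaded = Benchmark.realtime { p with_threads(10) }
puts "that took #{threaded.round(6)} seconds"
```

Because the work is I/O-bound (sleeping, like waiting on a socket), Ruby's threads overlap it even under the GVL, which is why the threaded version is roughly 10x faster.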
#!/bin/bash
# (bash, not plain sh: `read -p ... -n 1` is a bashism)
set -e

HEROKU_APP_NAME=someapp
DEV_DB_NAME=someapp_dev
LOCAL_BACKUP_FOLDER=tmp
LOCAL_BACKUP_LOCATION="$LOCAL_BACKUP_FOLDER/$HEROKU_APP_NAME-production-dump-$(date +"%Y-%m-%dT%H:%M")"

echo
read -p "Make and download a fresh backup of production? [y/n] " -n 1 -r
defmodule MyApp.Periodically do
  use GenServer

  def start_link do
    GenServer.start_link(__MODULE__, %{})
  end

  def init(state) do
    Process.send_after(self(), :work, 2 * 60 * 60 * 1000) # In 2 hours
    {:ok, state}
  end

  def handle_info(:work, state) do
    # Do the periodic work here, then schedule the next run
    Process.send_after(self(), :work, 2 * 60 * 60 * 1000)
    {:noreply, state}
  end
end
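To keep the worker running (and restarted on crashes), it would be started under the application's supervision tree. A minimal sketch, assuming the `start_link/0` above:

```elixir
# In the application's start callback (child-spec shape varies by
# Elixir version; this explicit map form works with start_link/0):
children = [
  %{id: MyApp.Periodically, start: {MyApp.Periodically, :start_link, []}}
]

Supervisor.start_link(children, strategy: :one_for_one)
```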
#!/bin/bash
# Shell into the Docker container with the given name.
# eg: `docker-bash my_app`
# Note: fails if more than one id is returned.
ID=$(docker-id "$1")
docker exec -it "$ID" bash
-- (This code was run in PostgreSQL 9.6.1)
-- Demonstration of how serializable isolation in PostgreSQL, which detects possible
-- interference between concurrent transactions, can produce false positives.

-- In psql, create the following table:
CREATE TABLE users(
  id SERIAL NOT NULL PRIMARY KEY,
  username VARCHAR NOT NULL
);
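A sketch of how such a demo can proceed, run in two concurrent psql sessions (the exact statements here are an assumption; the behavior depends on version and query plan):

```sql
-- Session A:
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT * FROM users WHERE username = 'alice';

-- Session B:
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT * FROM users WHERE username = 'bob';
INSERT INTO users (username) VALUES ('bob');

-- Session A:
INSERT INTO users (username) VALUES ('alice');
COMMIT;

-- Session B:
COMMIT;
-- Session B's COMMIT may fail with SQLSTATE 40001 (serialization_failure)
-- even though the two transactions touched different rows: a false
-- positive, likely because sequential scans on a tiny table take
-- relation-level predicate locks rather than row-level ones.
```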
defmodule Zalgo do
  def this(string) do
    String.graphemes(string)
    |> Enum.map(fn char -> char <> the_funk() end)
    |> Enum.join()
  end

  def the_funk do
    accent_codes()
    |> Enum.map(fn hex ->
      {code, _} = Integer.parse(hex, 16)
      <<code::utf8>>
    end)
    |> Enum.random()
  end

  # Combining diacritical marks (assumed range for the original accent_codes)
  defp accent_codes, do: Enum.map(0x300..0x36F, &Integer.to_string(&1, 16))
end
(Update - thanks to Chris for looking at this and not saying I'm crazy. :))
"Metaprogramming Elixir" (Chris McCord) talks about how String.Unicode
reads a text file of Unicode characters at compile time and defines a separate function head for each character we might want to upcase. It says this leans on the Erlang VM's pattern matching prowess, and implies (I think) that it's more performant than building a lower -> upper map at compile time and consulting it at runtime.
Similarly, McCord advocates this approach for some example code that looks up I18n keys:
By generating function heads for each translation mapping, we again let the Virtual Machine take over for fast lookup.
Although defining multiple function heads is idiomatic Elixir, this seemed odd to me. I've heard that the Erlang VM is really fast at pattern matching, and hence at finding the right function head for a given argument.
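A tiny sketch of the technique, with a hypothetical three-entry mapping (String.Unicode generates thousands of these heads from a Unicode data file):

```elixir
defmodule TinyUpcase do
  # At compile time, unroll the mapping into one function head per entry.
  for {lower, upper} <- [{"a", "A"}, {"b", "B"}, {"é", "É"}] do
    def upcase_char(unquote(lower)), do: unquote(upper)
  end

  # Fallback head: pass through anything we don't know about.
  def upcase_char(char), do: char
end

# TinyUpcase.upcase_char("é") matches a generated head directly,
# relying on the VM's pattern-match dispatch instead of a runtime Map lookup.
```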