A seemingly common problem is managing your users' authorized_keys files.
People struggle with ensuring that users have only specific keys in their authorized_keys file, or with finding a method for expiring keys. A centralized key management system can provide all of this functionality with a little scripting.
One often-overlooked piece of functionality in OpenSSH is the AuthorizedKeysCommand
configuration keyword. It lets you specify a command that sshd runs during login to retrieve a user's public keys from another source; the keys it prints are validated just as if they were in a local authorized_keys file.
Here is an example directory structure for a set of users with SSH public keys that can be shared out via a web server:
users/
├── dave
│   └── keys
├── matt
│   └── keys
├── nathen
│   └── keys
└── paul
    └── keys
Each of these keys files might look like:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCt8lGmfZ0fxPz/66JlNg9CmZNaLsJ/TDrYnpBpiWWeuoLxP1tEbDiutApVOkjjQszBQV6CgvG3PeBYYAcJxUTRKhY8dUUbsAvVK3SRVwpr8jhtcohYgRE4V9/xPnwilDAfd9TymCMvM/mBpauQCyL40SImFQMJl5aBAhBiy6zyWx6WeDTzJ4+ZGUTmwFFyaWzzIqIZXWe1QiM98rfzle0mYM8KSKdTuGEf0EmY63MbMl3PQ61ms/qkR3fnKWpGF+EsigS0NgT6nBYoOZm5nFtrB2WM8nixyD5v82Z6yA6+O2SfLxtzJ6OcowtwtitrcZrAZdcNIwOAX1T7G4qcFEFn [email protected]
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCZ0N1dcto3td7j5/7UPCE2XlhDaCOZTlYtCgNifJygM5GNAG97JcChnnoYbdmiEM+dFMs7Jk6fS/WzG0Q0Ypu3rQ9AzzeUEMbhrFB90f28JsfUtgnkYuUF+1dNDGZn1fhYMlNwwyIt5s0KSS18iJNU6ZrSTudk9v1gyBM+Sxz97YMg2RiiGpCajPHzZbj2AwMl52MjT8ZDCGLt2qFo+w4u4BNQdtAA+zs/GiwgFbdGHM2HR1VxmII61LpvyyeuRkRwxN1ak3R7FcPMmYNhC9cvzbnvpmVcXxwXChI/9ceOm6DODCgHl9YeOgngoe5gEtZHnqtOWZWao8cFfd4wcEEN [email protected]
If I were to serve out the users/
directory using a web server such as nginx, I could request the keys for the user matt
like so:
curl -sf https://keyserver.example.org/users/matt/keys
The AuthorizedKeysCommand
expects an executable that takes a single argument: the username whose keys should be retrieved. An example executable might look like:
#!/bin/bash
# Fetch the requested user's public keys from the key server.
curl -sf "https://keyserver.example.org/users/$1/keys"
Name this file something like /usr/local/bin/userkeys.sh
and make it executable: chmod a+x /usr/local/bin/userkeys.sh
Now add the following to your /etc/ssh/sshd_config
file:
AuthorizedKeysCommand /usr/local/bin/userkeys.sh
AuthorizedKeysCommandUser nobody
Most operating systems have a nobody user, but you can replace it with any non-root user that is different from the user running OpenSSH on the server, preferably one not already in use by another daemon.
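Per sshd_config(5), when AuthorizedKeysCommand is given no arguments, the target username is passed as the first argument automatically; you can also make that explicit with the %u token:

```
AuthorizedKeysCommand /usr/local/bin/userkeys.sh %u
AuthorizedKeysCommandUser nobody
```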
Now, when a user logs in, userkeys.sh
will be executed, and if there are keys for that user, our simple script will return them.
All you need to do now is manage that users/
directory via something like git and you are good to go.
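For example, a minimal sketch of putting the key directory under git so that changes are auditable (the repository path and commit identity here are made-up assumptions):

```shell
# A hypothetical keys repository: create it, add a key file, commit.
mkdir -p users/matt && touch users/matt/keys
cd users
git init -q
git add .
git -c user.name="Key Admin" -c user.email="[email protected]" \
    commit -qm "Add user keys"
```

From there, each key change becomes a reviewable commit, and you can deploy the directory to the web server with a post-receive hook or a simple pull on a timer.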
If all your users are on GitHub, you can even have them store their SSH public keys there, and replace the URL with https://github.com/$1.keys
.
If your usernames don't match GitHub's, though, you would have to maintain a lookup table, which can get complicated.
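One way to handle mismatched usernames is a small lookup table inside the script itself; a sketch, where the mappings are entirely hypothetical:

```shell
# Map a local username to a GitHub username before fetching keys.
github_user() {
  case "$1" in
    matt)   echo mattgithub ;;   # hypothetical GitHub login
    nathen) echo nathen-gh ;;
    *)      echo "$1" ;;         # default: assume the names match
  esac
}

# Usage inside userkeys.sh would then look like:
#   curl -sf "https://github.com/$(github_user "$1").keys"
```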
This also works for GitHub Enterprise, which, if your company uses it, could solve the username issues.
This article was really helpful in getting AuthorizedKeysCommand working on AWS EC2 instances. A couple of things to assist others later:
1. 'chmod a+x userkeys.sh' wasn't enough; I had to use 'chmod 755 userkeys.sh'. If you get 126 errors in the sshd log file, this is probably the problem. You can test for it with 'sudo -su nobody' and then trying to run the script manually.
2. Running '/etc/init.d/sshd restart' was required in the CloudFormation template I am using to pick up the changes to the userkeys.sh file.
Hope that helps someone (or me when I return... hello me :-)). Thanks for the article!