
For anyone not aware, you can use macOS's Keychain to store SSH key passphrases and have them unlock at login. This way you get the benefits and convenience of a password manager in the command line for SSH keys.

https://apple.stackexchange.com/questions/48502/how-can-i-pe...
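The linked answer boils down to two stock ssh_config options plus one ssh-add flag (the key path here is just an example):

```
# ~/.ssh/config
Host *
  UseKeychain yes      # store/read the passphrase in the macOS keychain
  AddKeysToAgent yes   # load the key into ssh-agent on first use
  IdentityFile ~/.ssh/id_ed25519
```

Then store the passphrase once with `ssh-add --apple-use-keychain ~/.ssh/id_ed25519` (on older macOS releases the flag was `ssh-add -K`).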

Does it have touch to authorise (doesn't seem to support that), or is it just going to send on all of one's currently-loaded SSH keys whenever one connects with -A (seems to)?

> You can configure your key so that they require Touch ID (or Watch) authentication before they're accessed.

That, to me, would be a key thing to want to have: something that tells me "hey, Terminal just wanted to access your Github key. Is that okay?"

If I'm git pushing, that's fine. If I just connected to a random server... that's not okay. What is that trying to do? Deny.

1. With public-key authentication, you don't send your private key. The server holds your public key and uses it to verify a challenge: you prove you have the corresponding private key by sending back a signature over that challenge. Attempting to log in to a server, whether it holds zero or more of your public keys, poses no risk.

2. You can control which private keys are used for which remote server in .ssh/config. See the ssh_config man page for details.

3. There is a risk with ssh-agent key forwarding: while you are connected to a server with forwarding turned on, a superuser on that server can sudo to your user and use your forwarded agent to log in to a second host. This risk can be minimized by only enabling agent forwarding to hosts you trust and limiting the keys available to each host.
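Points 2 and 3 can be sketched in ~/.ssh/config like this (host names are made up; see ssh_config(5) for exact semantics):

```
# Offer only this key to GitHub, not everything loaded in the agent
Host github.com
  IdentityFile ~/.ssh/id_github
  IdentitiesOnly yes

# Forward the agent only to a host you trust
Host bastion.example.com
  ForwardAgent yes

# Default for everything else: no agent forwarding
Host *
  ForwardAgent no
```

Since ssh_config uses the first obtained value for each option, the specific Host blocks must come before the `Host *` catch-all.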

After researching this for a while, it seems there is no documented, native option to do this. The only option is to unlock all SSH keys all the time, which makes them less secure than the passwords for websites managed by the exact same keychain. Which, in my opinion, is weird.

Do the employees at Apple use a different system altogether? Because the built-in one doesn't seem very secure. Or maybe I am using it wrong, who knows.

In a similar vein, is there an exhaustive manual for macOS? It bugs me that Apple machines cost a small fortune, the OS is full of nifty features, but there is no non-superficial manual shipped with it.

> which makes them less secure than the passwords for websites managed by the exact same keychain

That's NOT true. While giving out less information to untrusted parties is obviously better than more, the private key itself is not transmitted directly to the server. This means that connecting to an attacker's SSH server doesn't give them a copy of your private key, so they can't then connect to your SSH servers.

The scenario here is with -A (agent forwarding). This means that as long as you're connected to a server with that enabled, that server can authenticate as you to a bunch of other things by having your client do it silently.

In regards to the ssh-agent stuff (and a lot of the other CLI tricks), the man pages usually document them. They may lag a bit for new features, but most of them will have a man page. Barring that, they'll usually print a help message.

One command off the top of my head that doesn’t really follow this is the undocumented/internal `airport` command. In that case it has two different help messages depending on how it’s called, and is also tucked away in a framework as well.

You used to be able to have multiple keychain files, so couldn't you create another one that doesn't unlock at login and keep it for more sensitive stuff?

There is a native way in .ssh/config. I just added a reply to a different post.

That only restricts the blast radius to one key.

One is unaware of when, how, and for what purposes that key is used, as forwarding the agent means it's available for use by any user process (the forwarding mechanism is user-owned) or by root (root can see everything).

Touch-to-authorize helps mitigate that.

If one sees the prompt come up when they've just performed a git pull, it's expected and likely non-malicious. Allow.

If it pops up after having run "ls", or "randomly" in the course of a session - what's going on? Deny.

To be clear, that would be a privacy issue (a malicious server could tell what keys you have), but wouldn't allow a malicious server to log in to anything else with your keys. You don't send the private key when you log in.

Note that GP said -A -- this means the agent gets forwarded, and processes on the malicious server can ask the agent to perform authentication operations.

Touch to auth means the agent (or hardware token) asks the user to confirm they are expecting an authentication request to come in.

This allows you to forward your agent to a host and have slightly more protection against malicious processes on the host using your key.

My general understanding is that -A is discouraged in the first place, especially when connecting to an untrusted server. It just seems a bad idea to ever let a private key leave your computer. Instead, you should generate a key on the new box and authorize that one somehow (the lowest-friction way would be to use an SSH CA to sign the key and then have the servers you want to log in to trust only public keys that have been signed by the CA's key).

-A does not forward your keys to the remote server itself; it "just" lets the remote server make requests to your local SSH agent to sign things. But that's still sufficient to allow the remote server to sign in to things as you without your authorization, so it's probably not the best idea to sign in to servers you don't trust with -A.

If you're using -A to log into other machines behind the SSH server (really, the only reason one would use -A), there are now better mechanisms to do that. ProxyJump if the server supports it; port forwarding or ProxyCommand if it doesn't.
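For the jump-host case, a minimal sketch (host names invented; ProxyJump requires OpenSSH 7.3+ on the client):

```
# One-off:
ssh -J bastion.example.com user@inner-host

# Or persistently in ~/.ssh/config:
Host inner-host
  ProxyJump bastion.example.com
```

Unlike -A, this tunnels the second SSH connection end-to-end through the bastion, so the bastion never gets to talk to your agent.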

For anyone interested in a standardised/Linux-compatible version of this, check out PKCS11 tokens. Smartcards can and do implement this spec, and if you use PKCS11, the secret is used from the token to sign an SSH login (for example), without being revealed.

This means the secret itself stays on the card. You can combine this with certificates if needed; the smartcard handles the authentication. Using PKCS11 tokens is supported in recent openssh versions. You can also use them for client certificate authentication in web browsers. For anyone familiar with DoD CACs, this is similar - I believe they use a particular card with a proprietary PKCS11 driver, but you can use opensc with any card running a PKCS11 applet.
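In practice this is a single OpenSSH option; the module path below is where Debian-ish systems install OpenSC and will vary by OS:

```
# One-off:
ssh -I /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so user@host

# Persistently, in ~/.ssh/config:
Host *
  PKCS11Provider /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so

# Or load the token's keys into a running ssh-agent:
ssh-add -s /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so
```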

Some refs:

https://zerowidthjoiner.net/2019/01/12/using-ssh-public-key-...

https://developers.yubico.com/PIV/Guides/SSH_with_PIV_and_PK...

You can also do it with PGP smartcards like the open-source NitroKey [0].

For a more analogous equivalent though, there's the tpm2-pkcs11 project [1] that uses anything conforming to the TCG TPM2 spec [2], which includes a surprisingly wide variety of hardware, including many motherboards. My XPS 13 for example has one.

What I'd really like to see though is something that uses the TPM for host verification rather than client verification. I'd love to be able to ensure that the only way for a machine to give me the host key I expect is for that machine to be hardware I own running verified software (e.g. verified by Secure Boot with a custom key).

[0]: https://www.nitrokey.com/

[1]: https://github.com/tpm2-software/tpm2-pkcs11/blob/master/doc...

[2]: https://trustedcomputinggroup.org/resource/tpm-library-speci...

Indeed, PGP smartcards should work too, as should cards using the open-source ISOApplet.

I also played around with tpm2-pkcs11 last year and it worked nicely, and has support on a lot of devices like the XPS (which works really well on Linux!)

Your idea for using TPM as host verification makes a lot of sense to me - as it stands right now, I'm playing around with this for a side project right now, and a TPM-backed key which is bound to the right PCRs should give you assurance secure boot is enabled, with your own custom keys, and that it loaded your signed Grub, booting your signed kernel.

Got all the secure boot stuff working, now need to figure out what to use the TPM for - options are remote attestation, where server verifies the attestation signature (but this wouldn't be particularly standardised), network level authentication (actually easier - IIRC NetworkManager supports 802.1x authentication using PKCS11, so you can use tpm2-pkcs11 for that), or disk encryption.

Assuming you mean for the remote-end host key, and assuming your server has a TPM available, I reckon that could be quite interesting, but it doesn't look like openssh supports using PKCS11 for HostKey access. Would need to see if the key is used often, or if it's just used to establish connections (since PKCS11 crypto is usually pretty slow, but fine for one-off authentications).

As a closing note for time-travellers from the future etc, worth remembering TPM is far from perfect, and there's quite a few nice attacks if you can "sniff" the serial lines from the motherboard to the TPM itself. And if using the fTPM (firmware TPM), the regular Intel SGX/TXE holes will likely compromise the TPM security properties.

This works ok and is the best option at the moment, but as more systems upgrade to newer versions I think the Fido/U2F support is probably going to take over. It's nice to not have anything key specific, any initialization steps, and so on.
Edit: Maybe OpenSSH does offer resident keys. On the one hand their release notes say they do but on the other hand I was 100% sure somebody who should know insisted they didn't. A trawl of my records cannot find such a communication so perhaps I dreamt it. If resident keys are an option then you need to make sure to buy FIDO2 authenticators and to explicitly tell OpenSSH you want resident keys.
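Generating the two flavours looks like this (needs OpenSSH 8.2+ on both ends and a token reachable via libfido2; file names are just examples):

```
# Non-resident key: a U2F-era token suffices; the "key handle" file lives on disk
ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk

# Resident key: needs a FIDO2 token with on-device storage (and usually a PIN)
ssh-keygen -t ed25519-sk -O resident

# On another machine, resident keys can later be pulled off the token:
ssh-add -K   # note: on Apple's patched ssh-add, -K means "use keychain" instead
```

`ed25519-sk` requires the token firmware to support Ed25519; `ecdsa-sk` works on more tokens.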

The current SSH FIDO behaviour doesn't do resident keys AFAIK. The OpenSSH team has discussed it, but not as I understand it written an implementation.

In a PIV setup, Jim, Sarah and Gary can each have a device (say a YubiKey) with their own private key inside it. Both their workstations and shared desktop/interactive servers can use these keys when they're plugged in. So when Jim is sat at his workstation, Jim's key uses Jim's private key to authenticate Jim over SSH. If Sarah wants to use a machine she's never previously used, but it was set up for Jim and Gary, her key, Sarah's key, just works to authenticate as Sarah over SSH. Simple and easy to think about.

With FIDO non-resident keys, a similar team Beth, Larry and Zoe have personal FIDO authenticators, let's say Titan keys; they each set up their personal laptop to require their personal FIDO authenticator. Both elements are needed to authenticate: Beth's laptop plus Beth's Titan key is enough to authenticate via SSH. But if Beth is at a workstation she's never used before, she can't use her Titan key to authenticate; it has no idea how to help on its own. If Beth sets that workstation up (a multi-step process), the setup isn't re-usable: Zoe can't use her Titan key without also going through that enrolment process.

The reason why is a mixture of differences between typical Web authentication journeys and SSH, and the deliberate (privacy preserving) feature paucity of FIDO Security Keys when used without resident keys.

The FIDO authenticator genuinely has no idea what your private key is (this is unlike for resident keys like Apple's recent announcement). It first needs to be presented with an ID, a large random-looking byte string. It has in fact encrypted your private key using AEAD mode with a secret symmetric key that's baked inside it and then used that as the ID. So when the ID comes back it decrypts it, the AEAD checks out, it now has a private key and can use that to authenticate you before it forgets it again.
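
That wrap/unwrap dance can be sketched in a few lines of Python. This is a toy (XOR-keystream "encryption" plus an HMAC tag standing in for a real AEAD), purely to illustrate why the credential ID is all the authenticator needs:

```python
import hashlib
import hmac
import secrets

DEVICE_SECRET = secrets.token_bytes(32)  # symmetric key baked into the authenticator


def wrap(private_key):
    """Return a 'credential ID' encoding the private key (toy AEAD, keys <= 32 bytes)."""
    nonce = secrets.token_bytes(16)
    stream = hashlib.sha256(DEVICE_SECRET + nonce).digest()[:len(private_key)]
    ct = bytes(a ^ b for a, b in zip(private_key, stream))
    tag = hmac.new(DEVICE_SECRET, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag  # random-looking blob; safe to store on the client


def unwrap(credential_id):
    """Recover the private key, or None if the ID wasn't produced by this device."""
    nonce, ct, tag = credential_id[:16], credential_id[16:-32], credential_id[-32:]
    expected = hmac.new(DEVICE_SECRET, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None  # tampered with, or someone else's credential
    stream = hashlib.sha256(DEVICE_SECRET + nonce).digest()[:len(ct)]
    return bytes(a ^ b for a, b in zip(ct, stream))


key = secrets.token_bytes(32)
handle = wrap(key)
assert unwrap(handle) == key                  # the device recovers its own keys
assert unwrap(b"\x00" * len(handle)) is None  # a forged ID is rejected
```

The real scheme uses a proper AEAD and binds the relying-party ID into the handle too, but the shape is the same: the "storage" is the ID itself.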

On the Web in this case (again not the resident case like Apple's cool Safari feature) the ID is being delivered by a remote server to your web browser when you enter a username and the browser passes it to the FIDO authenticator. But in SSH authentication doesn't work that way.

So, OpenSSH stores the ID locally on a computer. It isn't secret, so it's not a huge deal if someone somehow steals it. But without that ID the authenticator can't do its job.

Maybe you could arrange to synchronise the IDs across machines in an organisation. But AFAIK nothing exists to sort that out today. So without either resident credentials (which need a more expensive FIDO2 authenticator) or some further synchronisation framework PIV is easier today if you want employees to authenticate from machines that aren't "their" personal machine.

The 8.3 release notes cover resident key support with compatible hardware. I have older hardware, but I will still likely switch to my U2F key being primary as soon as most cloud services support them. Making sure the right library can load with ssh-agent on NixOS (and having a GPG applet/key for devices that don't work with OpenSC) is less convenient for me than just having separate key identities at work and at home.

Does this mean that the Secure Enclave is accessible by the user? If so, it prompts so many questions. How much disk space is available on the Enclave, for example?

You don't actually need to store Secure Enclave protected data in the enclave. You can "wrap" it with the enclave's key and store it on your own disk.
Unwrapping it puts the private key into memory, from which it can be extracted (it's hard, of course, but looking at the state of Intel security, everything is possible). The Secure Enclave is supposed to sign with a non-extractable private key without ever putting it into RAM.
This is not what it does though. It says the private key can't be exported, and this wouldn't be the case if it was just a regular SSH keyfile encrypted using the Enclave's key.
You can ask the Secure Enclave or a smart card to perform operations using a key pair which is generated on and never leaves the device. In the case of SSH, you can use that directly as long as your SSH client and server support that algorithm: the client advertises the key fingerprint and simply passes the server's challenge through to the token to be signed. Apple has had native support built in for using PIV/CAC tokens for years.

For storing arbitrarily sized things, you'd typically use that key pair to wrap (encrypt) an AES key that encrypts the actual file, so your device only needs to store the comparatively small keys.