[dev][keystone] Key loading performance during token validation
Thanks for your test. I think the performance issue mainly shows up in a
case like this: when issuing/validating tokens concurrently across multiple
keystone servers, the disk I/O may become a bottleneck. So I'm not sure your
test environment would reproduce it.
We hit this issue when using PKI/PKIz tokens. PKI/PKIz tokens use "openssl"
to issue/validate tokens, and openssl loads the keys from disk on every
request as well. So I think it's similar to fernet/JWS: they all read keys
from disk every time.
Since PKI/PKIz has already been removed upstream, I can't give you any
useful test data for fernet or JWS. Regardless of my concern, we can of
course merge the JWT feature first. Then I guess it will be easier to get
more feedback in the future.
Lance Bragstad <lbragstad at gmail.com> wrote on Thu, Feb 14, 2019, 5:02 PM:
> Both fernet tokens and JSON Web Tokens (still under review) require
> keys to either encrypt or sign the token payload. These keys are kept on
> disk by default and are treated like configuration. During the validation
> process, they're loaded from disk, stored as a list, and iterated over
> until the correct key decrypts the ciphertext/validates the signature or
> the iterable is exhausted.
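To make that iteration concrete, here is a minimal stand-in sketch using plain HMAC signatures rather than keystone's actual Fernet/JWS code; all names are illustrative:

```python
import hashlib
import hmac

def sign(key: bytes, payload: bytes) -> bytes:
    # Stand-in for Fernet encryption / JWS signing.
    return hmac.new(key, payload, hashlib.sha256).digest()

def validate(keys, payload: bytes, signature: bytes) -> bool:
    # Mirrors the loop described above: try every key in the
    # repository until one validates or the list is exhausted.
    for key in keys:
        if hmac.compare_digest(sign(key, payload), signature):
            return True
    return False

# e.g. 100 keys to simulate 100 API servers; worst case is the last key.
keys = [b"key-%d" % i for i in range(100)]
token_sig = sign(keys[-1], b"token-payload")
```

The worst-case cost of validation grows linearly with the number of keys in the repository, independent of how the keys were loaded.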
> Last week XiYuan raised some concerns about loading all public keys from
> disk every time we validate a token. To clarify, this would be
> applicable to both fernet keys and asymmetric key pairs used for JWT. I
> spent a couple days late last week trying to recreate the performance
> issue. There were two obvious approaches.
> 1. Watch for writable changes to key repositories on disk
> I used a third-party library for this called watchdog, but the
> inherent downside is that it is specific to file-based key repositories.
> For example, this might not work with a proper key manager, which has been
> proposed in the past.
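As a rough illustration of the same idea using only the standard library, one could cache the loaded keys and reload them only when the repository directory's mtime changes. Like the watchdog approach, this sketch is specific to file-based repositories; the class and method names are illustrative, not keystone code:

```python
import os

class KeyRepositoryCache:
    """Reload keys only when the on-disk repository changes.

    Caveat: a directory's mtime changes when entries are added,
    removed, or renamed, but not necessarily when an existing file
    is rewritten in place, so key rotation must create new files.
    """

    def __init__(self, key_repository: str):
        self._repo = key_repository
        self._mtime = None  # forces a load on first use
        self._keys = []

    def load_keys(self):
        mtime = os.stat(self._repo).st_mtime_ns
        if mtime != self._mtime:  # repository changed (or first load)
            self._mtime = mtime
            self._keys = []
            for name in sorted(os.listdir(self._repo)):
                with open(os.path.join(self._repo, name), "rb") as f:
                    self._keys.append(f.read())
        return self._keys
```

Repeated validations then pay only an os.stat() call instead of re-reading every key file, but the key-manager limitation is exactly the same as with watchdog.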
> 2. Encode a hash of the key used to create the token inside the token
> This would give any keystone server validating the token the ability to
> preemptively load the correct key for validation instead of iterating over
> a list of all possible keys. There was some hesitation around this approach
> because the hash would be visible to anyone with access to the token (it
> would sit outside the ciphertext in the case of fernet). While hashes are
> one-way, it would allow someone to "collect" tokens they know were issued
> with the same symmetric key or signed with the same asymmetric private
> key.
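A minimal sketch of that approach, using a SHA-256 hash of the key as the hint and HMAC as a stand-in for the real token crypto (all names are hypothetical):

```python
import hashlib
import hmac

def key_id(key: bytes) -> str:
    # Hash of the key; one-way, but visible to anyone holding the
    # token, which lets an observer group tokens by issuing key.
    return hashlib.sha256(key).hexdigest()[:16]

class PreemptiveValidator:
    def __init__(self, keys):
        # Index keys by their hash so validation becomes a dict
        # lookup instead of iterating the whole repository.
        self._by_id = {key_id(k): k for k in keys}

    def issue(self, key: bytes, payload: bytes):
        sig = hmac.new(key, payload, hashlib.sha256).digest()
        # The key hash rides alongside the payload, outside any
        # ciphertext, so validators can read it before decrypting.
        return (key_id(key), payload, sig)

    def validate(self, token) -> bool:
        kid, payload, sig = token
        key = self._by_id.get(kid)
        if key is None:
            return False
        expected = hmac.new(key, payload, hashlib.sha256).digest()
        return hmac.compare_digest(expected, sig)
```

This is essentially what the `kid` header does in standard JWS/JWT, where it identifies the signing key without revealing it.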
> Security concerns aside, I did attempt to write up the second approach.
> I was able to get preemptive loading to work, but I didn't notice a
> significant performance impact. For clarification, I was using a single
> keystone server with one private key for signing and 100 public keys to
> simulate a deployment of 100 API servers that all need to validate tokens
> issued by each other. At a certain point, I wondered if the loading of keys
> was really the issue, or if it was because we were iterating over an entire
> set every time we validate a token. It's also important to note that I had
> to turn off caching to fully test this. Keystone enables caching by
> default, which short-circuits the entire key loading/token provider code
> path after the token is issued.
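The short-circuit described above can be illustrated with a toy memoization sketch (keystone's real caching goes through oslo.cache/dogpile; everything below is illustrative):

```python
import functools

calls = {"load_keys": 0}

def load_keys():
    # Stands in for the expensive disk read during validation.
    calls["load_keys"] += 1
    return [b"key-0", b"key-1"]

@functools.lru_cache(maxsize=1024)
def validate_token(token_id: str) -> bool:
    # With caching enabled, repeated validations of the same token
    # never reach the key-loading code path below, which is why
    # caching had to be disabled to benchmark key loading at all.
    keys = load_keys()
    return any(keys)  # placeholder for the real validation check
```

After the first validation of a given token, every subsequent validation is a cache hit and the key-loading cost disappears from the measurement.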
> My question is whether anyone has additional feedback on how to recreate
> performance issues specifically for loading files from disk, since I wasn't
> particularly successful in noticing a difference between repetitive key
> loading (or iterating all keys on token validation) and what we already
> have.
>  https://review.openstack.org/#/c/614549/
> https://review.openstack.org/#/c/614549/13/keystone/token/providers/jws/core.py,unified at 103
>  https://pythonhosted.org/watchdog/
>  https://review.openstack.org/#/c/363065/
>  https://review.openstack.org/#/c/636151/