RE: Why does steem use this canonical thing instead of strict secp256k1?

in #beyondbitcoin · 8 years ago

I wouldn't call it tampering but simply being more strict. That is everything there is to see. Graphene simply doesn't accept every signature that is valid on other chains; it requires additional properties to be fulfilled. Namely, r and s each need to be of length 32 bytes. There is no way around it, and the only practical way to achieve this property for an arbitrary message is to pick a deterministic k and, if the resulting signature is not canonical, add a random part to k and try again with another signature.
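
Roughly what that loop looks like, as a minimal Python sketch. `sign_compact` is a hypothetical helper (not a real library call), and the canonicality test only approximates the actual Graphene rule:

```python
# Minimal sketch (not the actual Graphene code) of the canonical-
# signature loop described above. sign_compact(digest, nonce) is a
# hypothetical helper returning (r, s) as integers, with k derived
# deterministically from the digest plus `nonce` as extra entropy.

def is_canonical(r: bytes, s: bytes) -> bool:
    # Roughly the Graphene rule: the leading byte must not set the
    # sign bit, and a leading zero byte is only allowed as necessary
    # padding (i.e. when the next byte has its high bit set).
    def ok(c: bytes) -> bool:
        return not (c[0] & 0x80) and not (c[0] == 0 and not (c[1] & 0x80))
    return ok(r) and ok(s)

def sign_canonical(digest: bytes) -> tuple[bytes, bytes]:
    nonce = 0
    while True:
        r, s = sign_compact(digest, nonce)  # hypothetical signer
        rb, sb = r.to_bytes(32, "big"), s.to_bytes(32, "big")
        if is_canonical(rb, sb):
            return rb, sb
        nonce += 1  # perturb k and sign again
```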

Take a look at how canonicality is done in Ethereum. You will be surprised.

@xeroc,

Looking.

This? And the few other related bits across their code?

https://github.com/ethereum/go-ethereum/blob/6d15d00ac4b5f162737a27dfa6f8e9976383776e/accounts/abi/event.go

Yes, it looks a good deal simpler.

@xeroc

Thinking about how to approach this and check whether I'm FUD-ing or not: I'm installing Parity, the leading Ethereum client, and I'll compare it against the same steps on Graphene.

If your serialization expects a 65-byte signature and you only provide 63 bytes, then it needs to either use a length-prefixed (variable-length) encoding or do zero padding. The former adds extra data to the serialization, while the latter leaves you with meaningless bytes in the serialization that can be used to hide information (e.g. leakage of private key data). This is particularly a concern for hardware wallets, where you don't see what code is actually executed.
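
A toy Python sketch of the two options, purely to illustrate the trade-off; neither function is any library's real API:

```python
# Option 1 costs extra bytes on the wire; option 2 keeps a fixed size
# but leaves meaningless padding bytes in the serialization.

def serialize_length_prefixed(component: bytes) -> bytes:
    # Prepend an explicit length byte (DER-style variable length).
    return bytes([len(component)]) + component

def serialize_zero_padded(component: bytes, size: int = 32) -> bytes:
    # Left-pad with zeros to a fixed size. The pad bytes carry no
    # meaning, so unless a verifier insists they are all zero, a
    # compromised signer could hide data in them.
    return component.rjust(size, b"\x00")

short = (0x075BCD15).to_bytes(4, "big")        # a 4-byte component
print(serialize_length_prefixed(short).hex())  # '04075bcd15'
print(serialize_zero_padded(short).hex())      # '00' * 28 + '075bcd15'
```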

That said, restricting signatures to 65 bytes is the easiest option with respect to security, scalability, and network efficiency.

You are actually right: the canonicality check for Ethereum signatures does not require a loop, but Ethereum needs to deal with signatures that are (possibly) shorter than 64 bytes. How they deal with that is their decision to make. The Graphene architects made the decision not to allow shorter signatures, and there are good reasons to do so: backend performance, avoiding information leakage, consistency, etc.
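
For contrast, a minimal sketch of the loop-free, Ethereum-style normalization (the EIP-2 "low s" rule); the function and variable names here are my own:

```python
# Instead of retrying with a new k, flip s into the lower half of the
# curve order in one step. N is the secp256k1 group order; (r, s, v)
# is a recoverable signature with recovery id v.

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def normalize_low_s(r: int, s: int, v: int) -> tuple[int, int, int]:
    if s > N // 2:
        s = N - s   # the mirrored s verifies for the same digest and key
        v ^= 1      # flip the recovery id to match the negated s
    return r, s, v
```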

How is it possible to optimize for back-end performance without potentially leaking information?

Also, doesn't a variable signature length improve the situation overall and reflect a more standard implementation?