Doesn't the choice of encryption algorithm add entropy by itself?
Let's say someone has my encrypted data and he wants to decrypt it. People always talk about how the length of the key (e.g. 256 bits) determines the entropy of the encryption, which totally makes sense. If the attacker tries all 2^256 possibilities, his great-great-…-grand-children will have my data.
But what if, all those years, he was using the wrong algorithm? Isn't the choice of the algorithm itself adding entropy as well, or am I wrong to assume this? So instead of naming my file super_secret.aes256
I would just name it super_secret.rsa256
or maybe even not give it a file ending at all?
encryption entropy obscurity
69
Let's assume there are 8 ciphers you could choose from in 2019 that have 256-bit keys, and that there is no way for an attacker to look at the ciphertext and tell which algorithm was used; then your secret algorithm choice adds log2(8) = 3 bits of entropy. That's negligible noise compared to the 256 bits of the key.
– Mike Ounsworth
Jan 30 at 14:18
5
@MikeOunsworth Also worth considering that some ciphers may take several times longer to apply, but really most encryption formats explicitly say what cipher is being used anyway.
– AndrolGenhald
Jan 30 at 14:21
Couldn't you use a side-channel attack to discover whether it was a symmetric-key or public-key algorithm?
– EJoshuaS
Jan 30 at 21:01
1
There is one case where this is helpful: the encryption scheme is created by you and you never revealed the design, though that is hard to achieve. Then it is almost impossible to break.
– kelalaka
Jan 30 at 21:34
15
@kelalaka if you or I (or anybody with less than several decades in the field and ample peer review) invented our own encryption scheme, it is practically assured that it would be so weak it would get cracked in dozens of ways, all way faster than brute-forcing.
– Matija Nalis
Jan 31 at 0:30
edited Jan 30 at 23:14
Michael
asked Jan 30 at 11:22
Robert
7 Answers
If you’re designing a cryptosystem, the answer is No. Kerckhoffs's principle states “A cryptosystem should be secure even if everything about the system, except the key, is public knowledge.” Restated as Shannon's maxim, that means “one ought to design systems under the assumption that the enemy will immediately gain full familiarity with them.”
Making the assumption that the attacker won’t learn your algorithm is security through obscurity, an approach to security that is considered inadequate.
Relying on the attacker not knowing the algorithm won't add any work on their end because, per Kerckhoffs's principle, they either know it or can reasonably be expected to find out. If it adds no uncertainty, it adds no entropy. And their capabilities are not something you can quantify.
In the case of a lost cryptosystem, like you describe, there is usually enough historic or statistical information to determine the nature of the algorithm (if not the key itself.) But you can’t design a system under the assumption that it will be lost as soon as it’s used. That’s OpSec, not cryptography.
EDIT Comments have mentioned using algorithm selection as a part of the key. The problem with this approach is that the algorithm selection must necessarily be determined prior to the decryption of the data. This is exactly how protocols such as TLS work today.
If you’re truly looking to mix algorithms together and use a factor of the key to determine things like S-box selection, etc., you’re effectively creating a singular new algorithm (adopting all the well-known risks that rolling your own algorithm entails.) And if you’ve created a new algorithm, then all of the bits of the key are part of that entropy computation. But if you can point out specific bits that determine algorithm instead of key material, you still have to treat them as protocol bits and exclude them.
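The "selector must be determined prior to decryption" point can be sketched with a toy container format (all names and algorithm IDs here are hypothetical, purely for illustration):

```python
# Toy container format: one selector byte names the algorithm, then the
# ciphertext follows. The selector has to be parseable *before* decryption,
# as with TLS cipher negotiation, so it is protocol data, not key material.
ALGORITHMS = {0x01: "AES-256-GCM", 0x02: "ChaCha20-Poly1305"}  # hypothetical registry

def wrap(alg_id: int, ciphertext: bytes) -> bytes:
    return bytes([alg_id]) + ciphertext

def unwrap(blob: bytes):
    # The receiver reads the selector in the clear; an attacker can too.
    return ALGORITHMS[blob[0]], blob[1:]

alg, ct = unwrap(wrap(0x01, b"\xde\xad\xbe\xef"))
print(alg)  # the algorithm name is recoverable without any key
```

Since anyone holding the blob can read the selector, those bits contribute nothing to the work factor.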
Regarding secrecy of the algorithms, your protocol may be secret today but if one of your agents is discovered and his system is copied, even if no keys are compromised the old messages are no longer using secret algorithms. Any “entropy” you may have ascribed to them is lost, and you may not even know it.
12
It's not security through obscurity, because it's not "reliance on the secrecy of the design or implementation as the main method of providing security". You could use the same reasoning to argue that assuming the attacker won't find your private key is security through obscurity. It doesn't make much sense to worry about the entropy added by the choice of algorithm, because you can get enough entropy from the key anyway.
– FINDarkside
Jan 30 at 14:32
9
@FINDarkside "You could use the same reasoning..." The difference being that – given proper key management – an attacker should never be able to work out your private key. However, you've got to assume that they have the opportunity to discover more and more of any implementation details, for example through reverse-engineering the code.
– TripeHound
Jan 30 at 16:19
3
@FINDarkside, the querent is asking if the secrecy of the implementation adds entropy. In other words, he is specifically asking if security through obscurity should be factored into the equation. The answer is no, partly because there is no way to quantify the attacker's capabilities in this regard. You are correct that all the entropy must originate from the key.
– John Deters
Jan 30 at 17:44
4
There is energy required to obtain the information about the algorithm, be it human intelligence, reverse engineering, or social engineering. You can factor an estimate of this energy into the average energy required to circumvent your algorithm once the attacker has that knowledge, and perhaps in certain contexts this can be used to make a security judgement in an organization. From an information-science standpoint, as you point out, this is out of scope for cryptography and not quantifiable.
– crasic
Jan 30 at 19:09
3
@FINDarkside, I think I see where you're coming from. You are considering a "lost message" scenario, one that has no context or information about it. You are also considering secrecy, perhaps even to avoid detection. Those are attributes of OpSec, and absolutely can help increase overall security. But as they aren't mathematically determined, they can't be factored into an entropy calculation. Think of it this way: your overall security includes your algorithm's entropy as a component, but overall security isn't measured by entropy; it's measured by risk.
– John Deters
Jan 31 at 20:36
In practical terms, no, as John's answer neatly explains.
Hypothetically, if you had enough secure encryption methods to choose from, you could potentially select one method at random and use it to encrypt the data using – for example – a 256-bit key. The choice of algorithm used would need to be "added" to the key and become part of the "not to be revealed secret" (taking the combined entropy to 259 bits if there were eight encryption algorithms to choose between).
The problems with doing this include:
Only a small number of bits are added: eight algorithms only adds three bits of entropy. To add eight bits (for a total of 264 bits with a 256-bit key) would require 256 different encryption algorithms. Finding enough secure algorithms to make a practical difference is almost certainly much harder than simply extending the key-length of a single, known-to-the-attacker, algorithm.
You have to "extend" the key with the choice of algorithm: this means passing the choice to the "user" to be "remembered" alongside the normal key. This greatly complicates the process of key-management. Storing the choice in the encrypted data is a non-starter, since an attacker with "total knowledge" would be able to find the information and know which algorithm to use.
If any of the algorithms chosen leave some kind of "fingerprint" that allows an attacker to identify the algorithm used (or at least reduce the range of possible algorithms), then this will (partially) nullify the extra bits of entropy.
All-in-all, it is much easier to extend the length of the key used and not worry that an attacker knows the encryption method.
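The arithmetic behind the first point can be checked directly — the bit counts for various (wildly optimistic) numbers of equally strong candidate algorithms:

```python
import math

KEY_BITS = 256

# Entropy added by a secret choice among n equally strong, fingerprint-free
# algorithms is log2(n) bits -- compare this to just lengthening the key.
for n_algorithms in (8, 256, 65536):
    extra = math.log2(n_algorithms)
    print(f"{n_algorithms:6d} algorithms -> +{extra:.0f} bits, total {KEY_BITS + extra:.0f}")

# A single extra key byte adds 8 bits with no new algorithms needed at all.
print(f"one extra key byte   -> +8 bits, total {KEY_BITS + 8}")
```

Even 65,536 distinct secure ciphers would only match the gain from two extra key bytes.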
3
To make things worse, it's necessary to come up with the bits used in the algorithm selection somehow. If one has an algorithm which can use a 256-bit key and one genuinely has 256 bits of entropy, the probability of compromise from insufficient entropy will be essentially zero. If one doesn't have 256 bits of real entropy, taking entropy that could have been used for key generation and using it for algorithm selection won't improve anything.
– supercat
Jan 30 at 19:33
I suppose if you want to be really pedantic about the question, you could allow users to upload their own algorithms rather than choose from a predefined list. But the downsides associated with arbitrary code execution, the fact that most users would not choose a secure algorithm, and the increased difficulty in the user managing this obviously outweigh any benefits.
– jpmc26
Jan 31 at 17:35
One interpretation of Kerckhoffs's principle is that the information that must be kept secret comprises the key, which is what we're saying here. So a choice of algorithm from a fixed set of 8 requires an extra 3 bits in your key store. A choice of algorithm that you've coded adds a much bigger amount to the key store (at least the Shannon entropy of the code). It's not very good value-for-bits.
– Toby Speight
Feb 1 at 11:18
The answers from @John Deters and @TripeHound explain things very well, but I wanted to give an example that puts Kerckhoffs's principle in context. It's natural to approach these questions from the perspective of an outside attacker, but that's not the only relevant threat model. In fact, roughly half of all data breaches start from inside agents (employees, contractors, etc.), with a mix of both accidental and intentional leaks.
More realistic Threat Vectors
Having a hidden encryption algorithm may help against an outside attacker if they can't easily deduce what system you used. However, it provides absolutely no additional protection against an inside attacker who has access to your code. As an extreme example, if your system keeps critical Personally Identifiable Information (PII) in your database, but the production database credentials, encryption algorithms, and encryption keys are all stored directly in your code repository, then you have effectively given everyone with access to your code repository access to all of your customers' PII.
Of course you don't want to do that, so you keep production systems segregated from everyone except admins, you keep encryption keys in a separate key-management system accessible (as much as possible) only to the application, etc. Your developers know what encryption algorithms are used (because they can see them in the code repository), but they don't have access to the production database, and even if they did get read access to it, they wouldn't have the keys to decrypt the data that is there.
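One common way to realize that segregation is to have the application pull its key from the runtime environment (or a KMS client) instead of the repository. A minimal sketch, assuming an environment variable named `APP_DATA_KEY` (the name is hypothetical):

```python
import os

# Sketch: the key never appears in source control; it is injected into the
# production environment (or fetched from a KMS) when the app starts.
def load_data_key() -> bytes:
    hex_key = os.environ.get("APP_DATA_KEY")
    if hex_key is None:
        raise RuntimeError("data key not provisioned in this environment")
    key = bytes.fromhex(hex_key)
    if len(key) != 32:  # expect a 256-bit key
        raise RuntimeError("malformed data key")
    return key
```

Developers can read this code freely; without access to the production environment, it reveals nothing secret.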
Applying Kerckhoffs's Principle
That's the whole point of Kerckhoffs's principle: the only thing you have to keep secret is the actual secret (your encryption key). Everything else can be known by everyone and you're still secure. Which is good, because keeping just one secret is hard enough. Trying to devise a system that hides not just the keys but also the encryption algorithms and other details from as many people as possible is quite a bit harder and more prone to failure.
In short, people are bad at keeping secrets. As a result, designing your system so that you have fewer secrets to keep actually makes you more secure, even if it seems counterintuitive. After all, what you suggest makes sense on some level: why should we only encrypt our data? Let's hide the encryption method too and be extra secure! In practice, though, hiding more things gives you more room to make mistakes and a false sense of security. It is much better to use an effective encryption method that makes keeping secrets as simple as possible: hide the key and the message is secure.
Abstractly, if there were 2^n encryption schemes that were exactly equally hard to break, and had the same space of possible keys, then sure, you could define a new encryption scheme as "randomly pick one of these 2^n schemes" and effectively consider those n bits to be added to the key.
But in practice, even if this were possible, that's a lot of unnecessary complexity when you could instead just pick a single algorithm and make the key a bit longer.
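A toy sketch of that equivalence (an XOR "cipher" keyed via SHA-256 — illustrative only, not real cryptography): hiding which of 2^8 variants was used is literally the same as one scheme with a key one byte longer.

```python
import hashlib

# 2^8 toy scheme "variants", each deriving its keystream from the variant
# number plus the key. If all variants were equally strong and left no
# fingerprint, hiding `variant` would act like 8 extra key bits...
def toy_encrypt(variant: int, key: bytes, msg: bytes) -> bytes:
    stream = hashlib.sha256(bytes([variant]) + key).digest()
    return bytes(m ^ s for m, s in zip(msg, stream))

# ...but that is identical to one scheme whose key is one byte longer:
def toy_encrypt_long_key(long_key: bytes, msg: bytes) -> bytes:
    stream = hashlib.sha256(long_key).digest()
    return bytes(m ^ s for m, s in zip(msg, stream))

key, msg = b"k" * 16, b"attack at dawn"
assert toy_encrypt(42, key, msg) == toy_encrypt_long_key(bytes([42]) + key, msg)
```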
If you just encrypt something with one algorithm and pretend another one was used, Kerckhoffs's principle applies, as pointed out in other answers, so that's kinda useless, at least against an attacker with knowledge of your implementation. It will still "work" against an attacker who e.g. steals the encrypted file from your OneDrive or whatever cloud store without knowing anything about it.
If you require the choice of encryption algorithm to be included in the decryption process (i.e. your tool chooses one out of several algorithms, depending on your input), then you effectively add bits to your key length. Kerckhoffs's principle does not apply here. It is, however, hard to add a significant number of bits this way — you would need many algorithms to choose from — and it doesn't make any sense anyway (see the last paragraph).
In either case, whatever you want to do is pretty much pointless. The assumption that someone's great-great-grandchildren may have your data if they invest a couple of billions in equipment and electricity bills is arguably true for keys in the 90-100 bit range. Realistically, though, nobody (not even the NSA) would do that; the cost-benefit ratio just isn't there. Social engineering, or torture followed by murder, is a much cheaper, faster, and more practical approach.
For anything noticeably larger than 110 or so bits, a brute-force attack isn't realistic even if you neglect the cost-benefit ratio. You should be more worried about backdoors built into AES, which are more likely than you seeing one single 128-bit key broken by brute force during your lifetime.
Now, the mere idea of cracking a 256-bit key via brute force is outright ridiculous, and the idea of adding bits on top of that is nonsensical; it makes exactly zero difference. Impossible doesn't get any better than "impossible".
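Some back-of-envelope numbers illustrate the scale. The rate below is an assumption, not a real figure — 10^18 keys per second is far beyond any publicly known hardware:

```python
# Exhaustive search time at an assumed (absurdly generous) guessing rate.
RATE = 10**18              # keys per second -- an assumption for illustration
SECONDS_PER_YEAR = 31_557_600

for bits in (80, 112, 128, 256):
    years = 2**bits / RATE / SECONDS_PER_YEAR
    print(f"{bits:3d}-bit keyspace: ~{years:.2e} years to exhaust")
```

Even under that assumption, 128 bits already takes on the order of 10^13 years; the 256-bit line is beyond astronomical.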
1
I don't really see what this adds to any of the other answers that were already given. Could you elaborate?
– Tom K.
Jan 31 at 14:31
@TomK: If nothing else, then the fact that even thinking about brute-forcing a 256 bit key or the hypothetical consequences of that is absurd (or adding to a 256 bit key). Which nobody else seems to consider at all. Thinking about the possibility of brute forcing a 128 bit key is absurd enough already, though it may be physically possible.
– Damon
Jan 31 at 16:43
I think the OP's question demonstrates insight, and the answer is: at least in theory, yes, it does. There is something here. I think that's the first point that should be made.
The responses given are mostly of the view: in practice it doesn't work like that.
These aren't wrong but I think they miss the validity/interest of OP's point.
The way I reason this:
From a theoretical black box point of view the choice between 2 encryption systems is analogous to the choice of the first bit of the key. In fact they really are the same thing (if you add the bit back). In a black box, there really is nothing special about the key. They are just a good way of enumerating your options of which encryption transformations you want to use.
To see this:
Say I make a new variant of AES-128; let's call it JES_0_128. The way this works is: I add a binary encoding of 0 (in this case, 128 zeros) to the front of the supplied key and use the result in (standard) AES-256. Then I make another one called JES_1_128 with an encoding of 1, and so on, all the way up to JES_(whatever 2^128 is in base 10)_128. All of these are perfectly valid 128-bit-key encryption algorithms. But if you don't know which one was used... it's a 256-bit-key encryption algorithm. AES-256, to be precise. Which is indeed a lot more entropy.
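The JES construction can be written down directly. This sketch does only the key bookkeeping (no actual AES call, to keep it self-contained):

```python
# JES_i_128 key schedule: prepend a 128-bit big-endian encoding of the
# variant number i to a 128-bit key, yielding a valid 256-bit AES-256 key.
# Knowing the variant pins down the top 128 bits; not knowing it restores
# them as key entropy.
def jes_key(variant: int, key128: bytes) -> bytes:
    assert len(key128) == 16
    return variant.to_bytes(16, "big") + key128

k = bytes(range(16))
assert jes_key(0, k) == b"\x00" * 16 + k   # JES_0_128 pads with 128 zeros
assert len(jes_key(12345, k)) == 32        # always a valid AES-256 key length
```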
The difference the other answers point out is that, in practice, a key is a really good way of picking which of the 2^256 AES-256 encryption transformations to use.
It's flexible, well understood and leaves the generating and trusting of the mutual secret to the users. Why use anything else?
On the other hand, picking one of the handful of 256-bit families of encryption algorithms and hard-coding it is not a very good way, even relative to a very small increase in key size. You may as well just tell everyone. From a practical, software-writing point of view, it is not at all safe to rely on this being kept from any attacker of interest. There are a host of reasons why, not least that if an attacker had a copy of your software, which value you picked would be easy to test. But those are 'just' practical considerations...
I think this is a good way to look at it:
You have a secret, which may be a 256-bit key, or a password from which you derive that key, or either of those plus other information like which encryption algorithm you used.
The attacker wants to guess your secret. They do this by trying various possibilities until they find the right one or they run out of time, money, or motivation.
You have no idea what possibilities they are trying. In your question, you say "what if all the years he was using the wrong algorithm?" and the only answer to that is "what if he wasn't?" You have no control over that. If you knew which possibilities the attacker was going to try, you could just pick anything not on their list as your secret, and the security problem would be trivially solved.
What you can do, though, is roughly estimate how many possibilities they can try before running out of time and/or money, based on the state of computing technology. This assumes that they don't secretly have access to technology that the rest of the world doesn't, such as quantum computing or a backdoor in AES, which is probably a safe assumption since they would have better things to do in that case than try to crack your password. (Cf. Cut Lex Luthor a Check, though see also this rebuttal.)
You can also prove the following result: if you choose your secret uniformly at random (using a high quality RNG) from n possibilities, and the attacker tries k possibilities, no matter what they are, the chance that they'll guess your secret is at most k/n.
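A quick Monte Carlo check of this bound, with a fixed seed so the run is reproducible:

```python
import random

# Secret chosen uniformly from n values; the attacker tries a fixed set of
# k guesses. The success probability is exactly k/n, no matter which k
# values the attacker picks.
random.seed(1)
n, k, trials = 10_000, 250, 200_000
guesses = set(range(k))  # any k distinct values give the same probability
hits = sum(random.randrange(n) in guesses for _ in range(trials))
print(f"empirical {hits / trials:.4f} vs k/n = {k / n:.4f}")
```

The empirical rate lands on k/n = 0.025 to within sampling noise, and this holds however cleverly the attacker chooses the guess set.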
The nice thing is that n grows exponentially with the amount of information you have to store/remember, whereas k grows only linearly with the amount of time/money they spend, so it's not hard to make k/n very small.
So, you should choose your secret uniformly at random from a large set of possibilities. A random 256-bit symmetric key is chosen uniformly from a set of size 2^256, which is (far more than) large enough.
You can pick randomly from a bag of (algorithm,key) pairs as well, but it's pointless because any single algorithm already offers plenty of choices.
You can pick an obscure algorithm and hope that the attacker won't try it, but that's not picking at random any more, and therefore you can't prove that it helps at all. If there were no other options then this would be better than nothing, but there are other options.
This is the fundamental reason that cryptographers advise you to treat only the key as your secret: there are plenty of keys and keys are the easiest thing to choose at random. You don't need anything else.
Your Answer
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "162"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fsecurity.stackexchange.com%2fquestions%2f202534%2fdoesnt-the-choice-of-encryption-algorithm-add-entropy-by-itself%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
7 Answers
7
active
oldest
votes
7 Answers
7
active
oldest
votes
active
oldest
votes
active
oldest
votes
If you’re designing a cryptosystem, the answer is No. Kerckhoffs's principle states “A cryptosystem should be secure even if everything about the system, except the key, is public knowledge.” Restated as Shannon's maxim, that means “one ought to design systems under the assumption that the enemy will immediately gain full familiarity with them.”
Making the assumption that the attacker won’t learn your algorithm is security through obscurity, an approach to security that is considered inadequate.
Relying on the attacker to not know the algorithm won’t add any work on his or her end, because according to Kerckhoff, he or she either knows it, or can be reasonably expected to find out. If it adds no uncertainty it adds no entropy. And their capabilities are not something you can quantify.
In the case of a lost cryptosystem, like you describe, there is usually enough historic or statistical information to determine the nature of the algorithm (if not the key itself.) But you can’t design a system under the assumption that it will be lost as soon as it’s used. That’s OpSec, not cryptography.
EDIT Comments have mentioned using algorithm selection as a part of the key. The problem with this approach is that the algorithm selection must necessarily be determined prior to the decryption of the data. This is exactly how protocols such as TLS work today.
If you’re truly looking to mix algorithms together and use a factor of the key to determine things like S-box selection, etc., you’re effectively creating a singular new algorithm (adopting all the well-known risks that rolling your own algorithm entails.) And if you’ve created a new algorithm, then all of the bits of the key are part of that entropy computation. But if you can point out specific bits that determine algorithm instead of key material, you still have to treat them as protocol bits and exclude them.
If you’re designing a cryptosystem, the answer is No. Kerckhoffs's principle states “A cryptosystem should be secure even if everything about the system, except the key, is public knowledge.” Restated as Shannon's maxim, that means “one ought to design systems under the assumption that the enemy will immediately gain full familiarity with them.”
Making the assumption that the attacker won’t learn your algorithm is security through obscurity, an approach to security that is considered inadequate.
Relying on the attacker not knowing the algorithm won’t add any work on his or her end because, per Kerckhoffs, he or she either knows it or can reasonably be expected to find out. If it adds no uncertainty, it adds no entropy. And their capabilities are not something you can quantify.
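To put a number on that: even if the choice somehow did stay secret, a uniformly random pick among n candidate algorithms contributes at most log2(n) bits. A quick sketch (illustrative only):

```python
import math

def choice_entropy_bits(n: int) -> float:
    """Entropy (in bits) contributed by a secret, uniformly random choice among n options."""
    return math.log2(n)

# Even a generous pool of 8 secret 256-bit ciphers adds only 3 bits
# on top of the key's 256 -- negligible against brute force.
print(choice_entropy_bits(8))        # 3.0
print(256 + choice_entropy_bits(8))  # 259.0
```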
In the case of a lost cryptosystem, like you describe, there is usually enough historic or statistical information to determine the nature of the algorithm (if not the key itself). But you can’t design a system under the assumption that it will be lost as soon as it’s used. That’s OpSec, not cryptography.
EDIT: Comments have mentioned using algorithm selection as a part of the key. The problem with this approach is that the algorithm selection must necessarily be determined prior to the decryption of the data, which makes it protocol negotiation rather than key material. This is exactly how protocols such as TLS work today: the cipher suite is agreed before any encrypted data flows.
If you’re truly looking to mix algorithms together and use a factor of the key to determine things like S-box selection, etc., you’re effectively creating a singular new algorithm (adopting all the well-known risks that rolling your own algorithm entails.) And if you’ve created a new algorithm, then all of the bits of the key are part of that entropy computation. But if you can point out specific bits that determine algorithm instead of key material, you still have to treat them as protocol bits and exclude them.
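If you did want to experiment with the idea anyway, here is a minimal sketch of what “selector bits plus key bits” means in practice (the algorithm names are placeholders, and this is an illustration of the bookkeeping, not a recommendation):

```python
import secrets

# Hypothetical: 8 candidate ciphers, so 3 selector bits on top of a 256-bit key.
ALGORITHMS = ["alg0", "alg1", "alg2", "alg3", "alg4", "alg5", "alg6", "alg7"]

def split_secret(secret: int):
    """Split a 259-bit secret into (algorithm name, 256-bit key).

    The top 3 bits are protocol bits that pick the algorithm; they are
    not key entropy for whichever single algorithm ends up being used.
    """
    selector = secret >> 256            # top 3 bits choose the algorithm
    key = secret & ((1 << 256) - 1)     # low 256 bits are the key material
    return ALGORITHMS[selector % len(ALGORITHMS)], key

secret = secrets.randbits(259)
alg, key = split_secret(secret)
```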
Regarding secrecy of the algorithms, your protocol may be secret today but if one of your agents is discovered and his system is copied, even if no keys are compromised the old messages are no longer using secret algorithms. Any “entropy” you may have ascribed to them is lost, and you may not even know it.
edited Jan 31 at 18:24
answered Jan 30 at 14:05
John Deters
27.6k24189
12
It's not security through obscurity. It's not "reliance on the secrecy of the design or implementation as the main method of providing security". You could use the same reasoning to argue that assuming the attacker won't find your private key is security through obscurity. It doesn't make much sense to worry about entropy added by the choice of algorithm, because you can get enough entropy from the key anyway.
– FINDarkside
Jan 30 at 14:32
9
@FINDarkside "You could use the same reasoning..." The difference being that – given proper key management – an attacker should never be able to work out your private key. However, you've got to assume that they have the opportunity to discover more and more of any implementation details, for example through reverse-engineering the code.
– TripeHound
Jan 30 at 16:19
3
@FINDarkside, the querent is asking if the secrecy of the implementation adds entropy. In other words, he is specifically asking if security through obscurity should be factored into the equation. The answer is no, partly because there is no way to quantify the attacker’s capabilities in this regard. You are correct that all the entropy must originate from the key.
– John Deters
Jan 30 at 17:44
4
There is energy required to obtain information about the algorithm, be it human intelligence, reverse engineering, or social engineering; you can factor an estimate of this energy into the average energy required to conceivably circumvent your algorithm once they have that knowledge. Perhaps in certain contexts this can be used to make a security judgement in an organization. From an information-science standpoint, as you point out, this is out of scope for cryptography and not quantifiable.
– crasic
Jan 30 at 19:09
3
@FINDarkside, I think I see where you’re coming from. You are considering a “lost message” scenario, one that has no context or information about it. You are also considering secrecy, perhaps even to avoid detection. Those are attributes of OpSec, and absolutely can help increase overall security. But as they aren’t mathematically determined, they can’t be factored into an entropy calculation. Think of it this way: your overall security includes your algorithm’s entropy as a component, but overall security isn’t measured by entropy – it’s measured by risk.
– John Deters
Jan 31 at 20:36
In practical terms, no, as John's answer neatly explains.
Hypothetically, if you had enough secure encryption methods to choose from, you could potentially select one method at random and use it to encrypt the data using – for example – a 256-bit key. The choice of algorithm used would need to be "added" to the key and become part of the "not to be revealed secret" (taking the combined entropy to 259 bits if there were eight encryption algorithms to choose between).
The problems with doing this include:
Only a small number of bits are added: eight algorithms add only three bits of entropy. To add eight bits (for a total of 264 bits with a 256-bit key) would require 256 different encryption algorithms. Finding enough secure algorithms to make a practical difference is almost certainly much harder than simply extending the key length of a single, known-to-the-attacker algorithm.
You have to "extend" the key with the choice of algorithm: this means passing the choice to the "user" to be "remembered" alongside the normal key. This greatly complicates the process of key-management. Storing the choice in the encrypted data is a non-starter, since an attacker with "total knowledge" would be able to find the information and know which algorithm to use.
If any of the algorithms chosen leave some kind of "fingerprint" that allows an attacker to identify the algorithm used (or at least reduce the range of possible algorithms), then this will (partially) nullify the extra bits of entropy.
All in all, it is much easier to extend the length of the key used and not worry that an attacker knows the encryption method.
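One concrete reason the "fingerprint" problem bites: most real container formats identify the cipher in a plaintext header anyway. A toy illustration (hypothetical format with made-up ID bytes):

```python
# Hypothetical container format: the first byte names the cipher, so the
# "secret" algorithm choice is visible to anyone holding the ciphertext.
CIPHER_IDS = {0x01: "AES-256-GCM", 0x02: "ChaCha20-Poly1305"}

def identify(blob: bytes) -> str:
    """Return the cipher named by the (plaintext) header byte."""
    return CIPHER_IDS.get(blob[0], "unknown")

blob = bytes([0x01]) + b"...ciphertext..."
print(identify(blob))  # AES-256-GCM
```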
answered Jan 30 at 14:49
TripeHound
65657
3
To make things worse, it's necessary to come up with the bits used in the algorithm selection somehow. If one has an algorithm which can use a 256-bit key and one genuinely has 256 bits of entropy, the probability of compromise from insufficient entropy will be essentially zero. If one doesn't have 256 bits of real entropy, taking entropy that could have been used for key generation and using it for algorithm selection won't improve anything.
– supercat
Jan 30 at 19:33
I suppose if you want to be really pedantic about the question, you could allow users to upload their own algorithms rather than choose from a predefined list. But the downsides associated with arbitrary code execution, the fact that most users would not choose a secure algorithm, and the increased difficulty in the user managing this obviously outweigh any benefits.
– jpmc26
Jan 31 at 17:35
One interpretation of Kerckhoffs's principle is that the information that must be kept secret comprises the key, which is what we're saying here. So a choice of algorithm from a fixed set of 8 requires an extra 3 bits in your key store. A choice of algorithm that you've coded adds a much bigger amount to the key store (at least the Shannon entropy of the code). It's not very good value-for-bits.
– Toby Speight
Feb 1 at 11:18
The answers from @John Deter and @TripeHound explain things very well, but I wanted to give an example that would put Kerckhoffs's principle in context. It's natural to approach these questions from the perspective of an outside attacker, but that's not the only relevant threat model. In fact, roughly half of all data breaches start with insiders (employees, contractors, etc.), through a mix of both accidental and intentional leaks.
More realistic Threat Vectors
Having a hidden encryption algorithm may help against an outside attacker if they can't easily deduce what system you used. However, it provides absolutely no additional protection against an inside attacker who has access to your code. As an extreme example, if your system keeps critical Personally Identifiable Information (PII) in your database, but your production database credentials, encryption algorithms, and encryption keys are all stored directly in your code repository, then you have effectively given everyone with access to your code repository access to all of your customers' PII.
Of course you don't want to do that, so you keep production systems segregated from everyone except admins, you keep encryption keys stored in a separate key management system accessible (as much as possible) only to the application, and so on. Your developers know what encryption algorithms are used (because they can see it in the code repository), but they don't have access to the production database, and even if they did get read access to the database they wouldn't have the keys to decrypt the data that is there.
Applying Kerckhoffs's Principle
That's the whole point of Kerckhoffs's principle - the only thing that you have to keep a secret is the actual secret (aka your encryption key). Everything else can be known by everyone and you're still secure. Which is good, because keeping just one secret is hard enough. Trying to devise a system that hides not just the keys but also the encryption algorithms and other details from as many people as possible is quite a bit harder and more prone to failure.
In short, people are bad at keeping secrets. As a result, designing your system so you have fewer secrets to keep actually makes you more secure, even if that seems counterintuitive. After all, what you suggest makes sense on some level: why should we only encrypt our data? Let's hide the encryption method too and be extra secure! In practice, though, hiding more things gives you more room to make mistakes and a false sense of security. It is much better to use an effective encryption method that makes keeping secrets as simple as possible - hide the key and the message is secure.
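The "keep the key out of the code repository" pattern described above can be sketched in a few lines. This is a minimal illustration, not a recommendation: the variable name `DATA_KEY_HEX` and the 256-bit key length are assumptions, and a real deployment would fetch the key from an HSM or cloud key management service rather than a plain environment variable.

```python
import os

def load_data_key() -> bytes:
    """Fetch the data-encryption key at runtime instead of hard-coding it.

    The key never appears in the code repository; only the production
    environment (ideally backed by a key management service) knows it.
    DATA_KEY_HEX is a hypothetical variable name for this sketch.
    """
    key_hex = os.environ.get("DATA_KEY_HEX")
    if key_hex is None:
        raise RuntimeError("encryption key not provisioned in this environment")
    key = bytes.fromhex(key_hex)
    if len(key) != 32:  # expect a 256-bit key
        raise ValueError("expected a 256-bit (32-byte) key")
    return key
```

Developers can read this code freely; without access to the production environment, it tells them nothing about the key itself.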
answered Jan 30 at 20:06
Conor Mancone
Abstractly, if there were 2^n encryption schemes that were exactly equally hard to break, and had the same space of possible keys, then sure, you could define a new encryption scheme as "randomly pick one of these 2^n schemes" and effectively consider those n bits to be added to the key.
But in practice, even if this were possible, that's a lot of unnecessary complexity when you could instead just pick a single algorithm and make the key a bit longer.
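A quick sanity check of the arithmetic in this answer: a secret choice among 2^n equally strong, indistinguishable schemes multiplies the attacker's work by exactly the same factor as n extra key bits. The figure of 8 candidate ciphers below is a made-up illustration.

```python
import math

key_bits = 256
num_schemes = 8  # hypothetical count of equally strong, indistinguishable ciphers
n = int(math.log2(num_schemes))  # the secret scheme choice adds at most n bits

# Work factor with a secret scheme choice...
work_secret_scheme = num_schemes * 2 ** key_bits
# ...equals the work factor of one public scheme with an n-bit-longer key.
work_longer_key = 2 ** (key_bits + n)

assert work_secret_scheme == work_longer_key
print(f"a secret choice among {num_schemes} schemes is worth {n} extra key bits")
```

Three extra bits on top of 256 is negligible, which is the point: just lengthen the key instead.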
answered Jan 31 at 16:32
Christoph Burschka
If you just encrypt something with one algorithm and pretend another one was used, Kerckhoffs's principle applies as pointed out in other answers, so that's kinda useless, at least against an attacker with knowledge about your implementation. It will still "work" against an attacker who e.g. steals the encrypted file from your OneDrive or whatever cloud store without knowing anything about it.
If you require the choice of encryption algorithm to be included in the decryption process (i.e. your tool chooses one out of several algorithms, depending on what you input) then you effectively add bits to your key length. Kerckhoffs's principle does not apply here. It is however kinda hard to add a significant number of bits -- one would need many algorithms to choose from -- and it doesn't make any sense (see the last paragraph).
In either case, whatever you want to do is pretty much pointless. The assumption that someone's great-great-grandchildren may have your data if they invest a couple of billions in equipment and electricity bills is arguably true for keys in the 90-100 bit range. Realistically, though, nobody (not even the NSA) would do that: the cost-benefit ratio just isn't there. Social engineering, or torture followed by murder, is a much cheaper, faster, and more practical approach.
For anything noticeably larger than 110 or so bits, a brute-force attack isn't realistic even if you neglect the cost-benefit ratio. You should be more worried about backdoors built into AES, which is more likely than you seeing a single 128-bit key broken by brute force during your lifetime.
Now, the mere idea of cracking a 256-bit key via brute force is outright ridiculous, and the idea of adding bits on top of that is nonsensical; it makes exactly zero difference. Impossible doesn't get any better than "impossible".
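The "110 or so bits" threshold can be checked with a back-of-the-envelope sketch. The attacker capabilities assumed here - a billion machines, each testing a trillion keys per second - are deliberately generous, made-up numbers, not a claim about any real adversary.

```python
# Assumed (generous, hypothetical) aggregate brute-force capability.
machines = 10 ** 9
keys_per_second_per_machine = 10 ** 12
seconds_per_year = 60 * 60 * 24 * 365

def years_to_exhaust(key_bits: int) -> float:
    """Years needed to try every key at the assumed aggregate rate."""
    total_keys = 2 ** key_bits
    rate = machines * keys_per_second_per_machine
    return total_keys / rate / seconds_per_year

for bits in (64, 100, 128, 256):
    print(f"{bits:3d}-bit key: {years_to_exhaust(bits):.3e} years")
```

Even at this absurd rate, 64-bit keys fall in under a second, 100-bit keys take decades, and 128-bit keys already take billions of years, so 256 bits is far beyond any physical possibility.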
1
I don't really see what this adds to any of the other answers that were already given. Could you elaborate?
– Tom K.
Jan 31 at 14:31
@TomK: If nothing else, then the fact that even thinking about brute-forcing a 256 bit key or the hypothetical consequences of that is absurd (or adding to a 256 bit key). Which nobody else seems to consider at all. Thinking about the possibility of brute forcing a 128 bit key is absurd enough already, though it may be physically possible.
– Damon
Jan 31 at 16:43
answered Jan 31 at 13:38
Damon
I think the OP's question demonstrates insight, and the answer is, at least in theory, yes it does. There is something here. I think that's the first point that should be made.
The responses given are mostly of the view: in practice it doesn't work like that.
These aren't wrong, but I think they miss the validity/interest of the OP's point.
The way I reason this:
From a theoretical black-box point of view, the choice between 2 encryption systems is analogous to the choice of the first bit of the key. In fact they really are the same thing (if you add the bit back). In a black box, there really is nothing special about the key. Keys are just a good way of enumerating your options of which encryption transformation you want to use.
To see this:
Say I make a new variant of AES-128; let's call it JES_0_128. The way this works is: I prepend a binary encoding of 0 (in this case 128 zeros) to the key supplied and use the result in (standard) AES-256. Then I make another one called JES_1_128 with an encoding of 1, and so on, all the way up to JES_(whatever 2^128 - 1 is in base 10)_128. All of these are perfectly valid 128-bit-key encryption algorithms. But if you don't know which one was used... it's a 256-bit-key encryption algorithm. AES-256, to be precise. Which is indeed a lot more entropy.
The difference the other answers point out is that in practice a key is a really good way of picking which of the 2^256 AES-256 encryption transformations to use.
It's flexible, well understood, and leaves the generating and trusting of the mutual secret to the users. Why use anything else?
On the other hand, picking one of the handful of 256-bit families of encryption algorithms and hard-coding it is not a very good way, even relative to the very small increase in effective key size. You may as well just tell everyone. From a practical, software-writing point of view it is not at all safe to rely on this being kept from an attacker of any interest. There are a host of reasons why, not least that, if an attacker had a copy of your software, it would be easy to test which algorithm you picked. But these are 'just' practical considerations...
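The JES construction is easy to make concrete. The sketch below only builds the key, since the actual encryption would just be standard AES-256; the JES naming is the hypothetical from this answer, not a real cipher family.

```python
import os

def jes_key(variant_index: int, key_128: bytes) -> bytes:
    """AES-256 key for the hypothetical JES_<variant_index>_128 cipher:
    a fixed 128-bit big-endian encoding of the variant index, prepended
    to the supplied 128-bit key."""
    if not 0 <= variant_index < 2 ** 128:
        raise ValueError("variant index must fit in 128 bits")
    if len(key_128) != 16:
        raise ValueError("expected a 128-bit (16-byte) key")
    return variant_index.to_bytes(16, "big") + key_128  # 32 bytes total

# JES_0_128 prepends 128 zero bits:
k = os.urandom(16)
assert jes_key(0, k) == b"\x00" * 16 + k
# An attacker who doesn't know the variant is missing 128 key bits:
assert len(jes_key(1, k)) == 32
```

Not knowing which JES variant was used is, byte for byte, the same as not knowing the first half of an AES-256 key.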
add a comment |
I think OPs question demonstrates insight and the answer is, at least in theory, yes it does. There is something here. I think that's the first point that should be made.
The responses given are mostly of the view: in practice it doesn't work like that.
These aren't wrong but I think they miss the validity/interest of OP's point.
The way I reason this:
From a theoretical black box point of view the choice between 2 encryption systems is analogous to the choice of the first bit of the key. In fact they really are the same thing (if you add the bit back). In a black box, there really is nothing special about the key. They are just a good way of enumerating your options of which encryption transformations you want to use.
To see this:
Say I make a new variant of AES128 lets call it JES_0_128. The way this works is: I add a a binary encoding of 0 (in this case 128 zeros) to the front of the key supplied and use this in (standard) AES256. Then I make another one called JES_1_128: an encoding of 1 etc all the way up to JES_(whatever 2^128 is in base 10)_128. All of these are perfectly valid 128 bit key encryption algorithms. But if you don't know which one... it's a 256 bit key encryption algorithm. AES256 to be precise. Which is indeed a lot more entropy.
The differences the other answers point out is that in practice a key is a really good way of picking which of the 2^256 AES-256 encryption algorithms to use.
It's flexible, well understood and leaves the generating and trusting of the mutual secret to the users. Why use anything else?
On the other hand, picking one of the handful of 256 bit families of encryption algorithms to use and hard coding it, is not a very good way. Even relative to a very small increase in key size. Or at all. You may as well just tell everyone. From a practical / writing software / type point of view this is not at all safe to rely in this being kept from an attacker of any interest. There are a host of reason why. Not least because if an attacker had a copy which value you picked would be easy to test which. But they are 'just' practical considerations...
add a comment |
I think OPs question demonstrates insight and the answer is, at least in theory, yes it does. There is something here. I think that's the first point that should be made.
The responses given are mostly of the view: in practice it doesn't work like that.
These aren't wrong but I think they miss the validity/interest of OP's point.
The way I reason this:
From a theoretical black box point of view the choice between 2 encryption systems is analogous to the choice of the first bit of the key. In fact they really are the same thing (if you add the bit back). In a black box, there really is nothing special about the key. They are just a good way of enumerating your options of which encryption transformations you want to use.
To see this:
Say I make a new variant of AES128 lets call it JES_0_128. The way this works is: I add a a binary encoding of 0 (in this case 128 zeros) to the front of the key supplied and use this in (standard) AES256. Then I make another one called JES_1_128: an encoding of 1 etc all the way up to JES_(whatever 2^128 is in base 10)_128. All of these are perfectly valid 128 bit key encryption algorithms. But if you don't know which one... it's a 256 bit key encryption algorithm. AES256 to be precise. Which is indeed a lot more entropy.
The differences the other answers point out is that in practice a key is a really good way of picking which of the 2^256 AES-256 encryption algorithms to use.
It's flexible, well understood and leaves the generating and trusting of the mutual secret to the users. Why use anything else?
On the other hand, picking one of the handful of 256 bit families of encryption algorithms to use and hard coding it, is not a very good way. Even relative to a very small increase in key size. Or at all. You may as well just tell everyone. From a practical / writing software / type point of view this is not at all safe to rely in this being kept from an attacker of any interest. There are a host of reason why. Not least because if an attacker had a copy which value you picked would be easy to test which. But they are 'just' practical considerations...
I think OPs question demonstrates insight and the answer is, at least in theory, yes it does. There is something here. I think that's the first point that should be made.
The responses given are mostly of the view: in practice it doesn't work like that.
These aren't wrong but I think they miss the validity/interest of OP's point.
The way I reason this:
From a theoretical black box point of view the choice between 2 encryption systems is analogous to the choice of the first bit of the key. In fact they really are the same thing (if you add the bit back). In a black box, there really is nothing special about the key. They are just a good way of enumerating your options of which encryption transformations you want to use.
To see this:
Say I make a new variant of AES128 lets call it JES_0_128. The way this works is: I add a a binary encoding of 0 (in this case 128 zeros) to the front of the key supplied and use this in (standard) AES256. Then I make another one called JES_1_128: an encoding of 1 etc all the way up to JES_(whatever 2^128 is in base 10)_128. All of these are perfectly valid 128 bit key encryption algorithms. But if you don't know which one... it's a 256 bit key encryption algorithm. AES256 to be precise. Which is indeed a lot more entropy.
The difference the other answers point out is that, in practice, a key is a really good way of picking which of the 2^256 AES-256 encryption algorithms to use.
It's flexible, well understood and leaves the generating and trusting of the mutual secret to the users. Why use anything else?
On the other hand, picking one of the handful of 256-bit families of encryption algorithms and hard-coding it is not a very good way, even relative to a very small increase in key size. Or at all. You may as well just tell everyone. From a practical, software-writing point of view it is not at all safe to rely on this choice being kept from any attacker of interest. There are a host of reasons why, not least that if an attacker had a copy of your software, it would be easy to test which value you picked. But these are 'just' practical considerations...
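For scale, a rough back-of-the-envelope comparison of the two sources of secrecy (the count of eight candidate ciphers is an assumption borrowed from the comments, not a survey):

```python
import math

n_ciphers = 8                      # assume ~8 plausible 256-bit ciphers to pick from
algo_bits = math.log2(n_ciphers)   # 3.0 bits from a perfectly hidden algorithm choice
key_bits = 256                     # bits from the key itself

# Even if the choice stayed secret, it only raises the attacker's
# search space from 2^256 to 2^259 -- negligible next to the key.
print(algo_bits, key_bits + algo_bits)  # 3.0 259.0
```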
answered Jan 31 at 23:56
drjpizzle
I think this is a good way to look at it:
You have a secret, which may be a 256-bit key, or a password from which you derive that key, or either of those plus other information like which encryption algorithm you used.
The attacker wants to guess your secret. They do this by trying various possibilities until they find the right one or they run out of time, money, or motivation.
You have no idea what possibilities they are trying. In your question, you say "what if all the years he was using the wrong algorithm?" and the only answer to that is "what if he wasn't?" You have no control over that. If you knew which possibilities the attacker was going to try, you could just pick anything not on their list as your secret, and the security problem would be trivially solved.
What you can do, though, is roughly estimate how many possibilities they can try before running out of time and/or money, based on the state of computing technology. This assumes that they don't secretly have access to technology that the rest of the world doesn't, such as quantum computing or a backdoor in AES - which is probably a safe assumption since they would have better things to do in that case than try to crack your password. (Cf Cut Lex Luthor a Check, though see also this rebuttal.)
You can also prove the following result: if you choose your secret uniformly at random (using a high quality RNG) from n possibilities, and the attacker tries k possibilities, no matter what they are, the chance that they'll guess your secret is at most k/n.
The nice thing is that n grows exponentially with the amount of information you have to store/remember, whereas k grows only linearly with the amount of time/money they spend, so it's not hard to make k/n very small.
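The k/n bound can be checked with a quick simulation. The specific n, k, and trial count here are arbitrary illustrative choices:

```python
import random

def attack_success_rate(n: int, k: int, trials: int = 100_000) -> float:
    """Estimate the chance that an attacker trying k fixed guesses hits a
    secret drawn uniformly at random from n possibilities."""
    guesses = set(range(k))  # which particular k values are tried doesn't matter
    hits = sum(random.randrange(n) in guesses for _ in range(trials))
    return hits / trials

# With n = 1000 and k = 10 the bound k/n is 0.01; the estimate lands near it.
print(attack_success_rate(1000, 10))
```

Because the secret is uniform, no guessing strategy can beat k/n; the set of guesses in the sketch could be any k values with the same result.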
So, you should choose your secret uniformly at random from a large set of possibilities. A random 256-bit symmetric key is chosen uniformly from a set of size 2^256, which is (far more than) large enough.
You can pick randomly from a bag of (algorithm,key) pairs as well, but it's pointless because any single algorithm already offers plenty of choices.
You can pick an obscure algorithm and hope that the attacker won't try it, but that's not picking at random any more, and therefore you can't prove that it helps at all. If there were no other options then this would be better than nothing, but there are other options.
This is the fundamental reason that cryptographers advise you to treat only the key as your secret: there are plenty of keys and keys are the easiest thing to choose at random. You don't need anything else.
answered Feb 3 at 18:31
benrg
Couldn't you use a side-channel attack to discover whether it was a symmetric-key or public-key algorithm?
– EJoshuaS
Jan 30 at 21:01
1
There is one case where this is helpful: the encryption scheme is created by you and you never revealed the design, though that is hard to achieve. Then it is almost impossible to break.
– kelalaka
Jan 30 at 21:34
15
@kelalaka if you or I (or anybody with less than several decades in the field with ample peer review) invented our own encryption scheme, it is practically assured that it would be so weak it would get cracked in dozens of ways which are waaay faster than bruteforcing.
– Matija Nalis
Jan 31 at 0:30