I would like to make it absolutely clear that the only person I have emailed on the subject discussed on this page is the utility's author, and I only asked for information on how his cipher worked, and why it was more secure than other methods – no more.

I did however also relay the URL to this page to a Debian-related channel.

]]>@Clive Robinson:

“Then you have to ask yourself a question: is it possible to put an incompressible Ptext into a cipher function and obtain a Ctext output with compressible redundancy?

The answer is yes; think back to the feedback system above using an IV of all zeros: what happens if you carry on encrypting all zeros?”

BTW, in this case he means p_n=0 for all n. At first I couldn’t tell if he meant this or if he meant that the plaintext was chosen so that the input to the block cipher was 0 each time, but that makes even less sense when you work it out, since the output is always a constant, so the ciphertext is either a repeated (constant) block or 0.

“You should (assuming it’s working properly) get an output where the encryption function steps through every possible output in some apparently random order (actually dependent on the key). As there are no repeats of the output blocks, there is no redundancy at the block size, so it is incompressible at that level.”

c_1 = e(IV), c_n = e(c_(n-1))

In any mode that uses feedback (CBC, CFB, OFB), is that really guaranteed to step through every value? If so, it must rely on some property of the block cipher that I’m not thinking of. It seems to me that it could just as easily slip into a short cycle; if e(IV) = x and e(x) = IV, those two blocks repeat over and over. What you’re describing sounds a lot like CTR mode, which doesn’t use feedback at all. What makes you think iterating the block cipher traces out one full-length cycle rather than a short one?
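To illustrate the worry at toy sizes: a random permutation of even a tiny block space typically splits into several cycles, so iterating it from a starting point (the IV) revisits that point long before covering every value. A minimal sketch, with an arbitrary 256-element "block space" and seed standing in for a keyed cipher:

```python
# Iterate a random permutation of a tiny "block space" and measure how long
# the cycle containing each starting point is. A real cipher's space is
# 2^128; this is just 256, but the cycle structure behaves the same way.
import random

rng = random.Random(42)
N = 256
perm = list(range(N))
rng.shuffle(perm)            # a random permutation, standing in for e under one key

def cycle_length(start):
    x, steps = perm[start], 1
    while x != start:        # walk until we return to where we began
        x = perm[x]
        steps += 1
    return steps

lengths = sorted({cycle_length(s) for s in range(N)})
print(lengths)               # usually several cycles, most far shorter than N
```

A single full-length cycle (the behaviour the quoted comment assumes) happens with probability only 1/N for a random permutation, so iterating from an IV almost always covers just a fraction of the space.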

“Then as the encryption function is invertible (ie you can decrypt it) you can simply put the non-compressed output back into the encryption block and get an output without information (ie completely redundant).”

What’s an “encryption block”? The block cipher? I’m lost.

In any case, redundancy doesn’t go away when you encrypt; it just gets obscured. I assume that’s what he is trying to say. If my stream is the concatenation of the Bible encrypted with a counter as the key, you may have difficulty detecting that it is redundant, unless you happen to try decrypting it with a counter.

And compression algorithms test for *one kind* of redundancy. In the case of n-byte sliding-window algorithms, you just make a repeating sequence with a period of n+1 bytes and no internal repetition, and the compression function won’t find it. Similar redundant inputs can be constructed for each individual compression algorithm. So by all means use them to check your sanity, but don’t expect them to find every kind of redundancy, and don’t expect them to only yield smaller outputs; that is impossible, by the pigeon-hole principle. What makes them useful is that they detect the kinds of redundancy that are present in the data sets you are concerned with.
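The sliding-window failure is easy to demonstrate with zlib, whose DEFLATE window size is adjustable via the `wbits` parameter (2^9 = 512 bytes up to 2^15 = 32 KiB). A sketch, with arbitrary period and repeat counts:

```python
# Heavily redundant data whose period exceeds a small DEFLATE window:
# the small-window compressor cannot reach the repeats, the big one can.
import os
import zlib

period = 600                         # longer than a 512-byte window
data = os.urandom(period) * 50       # one random block repeated 50 times

def deflate(buf, wbits):
    comp = zlib.compressobj(9, zlib.DEFLATED, wbits)
    return comp.compress(buf) + comp.flush()

small = deflate(data, 9)             # 512-byte window: repeats out of reach
large = deflate(data, 15)            # 32 KiB window: repeats found easily
print(len(data), len(small), len(large))
```

The same input is "incompressible" to one tool and massively compressible to another, which is exactly why a compression test is evidence about one specific redundancy detector, not about redundancy in general.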

Also, most compression routines create PREDICTABLE bits and LARGER outputs up to some break-even point. For example, in Huffman coding, the “enter a new symbol” bit is predictable on the first byte (if not assumed) and, unless the second input byte is the same as the first, it is predictable again (if plaintexts are randomly generated, this happens 255/256ths of the time).
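A related effect is easy to check with zlib (not the exact Huffman bookkeeping described above, but the same break-even phenomenon): headers and bookkeeping mean that short or patternless inputs come out larger than they went in.

```python
# Compress short random inputs: below the break-even point the output is
# LARGER than the input, because overhead dominates and there is no
# exploitable redundancy.
import os
import zlib

results = {}
for n in (1, 2, 8, 64):
    data = os.urandom(n)             # no usable redundancy
    results[n] = len(zlib.compress(data, 9))
    print(n, "->", results[n])       # output length exceeds n
```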

Testing for redundancy is like testing for predictable patterns in the output of a HWRNG; the statement “there are no predictable patterns in this output” is a universal statement (the negation of an existential quantifier) and so cannot be established rigorously by testing, short of testing every possible prediction algorithm. Usually, the negation of a statement is a universal (because hypotheses tend to be existential), so when people say “you can’t prove a negative”, they really mean “you can’t prove a universal with individual (non-)existence evidence”.

Actually, I *can* prove negatives; I can prove the negation of “all elephants are dead” by finding a live one. If I say “measurable gravity exists everywhere”, that’s not a negative, but good luck trying to prove it with a gravimeter and a space ship.

Proof by contradiction is often the only hope for proving universals over infinite sets.

What does this all mean for crypto? It means that the claim “there is no shortcut for computing the inverse of this cipher that is more efficient than brute force” is not going to get proven. It means that you can’t test for unpredictability in an RNG. It means that you can’t test for incompressibility. That’s what keeps it interesting…. 😉

]]>I don’t know of one.

]]>I quote:

“The Snake Oil prevents clear code and other attacks by producing fake code to trick hackers into false sense of security and from the reviews over the years, my Snake Oil is working like a charm. Although I’m still in the doghouse, I know my next release will be the cat’s mellow!”

]]>“IOW, if you can compress the resulting final ciphertext, your cipher is weak.”

That is an unfortunate case of a rule of thumb becoming a law. And, as it turns out, it is actually not true. It is based on the unfortunate assumption that redundancy and compressibility are the same thing. They are not: compressibility is a possible consequence of redundancy, but redundancy in no way implies compressibility.

To start off,

1) if the plain text is of normal usage (ie it contains more than one bit of information) it contains (often considerable) redundancy.

2) if the plain text message (Ptext) is of normal usage and suitably long then for most practical systems the resulting cipher text (Ctext) is of the same length as the original message (or slightly longer).

3) where there is redundancy in the form of repeated fixed-length symbols it is possible to compress the message by using variable-length symbols and a weighting algorithm.

Therefore simple logic dictates that if the Ptext message can be recovered from a Ctext of the same length, then the Ctext must also contain the same level of redundancy.

Again by logic, the contrapositive (as the texts are the same length) is: if the Ctext does not contain the same level of redundancy, then the Ptext message cannot be recovered.

This is fairly easily seen if you, say, take a large program listing or executable (better, as its instruction size is usually a sub-multiple of the cipher block size) and put it through DES or AES in codebook (ECB) mode. Simple observation will show you there is redundancy in the output, due to output blocks having the same value. So it is possible to compress the output….
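The codebook leak can be shown with any deterministic block encryption; a minimal sketch using a toy 4-byte Feistel construction (for illustration only, NOT a secure cipher; the key and "instruction" blocks are made up):

```python
# A toy 4-byte Feistel "cipher" driven in codebook (ECB) mode: identical
# plaintext blocks encrypt to identical ciphertext blocks, which is the
# visible redundancy an ECB-encrypted executable exhibits.
import hashlib

def round_fn(half, key, rnd):
    # keyed round function: hash of key, round number, and half-block
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:2]

def encrypt_block(block, key):       # block: 4 bytes
    left, right = block[:2], block[2:]
    for rnd in range(8):
        left, right = right, bytes(a ^ b
                                   for a, b in zip(left, round_fn(right, key, rnd)))
    return left + right

key = b"toy key"
pt_blocks = [b"MOVx", b"ADDy", b"MOVx"]      # a repeated "instruction"
ct_blocks = [encrypt_block(b, key) for b in pt_blocks]
print(ct_blocks[0] == ct_blocks[2])          # True: the repeat leaks through
```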

This is a problem that was well known even prior to the original publication of the DES specification.

There are a number of ways you may obfuscate the visible redundancy, but the usual method employed involves using a feed forward / back mechanism.

The simple case is: prior to each block (ptext) of the Ptext message being enciphered, it is mixed with the encrypted (ctext) block from the previous ptext block encryption. This gives rise to the problem of how you deal with the first block, where there is no previous block to provide feedback.

This is usually solved with Initialisation Vectors (IVs), where a block of ptext known to both parties (usually a string of zeros or some such) is encrypted under the same key. Therefore both parties can put it through the function to calculate the first feed forward / back block.
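A sketch of that feed forward / back scheme at toy scale: 1-byte blocks, with a keyed random permutation standing in for the block cipher (the key string and plaintext are arbitrary), the previous ctext byte XORed into each ptext byte, and the IV seeding the chain:

```python
# Toy CBC-style chaining over 1-byte blocks. E is a keyed permutation
# ("encrypt"), D its inverse ("decrypt"); prev carries the feedback.
import random

rng = random.Random("toy key")
E = list(range(256))
rng.shuffle(E)                       # "encrypt" table
D = [0] * 256
for i, v in enumerate(E):
    D[v] = i                         # "decrypt": the inverse table

def cbc_encrypt(pt, iv):
    prev, out = iv, []
    for p in pt:
        c = E[p ^ prev]              # mix with previous ctext, then encipher
        out.append(c)
        prev = c
    return bytes(out)

def cbc_decrypt(ct, iv):
    prev, out = iv, []
    for c in ct:
        out.append(D[c] ^ prev)      # decipher, then unmix
        prev = c
    return bytes(out)

iv = 0                               # the agreed "all zeros" IV
ct = cbc_encrypt(b"AAAAAAAA", iv)    # repeated plaintext blocks
print(ct, cbc_decrypt(ct, iv))       # chaining obscures the repetition
```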

Hopefully when you observe the output of such a system you will not be able to identify the redundancy by simple observation. However it has not gone away, even though there are no repeated symbols that can be compressed.

Then you have to ask yourself a question: is it possible to put an incompressible Ptext into a cipher function and obtain a Ctext output with compressible redundancy?

The answer is yes; think back to the feedback system above using an IV of all zeros: what happens if you carry on encrypting all zeros?

You should (assuming it’s working properly) get an output where the encryption function steps through every possible output in some apparently random order (actually dependent on the key). As there are no repeats of the output blocks, there is no redundancy at the block size, so it is incompressible at that level.

Then as the encryption function is invertible (ie you can decrypt it) you can simply put the non-compressed output back into the encryption block and get an output without information (ie completely redundant).

Therefore there will always be some input to a block cipher in feedback mode that is not compressible but that, at the output of the function, will be fully compressible. In fact there are as many as there are IVs, which means there are as many as there are block values: for AES with 128-bit blocks there are 2^128 incompressible messages that will give output blocks that are all the same value (fully compressible).

You can follow the logic down to show that at any point in the Ptext message it is possible to have the start of a string that provides compressible output after it is enciphered.
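The existence claim can be checked at toy scale by running the chain backwards: pick a constant target ciphertext block c, and build the plaintext as p_n = D(c) XOR c_(n-1). A sketch with the 1-byte chaining toy from above (key string, IV, and the 0x5A target are arbitrary; note that at 1-byte block size the constructed plaintext is itself mostly repetitive, whereas for a 128-bit cipher the corresponding preimage would look random):

```python
# Construct a plaintext whose toy-CBC ciphertext is one repeated block
# (fully redundant), by inverting each chaining step.
import random

rng = random.Random("toy key")
E = list(range(256))
rng.shuffle(E)                       # keyed permutation, standing in for AES
D = [0] * 256
for i, v in enumerate(E):
    D[v] = i                         # its inverse

iv, target = 0, 0x5A                 # want every ciphertext byte == 0x5A
pt, prev = [], iv
for _ in range(16):
    pt.append(D[target] ^ prev)      # p_n = D(c) XOR c_(n-1)
    prev = target

ct, prev = [], iv                    # re-encrypt forwards to verify
for p in pt:
    c = E[p ^ prev]
    ct.append(c)
    prev = c
print(bytes(ct).hex())               # sixteen repeats of "5a"
```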

Finally, just to put the nail in the coffin as it were: what if your compression function does not work at a fixed block size, but a variable block size, and also has an adaptive algorithm? What is the probability that it will compress a Ctext message from a Ptext message with redundancy?

]]>IOW, if you can compress the resulting final ciphertext, your cipher is weak.

]]>