How to use proof of lack of knowledge?
This is a purely hypothetical example, but is provable ignorance useful in cryptography?
For example, say I have a trapdoor collision-resistant function $f$. I know the trapdoor and therefore some $x_0 \neq x_1$ such that $f(x_0) = f(x_1)$; such a pair is, however, hard to find without the trapdoor. If someone proves they know $x_0$, I can conclude that they do not know $x_1$.
Is there any context where more complicated versions of such problems are useful?
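(For concreteness, a standard example of such a function is the discrete-log "chameleon hash" $H(m, r) = g^m h^r \bmod p$ with $h = g^x$, where the exponent $x$ is the trapdoor. The sketch below uses tiny, insecure toy parameters purely for illustration.)

```python
# Toy chameleon hash: H(m, r) = g^m * h^r mod p, with h = g^x mod p.
# g generates a subgroup of prime order q, so exponents live mod q.
# Knowing the trapdoor x lets you open any hash to a second (m2, r2);
# without x, finding a collision is believed as hard as discrete log.

p, q, g = 23, 11, 4        # toy parameters: p = 2q + 1, g has order q mod p
x = 5                      # the trapdoor
h = pow(g, x, p)           # public value

def H(m: int, r: int) -> int:
    return (pow(g, m, p) * pow(h, r, p)) % p

def collide(m1: int, r1: int, m2: int) -> int:
    """Use the trapdoor x to find r2 with H(m2, r2) == H(m1, r1)."""
    # H(m, r) = g^(m + x*r), so we need m2 + x*r2 = m1 + x*r1 (mod q),
    # i.e. r2 = r1 + (m1 - m2) / x (mod q).
    return (r1 + (m1 - m2) * pow(x, -1, q)) % q

m1, r1 = 3, 7
m2 = 9
r2 = collide(m1, r1, m2)
assert H(m1, r1) == H(m2, r2) and (m1, r1) != (m2, r2)
```

With only $g$, $h$, $p$ public, exhibiting such a pair is infeasible; with the trapdoor it is a one-line computation.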
protocol-design
asked Nov 30 at 12:13 – user1936752
5 Answers
In general, you cannot prove lack of knowledge, because even if you did know something you shouldn't, you could always pretend that you don't know it and carry out the proof as if you didn't.
For your specific example, consider how the prover came to know $x_0$. Did you tell them what it is? If so, the proof shows nothing, since they would then know $x_0$ even if they had also learned $x_1$ from somewhere else.
Conversely, if your function $f$ is collision resistant without the trapdoor, but is not injective (i.e. collisions do exist), then it must also be preimage resistant: given a preimage finder, you could pick a random $x$, run it on $y = f(x)$, and with noticeable probability get back some $x' \neq x$, which is a collision. Thus, finding $x_0$ from $y = f(x_0)$ is (at least) about as hard as finding $x_1$. Paradoxically, the prover exhibiting $x_0$ would in fact be evidence that they can find preimages for $f$, and thus can probably find $x_1$ as well.
answered Nov 30 at 13:06 – Ilmari Karonen (accepted)
Thank you, that's a very interesting answer. Perhaps my toy example is too trivial; upon reflection, a better restatement of my question would be: is there the idea of mutually exclusive sets of knowledge in cryptography? I know something about A at the expense of losing some knowledge about B. Perhaps "knowledge" is also the wrong word for what I mean (since you can't "lose" knowledge in most cases), but the idea is hopefully clearer. – user1936752, Nov 30 at 13:17
I think you can't prove ignorance, but you might fail to prove knowledge if the verifier refuses your proofs. It is also possible to refute facts if you bring the right proofs. Taking your example of mutually exclusive sets of knowledge, it seems more like an issue with the environment setup (i.e., the generator/circuit used to create and verify proofs) than with the algorithm itself. By the way, I'm no expert in zero-knowledge proofs, so I can't discuss this further; these are just thoughts. – Marco Aurélio da Silva, Nov 30 at 13:46
Possibly relevant/interesting: there was a paper about proofs of ignorance a little while ago. – Ella Rose, Nov 30 at 16:13
Thank you, that paper seems to be asking a similar question. I'm surprised this isn't an established concept yet! – user1936752, Nov 30 at 18:58
You can't prove an unconditional lack of knowledge, but you can create proven, shared lack of knowledge of a number by all contributing parties.
Suppose there are two parties. Each generates a 256-bit random number (call them $r_1$ and $r_2$), and publishes and signs $H(r_1)$ or $H(r_2)$ respectively as its commitment. Both parties then sign $H(r_1)||H(r_2)$ as an agreement that $H(r_1||r_2)$ is "the number" that neither of them knows.
Now a number exists that the two parties have agreed on, and that neither has any knowledge of (to a 256-bit security level); yet if they later decide to reveal the number (by revealing $r_1, r_2$), anyone can confirm that this was indeed the number they agreed upon.
This scheme can be used, for example, for trustless gambling (trustless in the sense that the random number truly was random).
edited Nov 30 at 20:31, answered Nov 30 at 19:45 – orlp
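A minimal sketch of the commit-then-agree flow described above (signature steps omitted; SHA-256 stands in for $H$):

```python
import hashlib
import secrets

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Commit phase: each party generates its secret independently and
# publishes (and would sign) only a hash of it.
r1 = secrets.token_bytes(32)        # party 1's 256-bit random number
r2 = secrets.token_bytes(32)        # party 2's 256-bit random number
c1, c2 = sha256(r1), sha256(r2)     # the published commitments

# Both parties then sign c1 || c2, agreeing that sha256(r1 || r2)
# is "the number" that neither of them currently knows.
agreement = c1 + c2

# Reveal phase: the parties disclose r1 and r2; anyone can check the
# commitments and recompute the agreed-upon number.
assert sha256(r1) == c1 and sha256(r2) == c2
the_number = sha256(r1 + r2)
```

Because each commitment is published before the other secret is revealed, neither party can steer the final value, which is what makes the scheme usable for trustless gambling.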
Do the two parties each generate two random numbers? Or are the numbers generated "together" by both parties in collaboration somehow? If the latter, how is this done? – Wildcard, Nov 30 at 20:29
@Wildcard Sorry, I worded that a bit poorly. Each party generates only a single random number, entirely without collaboration: $r_1$ is the random number of party 1, $r_2$ that of party 2. – orlp, Nov 30 at 20:32
Ah, I think I get it now. And you are using || to mean...? I would think XOR would do it, but that's not the symbol for it. Also, it's notable that neither of the inputs has to be random; or rather, neither party can prevent the other from ensuring the outcome is truly random. – Wildcard, Nov 30 at 21:00
@Wildcard || is concatenation. And yes, I say "random", but that's for each party's own good: as long as one party uses random data, the result is random. XOR would do as well. – orlp, Nov 30 at 21:22
@Wildcard The result is hashed, so I don't think "half random" is really a thing. But XOR without hashing is probably simpler and also works. – orlp, Nov 30 at 22:21
If you computed $x_0$ and $x_1$ yourself from scratch, and never leaked any information anyone could use to derive $x_1$, then it would be true that nobody but you who knows $x_0$ also knows $x_1$; but it would be equally true that nobody who does not know $x_0$ knows $x_1$. Both statements hold simply because you are the only person in the universe who knows $x_1$. Further, if information about $x_1$ did leak out in ways you don't know about, the fact that somebody acquired $x_0$ would almost never imply anything about their knowledge of $x_1$.
There is one situation where proof of knowledge might prove ignorance of something else: if it is externally possible to prove that someone has only had the opportunity to receive a limited amount of information since a piece of information was created (say, significantly less than the combined size of $x_0$ and $x_1$), then demonstrated possession of a sufficiently large piece of information that didn't exist prior to that time window (e.g. $x_0$) could prove that the communication channel wasn't also used to convey certain other information (e.g. $x_1$). Such a proof would not require any relationship between $x_0$ and $x_1$, however; the same argument would apply if they were entirely separate, unrelated random bit strings.
If you want to be sure that no one knows X, one thing you can do is incentivize people to reveal the fact that they know X: offer a monetary bounty to anyone who can demonstrate knowledge of X. If no one takes the bounty, then either no one knows X, or they value the secrecy of that knowledge above your bounty, or they don't yet know about your bounty, or they don't trust you to pay up, or they value their privacy more than the bounty and don't believe they can claim it anonymously.
Cryptocurrency systems provide ways to offer such bounties that solve the trust and anonymity issues, and can even let third parties contribute directly to the bounty without needing to trust the bounty offerer.
In 2013, Peter Todd created several bitcoin addresses configured to automatically let anyone withdraw the stored bitcoin if they publicly demonstrated a hash collision in one of several algorithms. Many people pitched in and contributed bitcoin to the bounty wallets; no one, including Peter Todd, could withdraw the money unless they demonstrated a collision. The bounty for a SHA-1 collision (worth about $3000 at the time) was claimed in 2017 using the SHA-1 collision published by Google. The other bounties (for SHA-256 and RIPEMD-160) remain unclaimed, which suggests that no one yet knows of a collision for them.
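The core of such a bounty is just a predicate anyone can evaluate. Here is a toy sketch, truncating SHA-256 to 16 bits so that a collision is actually findable by brute force; the real bounties use full-strength hashes, which is exactly why they go unclaimed:

```python
import hashlib

def toy_hash(data: bytes) -> bytes:
    """Deliberately weak hash: first 2 bytes of SHA-256."""
    return hashlib.sha256(data).digest()[:2]

def bounty_claim_valid(x0: bytes, x1: bytes) -> bool:
    """The payout condition: two distinct inputs with equal hashes."""
    return x0 != x1 and toy_hash(x0) == toy_hash(x1)

# Brute-force a collision for the weak hash (birthday search over a
# 16-bit output space, so only a few hundred attempts are expected).
seen = {}
i = 0
while True:
    msg = i.to_bytes(8, "big")
    digest = toy_hash(msg)
    if digest in seen:
        x0, x1 = seen[digest], msg
        break
    seen[digest] = msg
    i += 1

assert bounty_claim_valid(x0, x1)
```

A script-enforced bitcoin bounty works the same way, except the predicate is checked by the network's script interpreter rather than by a trusted party.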
One strategy to detect intrusion into a computer system is to place an unencrypted cryptocurrency wallet file containing a valuable amount of cryptocurrency on the computer, and then watch the addresses controlled by the wallet to see if the cryptocurrency is ever taken. If a virus or attacker ever hacks into the computer, then they may find the wallet file and steal the cryptocurrency from it. An attacker may realize that taking the cryptocurrency will expose the fact that the system has been hacked, but if the cryptocurrency is worth more than the system is worth to the attacker, then they have little reason to not take it. (This article about the now-defunct service Bitcoin Vigil explains the strategy a bit more.)
Consider non-hearsay, eyewitness testimony about the side of a coin after a coin flip. In the conversation below between interviewer I and witness W, the only "knowledge" that can permissibly enter the court record is direct eyewitness testimony:
I: Did you witness the coin toss?
W: Yes.
I: Did you view the coin once it had landed?
W: Yes.
I: Can you describe the picture on the coin?
W: It was a picture of an eagle.
I: What was depicted on the obverse side of the coin?
OBJECTION: the answer would be speculation. The witness has already disqualified his ability to answer.
Right now your answer is very confusing (which is why it's attracting downvotes). Please edit it to explain more directly how this hypothetical tale relates to the question, and note that analogies are mostly useful as supplements, not as primary explanations. – SEJPM♦, 2 days ago
add a comment |
5 Answers
5
active
oldest
votes
5 Answers
5
active
oldest
votes
active
oldest
votes
active
oldest
votes
up vote
12
down vote
accepted
In general, you cannot prove lack of knowledge, because even if you did know something you shouldn't, you can always pretend that you don't know it and carry out the proof as if you didn't know it.
For your specific example, consider how the prover would know $x_0$. Did you tell them what it is? If so, that proves nothing, since they would then know $x_0$ even if they had also learned $x_1$ from somewhere else.
Conversely, if your function $f$ is collision resistant without the trapdoor, but is not injective (i.e. potential collisions do exist), then it must also be preimage resistant. Thus, finding $x_0$ from $y = f(x_0)$ is (at least) as hard as finding $x_1$. Thus, paradoxically, the prover exhibiting $x_0$ would in fact be evidence that they can find preimages for $f$, and thus can probably find $x_1$ as well.
Thank you, that's a very interesting answer. Perhaps my toy example is too trivial and upon reflection, a better restatement of my question would be: Is there the idea of mutually exclusive sets of knowledge in cryptography? I know something about A at the expense of losing some knowledge about B. Perhaps knowledge is also the wrong word to describe what I mean (since you can't "lose" knowledge in most cases) but the idea is hopefully clearer.
– user1936752
Nov 30 at 13:17
1
I think that you can't prove ignorance, but you might not prove knowledge if the verifier refuses your proofs. Besides, it is also possible to refute facts if you bring the right proofs. It seems more like an issue related to an environment setup (a.k.a, generator / circuit to generate and verify proofs) than an algorithm issue itself, taking your example of mutually exclusive sets of knowledge. BTW, I'm not expert at proofs of zero-knowledge, so I can't discuss that further - it's just thoughts.
– Marco Aurélio da Silva
Nov 30 at 13:46
8
Possibly relevant/interesting: There was a paper about proofs of ignorance a little while ago.
– Ella Rose
Nov 30 at 16:13
Thank you - that paper seems to be asking a similar question. I'm surprised that this isn't an established concept yet!!
– user1936752
Nov 30 at 18:58
add a comment |
up vote
12
down vote
accepted
In general, you cannot prove lack of knowledge, because even if you did know something you shouldn't, you can always pretend that you don't know it and carry out the proof as if you didn't know it.
For your specific example, consider how the prover would know $x_0$. Did you tell them what it is? If so, that proves nothing, since they would then know $x_0$ even if they had also learned $x_1$ from somewhere else.
Conversely, if your function $f$ is collision resistant without the trapdoor, but is not injective (i.e. potential collisions do exist), then it must also be preimage resistant. Thus, finding $x_0$ from $y = f(x_0)$ is (at least) as hard as finding $x_1$. Thus, paradoxically, the prover exhibiting $x_0$ would in fact be evidence that they can find preimages for $f$, and thus can probably find $x_1$ as well.
Thank you, that's a very interesting answer. Perhaps my toy example is too trivial and upon reflection, a better restatement of my question would be: Is there the idea of mutually exclusive sets of knowledge in cryptography? I know something about A at the expense of losing some knowledge about B. Perhaps knowledge is also the wrong word to describe what I mean (since you can't "lose" knowledge in most cases) but the idea is hopefully clearer.
– user1936752
Nov 30 at 13:17
1
I think that you can't prove ignorance, but you might not prove knowledge if the verifier refuses your proofs. Besides, it is also possible to refute facts if you bring the right proofs. It seems more like an issue related to an environment setup (a.k.a, generator / circuit to generate and verify proofs) than an algorithm issue itself, taking your example of mutually exclusive sets of knowledge. BTW, I'm not expert at proofs of zero-knowledge, so I can't discuss that further - it's just thoughts.
– Marco Aurélio da Silva
Nov 30 at 13:46
8
Possibly relevant/interesting: There was a paper about proofs of ignorance a little while ago.
– Ella Rose
Nov 30 at 16:13
Thank you - that paper seems to be asking a similar question. I'm surprised that this isn't an established concept yet!!
– user1936752
Nov 30 at 18:58
add a comment |
up vote
12
down vote
accepted
up vote
12
down vote
accepted
In general, you cannot prove lack of knowledge, because even if you did know something you shouldn't, you can always pretend that you don't know it and carry out the proof as if you didn't know it.
For your specific example, consider how the prover would know $x_0$. Did you tell them what it is? If so, that proves nothing, since they would then know $x_0$ even if they had also learned $x_1$ from somewhere else.
Conversely, if your function $f$ is collision resistant without the trapdoor, but is not injective (i.e. potential collisions do exist), then it must also be preimage resistant. Thus, finding $x_0$ from $y = f(x_0)$ is (at least) as hard as finding $x_1$. Thus, paradoxically, the prover exhibiting $x_0$ would in fact be evidence that they can find preimages for $f$, and thus can probably find $x_1$ as well.
In general, you cannot prove lack of knowledge, because even if you did know something you shouldn't, you can always pretend that you don't know it and carry out the proof as if you didn't know it.
For your specific example, consider how the prover would know $x_0$. Did you tell them what it is? If so, that proves nothing, since they would then know $x_0$ even if they had also learned $x_1$ from somewhere else.
Conversely, if your function $f$ is collision resistant without the trapdoor, but is not injective (i.e. potential collisions do exist), then it must also be preimage resistant. Thus, finding $x_0$ from $y = f(x_0)$ is (at least) as hard as finding $x_1$. Thus, paradoxically, the prover exhibiting $x_0$ would in fact be evidence that they can find preimages for $f$, and thus can probably find $x_1$ as well.
answered Nov 30 at 13:06
Ilmari Karonen
34k269136
34k269136
Thank you, that's a very interesting answer. Perhaps my toy example is too trivial and upon reflection, a better restatement of my question would be: Is there the idea of mutually exclusive sets of knowledge in cryptography? I know something about A at the expense of losing some knowledge about B. Perhaps knowledge is also the wrong word to describe what I mean (since you can't "lose" knowledge in most cases) but the idea is hopefully clearer.
– user1936752
Nov 30 at 13:17
1
I think that you can't prove ignorance, but you might not prove knowledge if the verifier refuses your proofs. Besides, it is also possible to refute facts if you bring the right proofs. It seems more like an issue related to an environment setup (a.k.a, generator / circuit to generate and verify proofs) than an algorithm issue itself, taking your example of mutually exclusive sets of knowledge. BTW, I'm not expert at proofs of zero-knowledge, so I can't discuss that further - it's just thoughts.
– Marco Aurélio da Silva
Nov 30 at 13:46
8
Possibly relevant/interesting: There was a paper about proofs of ignorance a little while ago.
– Ella Rose
Nov 30 at 16:13
Thank you - that paper seems to be asking a similar question. I'm surprised that this isn't an established concept yet!!
– user1936752
Nov 30 at 18:58
add a comment |
Thank you, that's a very interesting answer. Perhaps my toy example is too trivial and upon reflection, a better restatement of my question would be: Is there the idea of mutually exclusive sets of knowledge in cryptography? I know something about A at the expense of losing some knowledge about B. Perhaps knowledge is also the wrong word to describe what I mean (since you can't "lose" knowledge in most cases) but the idea is hopefully clearer.
– user1936752
Nov 30 at 13:17
1
I think that you can't prove ignorance, but you might not prove knowledge if the verifier refuses your proofs. Besides, it is also possible to refute facts if you bring the right proofs. It seems more like an issue related to an environment setup (a.k.a, generator / circuit to generate and verify proofs) than an algorithm issue itself, taking your example of mutually exclusive sets of knowledge. BTW, I'm not expert at proofs of zero-knowledge, so I can't discuss that further - it's just thoughts.
– Marco Aurélio da Silva
Nov 30 at 13:46
8
Possibly relevant/interesting: There was a paper about proofs of ignorance a little while ago.
– Ella Rose
Nov 30 at 16:13
Thank you - that paper seems to be asking a similar question. I'm surprised that this isn't an established concept yet!!
– user1936752
Nov 30 at 18:58
Thank you, that's a very interesting answer. Perhaps my toy example is too trivial and upon reflection, a better restatement of my question would be: Is there the idea of mutually exclusive sets of knowledge in cryptography? I know something about A at the expense of losing some knowledge about B. Perhaps knowledge is also the wrong word to describe what I mean (since you can't "lose" knowledge in most cases) but the idea is hopefully clearer.
– user1936752
Nov 30 at 13:17
Thank you, that's a very interesting answer. Perhaps my toy example is too trivial and upon reflection, a better restatement of my question would be: Is there the idea of mutually exclusive sets of knowledge in cryptography? I know something about A at the expense of losing some knowledge about B. Perhaps knowledge is also the wrong word to describe what I mean (since you can't "lose" knowledge in most cases) but the idea is hopefully clearer.
– user1936752
Nov 30 at 13:17
1
1
I think that you can't prove ignorance, but you might not prove knowledge if the verifier refuses your proofs. Besides, it is also possible to refute facts if you bring the right proofs. It seems more like an issue related to an environment setup (a.k.a, generator / circuit to generate and verify proofs) than an algorithm issue itself, taking your example of mutually exclusive sets of knowledge. BTW, I'm not expert at proofs of zero-knowledge, so I can't discuss that further - it's just thoughts.
– Marco Aurélio da Silva
Nov 30 at 13:46
I think that you can't prove ignorance, but you might not prove knowledge if the verifier refuses your proofs. Besides, it is also possible to refute facts if you bring the right proofs. It seems more like an issue related to an environment setup (a.k.a, generator / circuit to generate and verify proofs) than an algorithm issue itself, taking your example of mutually exclusive sets of knowledge. BTW, I'm not expert at proofs of zero-knowledge, so I can't discuss that further - it's just thoughts.
– Marco Aurélio da Silva
Nov 30 at 13:46
8
8
Possibly relevant/interesting: There was a paper about proofs of ignorance a little while ago.
– Ella Rose
Nov 30 at 16:13
Possibly relevant/interesting: There was a paper about proofs of ignorance a little while ago.
– Ella Rose
Nov 30 at 16:13
Thank you - that paper seems to be asking a similar question. I'm surprised that this isn't an established concept yet!!
– user1936752
Nov 30 at 18:58
Thank you - that paper seems to be asking a similar question. I'm surprised that this isn't an established concept yet!!
– user1936752
Nov 30 at 18:58
add a comment |
up vote
4
down vote
You can't prove unconditional lack of knowledge, but you can create proven, shared lack of knowledge of a number by all parties that contribute.
Suppose there are two parties. They each generate a 256-bit random number (call them $r_1$ and $r_2$), and publish and sign $H(r_1)$ and $H(r_2)$ as their choices. Both parties then sign $H(r_1)||H(r_2)$ as an agreement that they both agree to $H(r_1||r_2)$ as "the number" that they both do not know.
Now a number exists that two parties have agreed on that they both have no knowledge on (to a 256-bit security level), yet if the parties decide to reveal the number (by revealing $r_1, r_2$) it can be confirmed that this was indeed the number they agreed upon.
The above scheme can be used for example for trustless (in the sense that the random number truly was random) gambling.
Do the two parties each generate two random numbers? Or are the numbers generated "together" by both of the parties in collaboration somehow? If the latter, how is this done?
– Wildcard
Nov 30 at 20:29
@Wildcard Sorry I worded that a bit poorly. Each party only generates a single random number entirely without collaboration. $r_1$ is the random number of party 1, $r_2$ of party 2.
– orlp
Nov 30 at 20:32
Ah, I think I get it now. And you are using||
to mean...? I would think XOR would do it, but that's not the symbol for it. Also it's notable that neither of the inputs has to be random, or rather, neither party can prevent the other from ensuring the outcome is truly random.
– Wildcard
Nov 30 at 21:00
@Wildcard||
is concatenation. And yes, I'm stating random but that's for that party's own good, as long as one party uses random data the result is random. XOR would do as well.
– orlp
Nov 30 at 21:22
1
@Wildcard The result is hashed, so I don't think "half random" is really a thing. But XOR without hashing is probably simpler and also works.
– orlp
Nov 30 at 22:21
|
show 1 more comment
up vote
4
down vote
You can't prove unconditional lack of knowledge, but you can create proven, shared lack of knowledge of a number by all parties that contribute.
Suppose there are two parties. They each generate a 256-bit random number (call them $r_1$ and $r_2$), and publish and sign $H(r_1)$ and $H(r_2)$ as their choices. Both parties then sign $H(r_1)||H(r_2)$ as an agreement that they both agree to $H(r_1||r_2)$ as "the number" that they both do not know.
Now a number exists that two parties have agreed on that they both have no knowledge on (to a 256-bit security level), yet if the parties decide to reveal the number (by revealing $r_1, r_2$) it can be confirmed that this was indeed the number they agreed upon.
The above scheme can be used for example for trustless (in the sense that the random number truly was random) gambling.
Do the two parties each generate two random numbers? Or are the numbers generated "together" by both of the parties in collaboration somehow? If the latter, how is this done?
– Wildcard
Nov 30 at 20:29
@Wildcard Sorry I worded that a bit poorly. Each party only generates a single random number entirely without collaboration. $r_1$ is the random number of party 1, $r_2$ of party 2.
– orlp
Nov 30 at 20:32
Ah, I think I get it now. And you are using||
to mean...? I would think XOR would do it, but that's not the symbol for it. Also it's notable that neither of the inputs has to be random, or rather, neither party can prevent the other from ensuring the outcome is truly random.
– Wildcard
Nov 30 at 21:00
@Wildcard||
is concatenation. And yes, I'm stating random but that's for that party's own good, as long as one party uses random data the result is random. XOR would do as well.
– orlp
Nov 30 at 21:22
1
@Wildcard The result is hashed, so I don't think "half random" is really a thing. But XOR without hashing is probably simpler and also works.
– orlp
Nov 30 at 22:21
|
show 1 more comment
up vote
4
down vote
up vote
4
down vote
You can't prove unconditional lack of knowledge, but you can create proven, shared lack of knowledge of a number by all parties that contribute.
Suppose there are two parties. They each generate a 256-bit random number (call them $r_1$ and $r_2$), and publish and sign $H(r_1)$ and $H(r_2)$ as their choices. Both parties then sign $H(r_1)||H(r_2)$ as an agreement that they both agree to $H(r_1||r_2)$ as "the number" that they both do not know.
Now a number exists that two parties have agreed on that they both have no knowledge on (to a 256-bit security level), yet if the parties decide to reveal the number (by revealing $r_1, r_2$) it can be confirmed that this was indeed the number they agreed upon.
The above scheme can be used for example for trustless (in the sense that the random number truly was random) gambling.
You can't prove unconditional lack of knowledge, but you can create proven, shared lack of knowledge of a number by all parties that contribute.
Suppose there are two parties. They each generate a 256-bit random number (call them $r_1$ and $r_2$), and publish and sign $H(r_1)$ and $H(r_2)$ as their choices. Both parties then sign $H(r_1)||H(r_2)$ as an agreement that they both agree to $H(r_1||r_2)$ as "the number" that they both do not know.
Now a number exists that two parties have agreed on that they both have no knowledge on (to a 256-bit security level), yet if the parties decide to reveal the number (by revealing $r_1, r_2$) it can be confirmed that this was indeed the number they agreed upon.
The above scheme can be used for example for trustless (in the sense that the random number truly was random) gambling.
edited Nov 30 at 20:31
answered Nov 30 at 19:45
orlp
3,3701124
3,3701124
Do the two parties each generate two random numbers? Or are the numbers generated "together" by both of the parties in collaboration somehow? If the latter, how is this done?
– Wildcard
Nov 30 at 20:29
@Wildcard Sorry I worded that a bit poorly. Each party only generates a single random number entirely without collaboration. $r_1$ is the random number of party 1, $r_2$ of party 2.
– orlp
Nov 30 at 20:32
Ah, I think I get it now. And you are using||
to mean...? I would think XOR would do it, but that's not the symbol for it. Also it's notable that neither of the inputs has to be random, or rather, neither party can prevent the other from ensuring the outcome is truly random.
– Wildcard
Nov 30 at 21:00
@Wildcard||
is concatenation. And yes, I'm stating random but that's for that party's own good, as long as one party uses random data the result is random. XOR would do as well.
– orlp
Nov 30 at 21:22
1
@Wildcard The result is hashed, so I don't think "half random" is really a thing. But XOR without hashing is probably simpler and also works.
– orlp
Nov 30 at 22:21
|
show 1 more comment
Do the two parties each generate two random numbers? Or are the numbers generated "together" by both of the parties in collaboration somehow? If the latter, how is this done?
– Wildcard
Nov 30 at 20:29
@Wildcard Sorry I worded that a bit poorly. Each party only generates a single random number entirely without collaboration. $r_1$ is the random number of party 1, $r_2$ of party 2.
– orlp
Nov 30 at 20:32
Ah, I think I get it now. And you are using||
to mean...? I would think XOR would do it, but that's not the symbol for it. Also it's notable that neither of the inputs has to be random, or rather, neither party can prevent the other from ensuring the outcome is truly random.
– Wildcard
Nov 30 at 21:00
@Wildcard||
is concatenation. And yes, I'm stating random but that's for that party's own good, as long as one party uses random data the result is random. XOR would do as well.
– orlp
Nov 30 at 21:22
1
@Wildcard The result is hashed, so I don't think "half random" is really a thing. But XOR without hashing is probably simpler and also works.
– orlp
Nov 30 at 22:21
Do the two parties each generate two random numbers? Or are the numbers generated "together" by both of the parties in collaboration somehow? If the latter, how is this done?
– Wildcard
Nov 30 at 20:29
Do the two parties each generate two random numbers? Or are the numbers generated "together" by both of the parties in collaboration somehow? If the latter, how is this done?
– Wildcard
Nov 30 at 20:29
@Wildcard Sorry I worded that a bit poorly. Each party only generates a single random number entirely without collaboration. $r_1$ is the random number of party 1, $r_2$ of party 2.
– orlp
Nov 30 at 20:32
@Wildcard Sorry I worded that a bit poorly. Each party only generates a single random number entirely without collaboration. $r_1$ is the random number of party 1, $r_2$ of party 2.
– orlp
Nov 30 at 20:32
Ah, I think I get it now. And you are using
||
to mean...? I would think XOR would do it, but that's not the symbol for it. Also it's notable that neither of the inputs has to be random, or rather, neither party can prevent the other from ensuring the outcome is truly random.– Wildcard
Nov 30 at 21:00
Ah, I think I get it now. And you are using
||
to mean...? I would think XOR would do it, but that's not the symbol for it. Also it's notable that neither of the inputs has to be random, or rather, neither party can prevent the other from ensuring the outcome is truly random.– Wildcard
Nov 30 at 21:00
@Wildcard
||
is concatenation. And yes, I'm stating random but that's for that party's own good, as long as one party uses random data the result is random. XOR would do as well.– orlp
Nov 30 at 21:22
@Wildcard
||
is concatenation. And yes, I'm stating random but that's for that party's own good, as long as one party uses random data the result is random. XOR would do as well.– orlp
Nov 30 at 21:22
1
1
@Wildcard The result is hashed, so I don't think "half random" is really a thing. But XOR without hashing is probably simpler and also works.
– orlp
Nov 30 at 22:21
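The commit-then-reveal coin flip discussed in these comments can be sketched in a few lines. This is an illustrative outline only; the function name `commit`, the 32-byte value sizes, and the parity-of-first-byte coin derivation are my assumptions, not part of the thread:

```python
import hashlib
import secrets

def commit(value: bytes) -> bytes:
    """Publish a SHA-256 hash of the value as a binding commitment."""
    return hashlib.sha256(value).digest()

# Each party independently generates one random value, with no collaboration.
r1 = secrets.token_bytes(32)
r2 = secrets.token_bytes(32)

# Phase 1: the parties exchange commitments c1, c2 before revealing anything.
c1, c2 = commit(r1), commit(r2)

# Phase 2: the parties reveal r1 and r2; each re-hashes and checks the
# other's earlier commitment, so nobody can change their value after the fact.
assert commit(r1) == c1 and commit(r2) == c2

# The shared coin is derived from the hash of the concatenation r1 || r2.
# As long as at least one party chose randomly, the outcome is unpredictable.
coin = hashlib.sha256(r1 + r2).digest()[0] & 1
print(coin)  # 0 or 1
```

As the comments note, XOR-ing the two revealed values would work in place of hashing the concatenation; hashing is just one convenient way to mix the contributions.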
up vote
0
down vote
If you computed x0 and x1 yourself out of nothing, and never leaked any information anyone could use to derive x1, then it would be true that nobody but you who knows x0 would also know x1; but it would be equally true that nobody who doesn't know x0 would know x1 either. Both statements hold because you are the only person in the universe who knows x1. Further, if information about x1 did leak out in ways you don't know about, the fact that somebody acquires x0 would almost never imply anything about their knowledge of x1.
There is one possible situation where proof of knowledge might prove ignorance of something else: in cases where it is externally possible to prove that someone has only had the opportunity to receive a certain amount of information since a piece of information was created (e.g. an amount significantly less than the combined size of x0 and x1), demonstrated possession of a sufficient quantity of information that didn't exist prior to that time window (e.g. x0) could prove that the communications channel wasn't used to convey certain other information (e.g. x1). Such a proof would not require any relationship between x0 and x1, however; if x0 and x1 were entirely separate and unrelated random bit strings, the same argument would apply.
add a comment |
answered Nov 30 at 17:36
supercat
1934
add a comment |
up vote
0
down vote
If you want to be sure that no one knows X, then one thing you can do is incentivize people to reveal the fact that they know X. You could offer a monetary bounty to anyone who can demonstrate that they know X. If no one takes the bounty, then you know that either no one knows X, they value the secrecy of that knowledge at higher than your bounty, they just don't yet know about your bounty, they don't trust you to pay up, or they value their privacy more than the bounty and don't believe they can claim the bounty anonymously.
Cryptocurrency systems provide a few ways for people to offer bounties in a way that solves the trust and anonymity issues, and can even allow third parties to contribute directly to the bounty too without them needing to trust the bounty-offerer.
In 2013, Peter Todd created several bitcoin addresses that were configured to automatically allow anyone to withdraw the stored bitcoin if they publicly demonstrated a hash collision in one of several algorithms. Many people pitched in and contributed bitcoin into the bounty wallets. No one, including Peter Todd, could withdraw the money back from the wallet unless they demonstrated a hash collision. The bounty for a SHA-1 collision (worth about $3000 at the time) was claimed in 2017 using the SHA-1 collision published by Google. The other bounties (for SHA-256 and RIPEMD160) are still unclaimed, implying that no one yet knows of any collisions for them.
One strategy to detect intrusion into a computer system is to place an unencrypted cryptocurrency wallet file containing a valuable amount of cryptocurrency on the computer, and then watch the addresses controlled by the wallet to see if the cryptocurrency is ever taken. If a virus or attacker ever hacks into the computer, then they may find the wallet file and steal the cryptocurrency from it. An attacker may realize that taking the cryptocurrency will expose the fact that the system has been hacked, but if the cryptocurrency is worth more than the system is worth to the attacker, then they have little reason to not take it. (This article about the now-defunct service Bitcoin Vigil explains the strategy a bit more.)
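The bounty scripts above enforce the collision check on-chain in Bitcoin Script; as a hedged, off-chain sketch of what a valid claim must demonstrate (the function name and the sample byte strings are illustrative, not from the actual bounty scripts):

```python
import hashlib

def valid_collision_claim(x0: bytes, x1: bytes, algo: str = "sha1") -> bool:
    """A claim is valid iff the two inputs differ but hash to the same digest."""
    if x0 == x1:
        return False  # identical inputs are never a collision
    return hashlib.new(algo, x0).digest() == hashlib.new(algo, x1).digest()

# Distinct inputs with (almost certainly) distinct digests: claim rejected.
print(valid_collision_claim(b"foo", b"bar"))  # False
# Identical inputs: claim rejected regardless of the digest.
print(valid_collision_claim(b"foo", b"foo"))  # False
```

The published 2017 SHA-1 collision (two distinct PDFs with the same SHA-1 digest) is exactly the kind of input pair that would make this check return True.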
add a comment |
answered Dec 1 at 1:07
Macil
40627
add a comment |
up vote
-3
down vote
Non-hearsay, eye-witness testimony of the side of a coin after a coin flip. Conversation between interviewer I and witness W, where the only "knowledge" that can permissibly enter the court record is direct eyewitness testimony:
I: Did you witness the coin toss?
W: Yes.
I: Did you view the coin once it had landed?
W: Yes.
I: Can you describe the picture on the coin?
W: It was a picture of an eagle.
I: What was depicted on the obverse side of the coin?
OBJECTION, the answer would be speculation. The witness has already disqualified his ability to answer.
New contributor
1
Right now your answer is very confusing (which is why it's attracting downvotes), please edit it to more directly explain how this "hypothetical tale" relates to the question and note that analogies are mostly useful for supplementary but not for immediate explanations.
– SEJPM♦
2 days ago
add a comment |
edited 2 days ago
SEJPM♦
27.8k452131
answered Nov 30 at 19:44
zdh
1
New contributor