Skilled player (1828)
Joined: 4/20/2005
Posts: 2161
Location: Norrköping, Sweden
I have another math problem I thought up today. I don't know how complicated it is, maybe you can shed some light on it:

Imagine that there is a 4 digit code that you have to break. Once you hit the correct code, the lock opens up, so there is no doubt when you found the correct code. You have an unlimited number of tries, but you still want to minimize the average number of tries you have to make. Of course one possible brute force method is to start on 0000 and increase by 1 each time, giving an average number of tries of 5000.

However, this setup has an additional feature: below each digit is a light that can be either green or red. If you have entered digit i (i=1,2,3,4) correctly, light i turns green with probability p_right (and red with probability 1-p_right). If the digit is incorrect, the light can still turn green, but does so with probability p_wrong (and red with probability 1-p_wrong). One situation could be where p_right = 0.95 and p_wrong = 0.05, i.e. each light is "probably" green if you entered the digit correctly, and "probably" red if it is not correct. In this case you can most likely get away with less than 5000 guesses.

Your task is to construct an algorithm that uses this light information in order to minimize the average number of tries you have to make. For some reason you know the values of p_right and p_wrong (but of course you don't know the correct code). I'm interested in what you can come up with, and how the average number of attempts your algorithm has to make depends on the values of p_right and p_wrong. (I don't know any smart solution to this myself, I just propose the problem to you in the hopes that you find it interesting!)
Player (80)
Joined: 8/5/2007
Posts: 865
Randil wrote:
I have another math problem I thought up today. I don't know how complicated it is, maybe you can shed some light on it: Imagine that there is a 4 digit code that you have to break. Once you hit the correct code, the lock opens up, so there is no doubt when you found the correct code. You have an unlimited number of tries, but you still want to minimize the average number of tries you have to make. Of course one possible brute force method is to start on 0000 and increase by 1 each time, giving an average number of tries of 5000. However, this setup has an additional feature: below each digit is a light that can be either green or red. If you have entered digit i (i=1,2,3,4) correctly, light i turns green with probability p_right (and red with probability 1-p_right). If the digit is incorrect, the light can still turn green, but does so with probability p_wrong (and red with probability 1-p_wrong). One situation could be where p_right =0.95 and p_wrong=0.05, i.e. each light is "probably" green if you entered the digit correctly, and "probably" red if it is not correct. In this case you can most likely get away with less than 5000 guesses. Your task is to construct an algorithm that uses this light information in order to minimize the average number of tries you have to make. For some reason you know the values of p_right and p_wrong (but of course you don't know the correct code). I'm interested in what you can come up with, and how the average number of attempts your algorithm has to make depends on the values of p_right and p_wrong. (I don't know any smart solution to this myself, I just propose the problem to you in the hopes that you find it interesting!)
That's a nice puzzle and I'm looking into it with simple binary cases and the minimax rule. It's a cute exercise in type I and type II errors. However, I should point out that the code cannot be "cracked" in the conventional sense because you could enter the correct code and the lights might all be red, especially if p_right is very small. I think the best you could do is repeatedly guess codes until a preponderance of evidence points to a particular code (perhaps to the point that the probability that a code is incorrect is less than one in a million). Also, I'll quickly note that there is necessarily a degeneracy when p_right = p_wrong = 0.5, since that corresponds to the lights being red or green completely at random.
Skilled player (1828)
Joined: 4/20/2005
Posts: 2161
Location: Norrköping, Sweden
Yes, I thought about the case p_right = p_wrong myself. If I'm thinking correctly, for all cases where p_right = p_wrong the lights will give no additional information. If you were to construct an algorithm and plot the average number of attempts on a 2D-plane with p_right on the x-axis and p_wrong on the y-axis, I think it will have the value 5000 along the line y=x (since here the lights don't provide any information). Right? Oh, and as for cracking the code, as soon as you type in the correct code you will be notified (in some way). So when writing your code you should have a variable containing the correct 4-digit combination (that the algorithm of course doesn't have access to), but you use it to check if your guess is correct. Once you guess correctly your algorithm will stop. Maybe that wasn't clear. Or maybe I'm misunderstanding what you're saying. :)
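To make it concrete, the test harness could look something like this minimal sketch (the function name tryCode, the RNG seed and the secret code 4719 are just made-up example values):

#include <array>
#include <iostream>
#include <random>

// Minimal sketch of the lock: the algorithm calls tryCode() with a guess and
// gets back whether the lock opened plus the four (noisy) light colors,
// but it never reads 'secret' directly.
bool tryCode(const std::array<int, 4>& guess, const std::array<int, 4>& secret,
             double p_right, double p_wrong,
             std::array<bool, 4>& lights, std::mt19937& rng)
{
    std::uniform_real_distribution<double> u(0.0, 1.0);
    bool opened = true;
    for (int i = 0; i < 4; ++i) {
        bool digitCorrect = (guess[i] == secret[i]);
        opened = opened && digitCorrect;
        double pGreen = digitCorrect ? p_right : p_wrong;
        lights[i] = (u(rng) < pGreen);  // true = green, false = red
    }
    return opened;
}

int main()
{
    std::mt19937 rng(12345);
    const std::array<int, 4> secret = {4, 7, 1, 9};  // example secret code
    const std::array<int, 4> guess  = {4, 2, 1, 0};  // example guess
    std::array<bool, 4> lights;
    bool opened = tryCode(guess, secret, 0.95, 0.05, lights, rng);
    std::cout << "opened: " << opened << "  lights:";
    for (bool g : lights) std::cout << ' ' << (g ? 'G' : 'R');
    std::cout << '\n';
}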
Editor
Joined: 11/3/2013
Posts: 506
Here's what I would want my code to do: at each stage, pick a random four-digit number. The probability of picking each digit at each position would be weighted so as to reflect the information already received: if a digit has been tried before and its light flashed red, it is less likely to be picked by the algorithm, and if its light flashed green, it is more likely to be picked (assuming p_right > p_wrong). The algorithm is also modified so that it never guesses the same combination twice. Some pseudocode, where rand(a) denotes the current probability distribution over the first digit (and likewise rand(b), rand(c), rand(d)):

rand(a), rand(b), rand(c), rand(d) = uniform distributions over the digits 0-9
guessed = empty list
repeat:
    draw a from rand(a), b from rand(b), c from rand(c), d from rand(d)
    if (a,b,c,d) is already in guessed, draw again        // stops the program guessing the same code twice
    enter the guess (a,b,c,d) and add it to guessed
    if the lock opens, stop
    for each position x:
        if light x is green, rand(x) = rand(x)green       // Bayesian update of the distribution given a green light, based on p_right and p_wrong
        else                 rand(x) = rand(x)red         // similar update for a red light

I apologise if this is unreadable or just doesn't make any sense; it's been a while since I did any coding and I was always rubbish at it.
Skilled player (1828)
Joined: 4/20/2005
Posts: 2161
Location: Norrköping, Sweden
Interesting approach, thatguy. I also thought that some kind of Bayesian approach should work well here, since basically we are updating our beliefs in the light (no pun intended) of more and more data. I haven't worked out the exact equations, though, and it seems that neither have you :) (or maybe you have but are keeping them secret!) It would be awesome if you could get a working script for your idea so we can see how much better it is than the brute force's 5000 average guesses.

I had the following thought: you run the 10 values 0000, 1111, 2222, ..., 9999 many times each (maybe 100 times each, giving 1000 guesses total), and note the observed frequency of green for each of the 4 lights and each of the 10 values, giving you a total of 40 frequency estimates. We know that 4 of these will be approximately p_right, and the remaining 36 should be close to p_wrong. Our goal would then be to identify which belongs to which group. I haven't worked out the details here, but maybe it's a possible approach. Of course you could halt this routine early if you are confident that you found the answer before getting to 9999.
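For what it's worth, the data-gathering phase of that idea might look roughly like this (just a sketch with a made-up secret code and simulated lights; the step of deciding which of the 40 frequencies belongs to which group is left open):

#include <array>
#include <iostream>
#include <random>

// Guess 0000, 1111, ..., 9999 a hundred times each and tally how often each
// of the four lights shows green. freq[pos][d] should end up near p_right
// when d is the correct digit at position pos, and near p_wrong otherwise.
int main()
{
    const double p_right = 0.95, p_wrong = 0.05;
    const std::array<int, 4> secret = {4, 7, 1, 9};  // stand-in for the unknown code
    const int runs = 100;

    std::mt19937 rng(12345);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    double freq[4][10] = {};
    for (int d = 0; d <= 9; ++d) {
        int greens[4] = {};
        for (int r = 0; r < runs; ++r) {
            for (int pos = 0; pos < 4; ++pos) {
                double pGreen = (d == secret[pos]) ? p_right : p_wrong;
                if (u(rng) < pGreen) ++greens[pos];  // simulated light for this try
            }
        }
        for (int pos = 0; pos < 4; ++pos)
            freq[pos][d] = double(greens[pos]) / runs;
    }

    // Print the 40 estimated green frequencies; 4 of them should sit near
    // p_right and the other 36 near p_wrong.
    for (int pos = 0; pos < 4; ++pos) {
        std::cout << "position " << pos << ":";
        for (int d = 0; d <= 9; ++d) std::cout << ' ' << freq[pos][d];
        std::cout << '\n';
    }
}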
Editor
Joined: 11/3/2013
Posts: 506
p(green) = p(green|correct guess)*p(correct guess) + p(green|incorrect guess)*p(incorrect guess)
         = p_right*p(correct guess) + p_wrong*p(incorrect guess)
         = p_right*rand + p_wrong*(1-rand)

randgreen = p(correct guess|green) = p(green|correct guess)*p(correct guess)/p(green)
          = p_right*rand/(p_right*rand + p_wrong*(1-rand))

Similarly,

randred = (1-p_right)*rand/((1-p_right)*rand + (1-p_wrong)*(1-rand))

For a sanity check, when p_right = p_wrong then randgreen = randred = rand, which is what you would expect.
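To put rough numbers on it, with p_right = 0.95, p_wrong = 0.05 and the uniform prior rand = 0.1, a green light lifts a digit's probability to 0.95*0.1/(0.95*0.1 + 0.05*0.9) ≈ 0.68, while a red light drops it to 0.05*0.1/(0.05*0.1 + 0.95*0.9) ≈ 0.0058. As a minimal sketch of the update step (the function name and the values are just for illustration):

#include <iostream>

// Posterior probability that a particular digit value is the correct one,
// given its prior probability (called "rand" above) and the light color.
double update(double prior, bool green, double p_right, double p_wrong)
{
    if (green)
        return p_right * prior / (p_right * prior + p_wrong * (1.0 - prior));
    return (1.0 - p_right) * prior / ((1.0 - p_right) * prior + (1.0 - p_wrong) * (1.0 - prior));
}

int main()
{
    const double p_right = 0.95, p_wrong = 0.05;
    const double prior = 0.1;  // uniform prior over the 10 possible digit values
    std::cout << "after a green light: " << update(prior, true, p_right, p_wrong) << '\n';   // ≈ 0.68
    std::cout << "after a red light:   " << update(prior, false, p_right, p_wrong) << '\n';  // ≈ 0.0058
}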
Skilled player (1828)
Joined: 4/20/2005
Posts: 2161
Location: Norrköping, Sweden
I made an algorithm that completes this in the following average number of tries (averaged over 500 Monte Carlo simulations):

p_right = 0.95, p_wrong = 0.05: 13 tries
p_right = 0.80, p_wrong = 0.20: 22 tries
p_right = 0.60, p_wrong = 0.40: 181 tries
p_right = 0.55, p_wrong = 0.45: 506 tries

Quite a good improvement from the original 5000. When p_right = p_wrong, the average number of tries was approximately 5000.

Here's how I did this: for each 4-digit number I compared each digit's "probability weight" (that I made up myself) to p_right. High value = closer to p_right. I summed the values 100 / (1 + 100*abs(p_right - p_now)), where p_now is the current ratio of times that the green light has lit up for this digit in this position. So if the current digit has a ratio close to p_right, I add a high number, and if all 4 digits have ratios close to p_right, the total is even higher. I then pick the 4-digit number with the highest total (after having removed the 4-digit numbers I have picked previously). If anyone is interested I can upload my Matlab code.

Note that this algorithm does not care about p_wrong. So as you can see, the average number of tries is low if and only if p_right and p_wrong are far apart.
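In C++-ish form, the scoring step looks roughly like this (a rough sketch rather than my actual Matlab code; the green ratios in main are made up just to show the calculation):

#include <array>
#include <cmath>
#include <iostream>

// Score a candidate 4-digit code: for each position, compare the observed
// green ratio of that digit (p_now) against p_right and add
// 100 / (1 + 100*|p_right - p_now|). Higher total = more plausible candidate.
double score(const std::array<int, 4>& code,
             const std::array<std::array<double, 10>, 4>& greenRatio,
             double p_right)
{
    double total = 0.0;
    for (int pos = 0; pos < 4; ++pos) {
        double p_now = greenRatio[pos][code[pos]];
        total += 100.0 / (1.0 + 100.0 * std::fabs(p_right - p_now));
    }
    return total;
}

int main()
{
    // Made-up green ratios: pretend digit 3 has looked "green-ish" at every position.
    std::array<std::array<double, 10>, 4> greenRatio{};
    for (int pos = 0; pos < 4; ++pos)
        for (int d = 0; d <= 9; ++d)
            greenRatio[pos][d] = (d == 3) ? 0.9 : 0.1;

    std::cout << "score(3333) = " << score({3, 3, 3, 3}, greenRatio, 0.95) << '\n';
    std::cout << "score(1234) = " << score({1, 2, 3, 4}, greenRatio, 0.95) << '\n';
}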
Editor, Expert player (2080)
Joined: 6/15/2005
Posts: 3284
Regarding the problem Randil brought up, maximum likelihood seems to work the best (although I can't prove it). Basically, at each step we guess the four-digit code that is most likely to yield the evidence given so far.

I will use the following notation:
● pr to denote Prob(green|right),
● pw to denote Prob(green|wrong),
● qr = 1-pr,
● qw = 1-pw.
I will also assume that 0 < pw < pr < 1.

The probability that a four-digit code (that is not already guessed) yields the evidence so far is

pr^A qr^B pw^C qw^D,

where A is # of greens for correctly guessed digits (over all the evidence so far), B is # of reds for correctly guessed digits, C is # of greens for incorrectly guessed digits, and D is # of reds for incorrectly guessed digits. Note that A+C is the total number of greens and B+D is the total number of reds. So then divide probability by pw^(A+C) qw^(B+D), which is constant over all four-digit codes, and this gives

(pr/pw)^A (qr/qw)^B,

and the log of this is

A log(pr/pw) + B log(qr/qw).

So we just choose the four-digit code that maximizes the above quantity.

I wrote C++ code (C++11 standard) to simulate this. Various values of pr and pw are selected. For each set of values, it runs the guessing game 5000 times using the above strategy and takes the average number of tries. Except for the first guess which is completely random, if two or more values are tied for the maximum, this code chooses the first instance, which is the smallest 4-digit code/number. (Note that in the code, "TRIES" is number of times to run the game; the number of tries/guesses for each game is called "count".)

Here are the results. The resulting averages tend to be better than the averages Randil posted for his strategy (ex. for pr=0.55 and pw=0.45, the average with this strategy is 276.97, compared to 506). Also, I noticed that pairs of values with fixed pr-pw which are further away from the center (0.5) have a lower average number of guesses.

A few other things. I only did pr > pw but that's good enough since if pr < pw, just reverse the colors and switch pr with qr and pw with qw. The case pr = pw is obvious enough (it's equivalent to completely random guessing and the average number of tries for that is 5000.5). I didn't do pw=0 or pr=1 though since the code doesn't support it; you can try the code with something like pw=0.0001 or pr=0.9999 but I don't expect anything really special to come out of it. The only other thing is that the case pr=1 and pw=0 can be done theoretically; the strategy is obviously to independently test each digit 0, 1, ... until you get the right digit, and the number of tries is always the largest digit plus one. The probability that the largest digit is k is ((k+1)^4 - k^4)/10000, and when you work it out, the expected value for the number of tries is 8.4667.
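The scoring part might be sketched roughly as follows (an illustration of the A/B bookkeeping only, not the actual program; the evidence in main is made up):

#include <array>
#include <cmath>
#include <iostream>
#include <vector>

struct Observation {
    std::array<int, 4> guess;
    std::array<bool, 4> green;  // light colors seen for that guess
};

// Log-likelihood score of a candidate code, up to a constant:
// A*log(pr/pw) + B*log(qr/qw), where A (resp. B) counts greens (resp. reds)
// at positions where the candidate agrees with a past guess.
double logScore(const std::array<int, 4>& candidate,
                const std::vector<Observation>& evidence,
                double pr, double pw)
{
    const double qr = 1.0 - pr, qw = 1.0 - pw;
    int A = 0, B = 0;
    for (const Observation& obs : evidence)
        for (int i = 0; i < 4; ++i)
            if (candidate[i] == obs.guess[i]) {
                if (obs.green[i]) ++A;
                else ++B;
            }
    return A * std::log(pr / pw) + B * std::log(qr / qw);
}

int main()
{
    // Two made-up past guesses with their observed light colors.
    std::vector<Observation> evidence = {
        {{1, 2, 3, 4}, {true, false, false, true}},
        {{1, 5, 3, 9}, {true, false, false, false}},
    };
    std::cout << logScore({1, 2, 3, 4}, evidence, 0.95, 0.05) << '\n';
    std::cout << logScore({7, 7, 7, 7}, evidence, 0.95, 0.05) << '\n';
    // The full strategy would loop over all 10000 codes not yet guessed
    // and pick the one with the largest score.
}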
Skilled player (1828)
Joined: 4/20/2005
Posts: 2161
Location: Norrköping, Sweden
Nice work! I agree with you that maximum likelihood should be the best (or at least one of the best) ways to go. I'm impressed that you halved my number of tries for 0.55/0.45. What I'm wondering now is whether the maximum likelihood approach is better than the Bayesian approach thatguy mentioned. What do you guys think? I suspect that the Bayesian approach might be better if you have a very small dataset and p_wrong and p_right are close (and of course a good a priori distribution), but that's just a theory...
Editor
Joined: 11/3/2013
Posts: 506
I think the likelihood approach is better - purely because Fractal is a far better mathematician than me.
Editor, Expert player (2080)
Joined: 6/15/2005
Posts: 3284
OK, I looked at choosing the next guess based on the Bayesian probability distribution. That is, if I understand it right, we make a guess with probability Prob(guess is correct|evidence). Since

Prob(guess is correct|evidence) = Prob(evidence|guess is correct) * Prob(guess is correct) / Prob(evidence)

and a priori Prob(guess is correct) is always 1/10000 and at each stage Prob(evidence) is a fixed number, the probability distribution is proportional to likelihood. So we just select the next guess proportionally according to

pr^A qr^B pw^C qw^D,

or, after dividing by pw^(A+C) qw^(B+D), which is a fixed number at each stage,

(pr/pw)^A (qr/qw)^B.

I modified the previous C++ program to get a new program that uses this method, and here are the results. The average number of tries is larger than for the maximum likelihood strategy by up to 50%, but otherwise it shows patterns similar to the other results.
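The selection step for this variant can be sketched as follows (again only an illustration, not the modified program; std::discrete_distribution does the proportional sampling, and the (A, B) counts are made up):

#include <cmath>
#include <iostream>
#include <random>
#include <utility>
#include <vector>

// Pick the next guess with probability proportional to the relative
// likelihood weight (pr/pw)^A * (qr/qw)^B of each not-yet-guessed code.
// The weights here are toy values; in the real thing they would be computed
// from the evidence as in the maximum-likelihood post above.
int main()
{
    std::mt19937 rng(12345);
    const double pr = 0.95, pw = 0.05, qr = 1.0 - pr, qw = 1.0 - pw;

    // Pretend we have three surviving candidates with these (A, B) counts.
    std::vector<std::pair<int, int>> counts = {{3, 1}, {1, 2}, {0, 4}};

    std::vector<double> weights;
    for (const auto& ab : counts)
        weights.push_back(std::pow(pr / pw, ab.first) * std::pow(qr / qw, ab.second));

    // Sample an index proportionally to its weight.
    std::discrete_distribution<int> pick(weights.begin(), weights.end());
    int chosen = pick(rng);
    std::cout << "chose candidate " << chosen << '\n';
}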
Player (147)
Joined: 11/14/2011
Posts: 68
Location: Brookline, MA
I think that the idea posted by thatguy is similar, but here is mine: start with a uniform prior on each digit and update the distributions with a Bayesian approach while making guesses according to the following scheme.

If both p_right and p_wrong are >= 1/2, do the following. Make a random guess and update the probability distributions. From then on, keep generating random combinations, but using the updated distributions. If the generated combination was already guessed, reject it and try again. Otherwise, guess it, and use the lock's response to update your distributions.

In any other case, for whichever probability is < 1/2, do the same as above, but assume the light display is wrong when red if p_wrong < 1/2, and wrong when green if p_right < 1/2.

This method will inevitably still give the unavoidable expectation of 5000 guesses if p_right = p_wrong = 1/2, but the expectation should drop dramatically as you increase p_right and p_wrong from those values. I will run some tests on this method using R.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
Prove that 2^(1/n) is irrational for any integer value of n > 1.
Player (36)
Joined: 9/11/2004
Posts: 2631
If 2^(1/n) is rational, then 2^(1/n) = a/b where a and b are coprime integers. Then

2 = a^n/b^n

with a^n and b^n still coprime integers. So

2b^n = a^n,

which implies that b^n and a^n share all factors with each other aside from a factor of two. However, a^n and b^n are supposed to be coprime, and coprimes cannot share factors except for 1. Thus we've reached a contradiction, and 2^(1/n) must be irrational for all integer n > 1.
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
P.JBoy
Any
Editor
Joined: 3/25/2006
Posts: 850
Location: stuck in Pandora's box HELLPP!!!
Warp wrote:
Prove that 2^(1/n) is irrational for any integer value of n > 1.
Suppose 2^(1/n) is rational; then it may be expressed as a/b for coprime a, b, with b != 0.

2^(1/n) = a/b
2 = (a/b)^n
2b^n = a^n.

From this we see that a^n is even, so it must be the case that a is even, which in turn implies that b is odd, as a and b are coprime. As a is even, a = 2c for some integer c:

2b^n = (2c)^n = 2^n c^n
b^n = 2^(n-1) c^n

Thus if n > 1: b^n is even, which in turn implies b is even, but this contradicts the condition that a and b are coprime. Thus 2^(1/n) is irrational.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
OmnipotentEntity wrote:
If 2^(1/n) is rational, then 2^(1/n) = a/b where a and b are coprime integers. Then 2 = a^n/b^n, with a^n and b^n still coprime integers.
I'm not so sure I see trivially why, if a and b are coprime, then a^n and b^n are also coprime.
P.JBoy
Any
Editor
Joined: 3/25/2006
Posts: 850
Location: stuck in Pandora's box HELLPP!!!
Initially a and b share no prime factors, and raising a number to some power doesn't introduce any new prime factors.
Skilled player (1653)
Joined: 7/25/2007
Posts: 299
Location: UK
Just from the fact that the integers are a unique factorisation domain, we have that the primes dividing a number will be the only primes dividing all powers of it; thus we cannot have one power being a prime multiple of another. I.e. for a given prime P:

P^(1/n) = a/b
P = a^n/b^n
P*b^n = a^n

Given the prime factorisations

a = p_1^(x_1) p_2^(x_2) ... p_l^(x_l)
b = q_1^(y_1) q_2^(y_2) ... q_m^(y_m)

we get

a^n = p_1^(n*x_1) p_2^(n*x_2) ... p_l^(n*x_l)
b^n = q_1^(n*y_1) q_2^(n*y_2) ... q_m^(n*y_m)

For P*b^n = a^n, we have

P*b^n = q_1^(n*y_1) q_2^(n*y_2) ... P^(n*y_P + 1) ... q_m^(n*y_m)
      = a^n = p_1^(n*x_1) p_2^(n*x_2) ... P^(n*x_P) ... p_l^(n*x_l)

(here x_P and y_P denote the exponents of P in a and b respectively, possibly zero). For these to be equal, due to unique factorisation into primes, we need all p_i = q_i and x_i = y_i. But just consider the power of P: clearly we have n*x_P + 1 =/= n*x_P. Thus the two sides are not equal, and thus one power is not a prime multiple of another. Hence nth roots of primes are not rational.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
2^(1/n) is irrational for any integer value of n > 1. I wonder if that's true also for any rational value of n > 1. (I can't intuitively think of a counter-example at this moment. But of course intuition is no proof.)
Amaraticando
It/Its
Editor, Player (162)
Joined: 1/10/2012
Posts: 673
Location: Brazil
In general, if n^p is not an integer (n integer and p rational), then it's irrational. Proof: Suppose n^(a/b) is rational, not an integer, and gcd(a,b) = 1. The last supposition is always possible, since we can reduce the fraction. We want to arrive at a contradiction.

(n^a)^(1/b) = c/d (every variable is an integer, gcd(c,d) = 1 and |d| > 1)
n^a = (c^b)/(d^b)
m = (c^b)/(d^b) (where m = n^a is an integer)

Therefore d^b divides c^b, which is absurd: since d and c have no common prime factor and |d| > 1, that is impossible.

EDIT: we don't even need to reduce a/b, only c/d.
Player (173)
Joined: 12/28/2007
Posts: 235
Location: Japan, Sapporo
Flip wrote:
Just from the fact that the integers are a unique factorisation domain
We don't require the uniqueness of factorization. In fact, we only have to require a slightly weaker condition that any pair of integers a and b has a greatest common divisor (and hence a least common multiple), which is deduced from Euclid's algorithm in the case of integers. Now recall the two conditions on an integer p (or an element p of an integral domain).
  • (I) p cannot be written as a product of two non-invertible (non-zero) elements. (Such p is said to be irreducible. This is a usual definition of a prime number.)
  • (P) If p divides ab, then p divides a, or p divides b. (Such p is said to be prime.)
(P) always implies (I), but in general, the converse is not true. One of the sufficient conditions for the converse to be true is the GCD condition above. Suppose that an irreducible element p divides ab, and assume p doesn't divide a, so that d := gcd(a,p) ≠ p. Since d divides p and p is irreducible, d = 1. Since both p and a divide ab, l := lcm(a,p) divides ab, hence lc = ab for some c. On the other hand, we have ap = dl = l. Altogether, we have ab = lc = apc. Thus b = pc, or equivalently p divides b. Furthermore, it follows by induction that if an irreducible element p divides a^n, then p divides a, which was a key property in proving the n-th root of a prime is irrational.
Amaraticando wrote:
In general, if n^p is not integer (n integer and p rational), then it's irrational.
I shall prove this in a slightly different setting: if p is a prime, then p^r is irrational for any rational r with 0 < r < 1. (The case r > 1 follows from the observation that if r = s + r' with s an integer and 0 < r' < 1, then p^r = p^s p^r', and p^s is an integer.)

Proof. Let p be a prime, and r = a/b rational with gcd(a,b) = 1 and 0 < a < b. Assume p^r is rational: p^r = c/d, where gcd(c,d) = 1 and d > 0. Then we have dp^(a/b) = c, hence d^b p^a = c^b. Since p divides LHS, p divides c^b and hence p divides c. We can write c = pc'. Then we have d^b p^a = p^b c'^b. Dividing it by p^a gives d^b = p^(b-a) c'^b. Again, RHS is a multiple of p, hence so is d. Thus p is a common divisor of c and d, which is a contradiction. The proof is complete.

The case of a composite number is harder to deal with, but essentially the above proof applies, with some appropriate case reduction.
Retired because of that deletion event. Projects (WIP RIP): VIP3 all-exits "almost capeless yoshiless", VIP2 all-exits, TSRP2 "normal run"
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
Warp wrote:
2^(1/n) is irrational for any integer value of n > 1. I wonder if that's true also for any rational value of n > 1.
Conversely, my intuition tells me that if we remove the limitation of n being rational, in other words, we accept any real value of n > 1, then you can choose an n so that 2^(1/n) is rational. However, I can't come up with an example of this. Either there is a really trivial example that just doesn't come to mind at this moment, or it's a bit difficult.
Editor, Skilled player (1348)
Joined: 12/28/2013
Posts: 396
Location: Rio de Janeiro, Brasil
Warp wrote:
Warp wrote:
2^(1/n) is irrational for any integer value of n > 1. I wonder if that's true also for any rational value of n > 1.
Conversely, my intuition tells me that if we remove the limitation of n being rational, in other words, we accept any real value of n > 1, then you can choose an n so that 2^(1/n) is rational. However, I can't come up with an example of this. Either there is a really trivial example that just doesn't come to mind at this moment, or it's a bit difficult.
That's right. If we have, for example, 2^(1/x) = 1.5, then x is an irrational number (1.70951...).
My YouTube channel: https://www.youtube.com/channel/UCVoUfT49xN9TU-gDMHv57sw Projects: SMW 96 exit. SDW any%, with Amaraticando. SMA2 SMW small only Kaizo Mario World 3
Amaraticando
It/Its
Editor, Player (162)
Joined: 1/10/2012
Posts: 673
Location: Brazil
2^(1/n) = r
log(2^(1/n)) = (1/n)*log(2) = log(r)
n = log(2)/log(r)

Since log (natural or decimal) is a bijective function from (0, +∞) → (-∞, +∞), such a number always exists for all r > 0, r ≠ 1.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
In other words, if I want 2^(1/n) to be x, then the required value of n is the logarithm base-x of 2 (i.e. log_x(2)). Makes sense. Which, in conjunction with the previous postulate, means that log_x(2) is never rational for rational values of x?