Player (98)
Joined: 12/12/2013
Posts: 380
Location: Russia
OmnipotentEntity wrote:
r57shell wrote:
FractalFusion wrote:
Indeed, g(x)=x+f(x), and so g(d+1)-g(d)=1+f(d+1)-f(d).
This works only for x in ℤ (integers).
QED. Considering that g is defined to have a domain of ℝ, nothing in these steps requires d ∈ ℤ.
f(x) = floor(n/x)
g(x) = floor(x + n/x)
Let x = 1/2. Then
f(1/2) = floor(n/(1/2)) = floor(2n) = 2n
g(1/2) = floor(1/2 + n/(1/2)) = floor(1/2 + 2n) = 2n
x + f(x) = 1/2 + f(1/2) = 1/2 + 2n (for f(1/2), see just above)
So g(x) != x + f(x), since 2n != 1/2 + 2n.
FractalFusion, I'll look at the proof a bit later.
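A quick numeric check of this counterexample, as a Python sketch (n = 10 is an arbitrary sample value):

import math

n, x = 10, 0.5
f = math.floor(n / x)          # f(1/2) = floor(2n) = 20
g = math.floor(x + n / x)      # g(1/2) = floor(1/2 + 2n) = 20
print(g, x + f)                # 20 vs 20.5, so g(x) != x + f(x) at this non-integer x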
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
Famously, lim(x → ∞) (1 + 1/x)^x = e, which is one of the definitions of e. However, it seems that also lim(x → -∞) (1 + 1/x)^x = e, although it's not immediately apparent why.
Player (98)
Joined: 12/12/2013
Posts: 380
Location: Russia
Warp wrote:
Famously, lim(x → ∞) (1 + 1/x)^x = e, which is one of the definitions of e. However, it seems that also lim(x → -∞) (1 + 1/x)^x = e, although it's not immediately apparent why.
Let y = -x; then
lim(x → -∞) (1 + 1/x)^x = lim(y → ∞) (1 - 1/y)^(-y)
= lim(y → ∞) ((y - 1)/y)^(-y) = lim(y → ∞) (y/(y - 1))^y
= lim(y → ∞) ((y - 1 + 1)/(y - 1))^y = lim(y → ∞) (1 + 1/(y - 1))^y
= lim(y → ∞) (1 + 1/(y - 1))^(y - 1) * (1 + 1/(y - 1))
= e * lim(y → ∞) (1 + 1/(y - 1)) = e
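A quick numerical sanity check of both tails, as a Python sketch (the sample points are arbitrary):

import math

for x in (1e3, 1e6, -1e3, -1e6):
    print(x, (1 + 1/x)**x)    # both tails approach e
print("e =", math.e)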
FractalFusion wrote:
Since g_r(α)=C+1 and g_r(x)=x+n/x is increasing on [sqrt(n),∞), it follows that g_r(ceil(α))≥C+1, so g(ceil(α))≥C+1. If ceil(α)-1≥sqrt(n), then g_r(ceil(α)-1)<C+1, and so g(ceil(α)-1)=C. (If not, then ceil(α)-1=floor(sqrt(n)), and it must follow that g(ceil(α)-1)=C; otherwise min(g(x))=C+1, contradicting that the minimum is C.) Therefore the minimum d such that g(d)<g(d+1) is d=ceil(α)-1.
First, just to make it clearer: g_r(ceil(α)) ≥ C+1 holds because g_r(α) = C+1 and g_r is non-decreasing on [α, ceil(α)].

Now, about "If ceil(α)-1≥sqrt(n)". To make it clearer: ignoring that condition for a moment, there are two possible cases:
1) ceil(α)-1 < sqrt(n) <= ceil(α)
2) sqrt(n) <= ceil(α)-1 < ceil(α)
The condition "If ceil(α)-1≥sqrt(n)" is about case (2). "then g_r(ceil(α)-1)<C+1": the reason is the following: g_r(α-ε) < g_r(α) = C+1, in other words g_r(α-ε) < C+1 for small ε, including ε = α - (ceil(α)-1). Thus g(ceil(α)-1) <= C because of the flooring. OK, this is fine.

Now case (1): ceil(α)-1 < sqrt(n) <= ceil(α). "If not, then ceil(α)-1=floor(sqrt(n))": indeed. "and it must follow that g(ceil(α)-1)=C; otherwise min(g(x))=C+1, contradicting that the minimum is C": yes, because ceil(α) is then floor(sqrt(n))+1, and min(g(x)) must be attained at floor(sqrt(n)) or floor(sqrt(n))+1. Okay. Great, I haven't found any mistakes.

Just so you know, my approach was the following. Start with
floor(n/x) = floor(n/(x+1))
From the definition of floor:
floor(n/x) <= n/x < floor(n/x) + 1
floor(n/(x+1)) <= n/(x+1) < floor(n/(x+1)) + 1
Now you can replace floor(n/(x+1)) by floor(n/x) because they should be equal:
floor(n/x) <= n/x < floor(n/x) + 1
floor(n/x) <= n/(x+1) < floor(n/x) + 1
Now you can multiply both inequalities through (this is valid over the whole relevant range):
floor(n/x)*x <= n < (floor(n/x) + 1)*x
floor(n/x)*(x+1) <= n < (floor(n/x) + 1)*(x+1)
Taking the intersection of these ranges reduces this to one inequality:
floor(n/x)*(x+1) <= n < (floor(n/x) + 1)*x
Open the brackets:
floor(n/x)*x + floor(n/x) <= n < floor(n/x)*x + x
Now subtract floor(n/x)*x from all sides:
floor(n/x) <= n - floor(n/x)*x < x
Stare at it. Notice that n - floor(n/x)*x is the remainder. In other words:
floor(n/x) <= (n mod x) < x
The right "<" is always true because (n mod x) is always < x. So we have an equivalent task:
floor(n/x) <= (n mod x)
From here we can easily prove that the lower bound of x is sqrt(n): if x < sqrt(n), then sqrt(n) < floor(n/x) while (n mod x) < sqrt(n), hence floor(n/x) > (n mod x). So the lower bound is proved by contradiction.

Next, my idea was to use the following. Define m = floor(sqrt(n)), so m*m <= n < (m+1)*(m+1), and define k such that m*m + k = n. Then look at
m^2 + k = m^2 - y^2 + y^2 + k = (m + y)*(m - y) + y^2 + k = n
So I was thinking about what I can say about (m + y)*(m - y) + y^2 + k = n. To have floor(n/(m+y)) = m - y we need y^2 + k < m + y, and we also need to satisfy floor(n/x) <= (n mod x), i.e. m - y <= y^2 + k. So the answer can be m + y if
y^2 + k < m + y and m - y <= y^2 + k.
There are possibly mistakes here because I just wrote this from the top of my head. But from here you can also see that this grows quadratically, so y should be ~ sqrt(m) where m ~ sqrt(n). I haven't finished this approach; you did it first.
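For what it's worth, the equivalence between floor(n/x) = floor(n/(x+1)) and floor(n/x) <= (n mod x), and the sqrt(n) lower bound, are easy to sanity-check by brute force over positive integer x (as in the discussion above). A small Python sketch; the bound n <= 2000 is an arbitrary test range:

for n in range(1, 2001):
    smallest = None
    for x in range(1, n + 1):
        direct = n // x == n // (x + 1)     # floor(n/x) = floor(n/(x+1))
        reform = n // x <= n % x            # floor(n/x) <= (n mod x)
        assert direct == reform             # the reformulation agrees
        if direct and smallest is None:
            smallest = x
    if smallest is not None:
        assert smallest * smallest >= n     # smallest solution is at least sqrt(n)

print("equivalence and sqrt(n) lower bound hold for n up to 2000")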
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
Is math related to science?
Skilled player (1417)
Joined: 10/27/2004
Posts: 1978
Location: Making an escape
Yes.
A hundred years from now, they will gaze upon my work and marvel at my skills but never know my name. And that will be good enough for me.
Amaraticando
It/Its
Editor, Player (162)
Joined: 1/10/2012
Posts: 673
Location: Brazil
Related and inspired, but not dependent on it. Some people believe that mathematics is just a meaningless game of symbols. I don't like this though. It surely must have some metaphysical meaning.
Player (80)
Joined: 8/5/2007
Posts: 865
It always frustrates me when I have more trouble with the Riddler Express puzzle than the Riddler Classic puzzle. Yet that's where I am this week. In about 30 minutes' work in Excel, I found that the number of times needed to flip the coins to correctly identify which one has been doctored is between 130 and 140 (I have the exact number, but I'll keep this post spoiler-free).

Meanwhile, I'm making little headway into the Riddler Express puzzle. My first finding was that when the match is tied at 1-1, there is no incentive to play scissors since double-scissors are objectively more powerful. Also, based on the symmetry of the game, the probability of winning a match at 1-1 is obviously 1/2. But then I had to extrapolate backwards to an uneven match at 0-1 (or 1-0, depending on your perspective). I wrote the probability of winning the entire match as a bunch of terms based on the relative probabilities for each player using rock, paper, scissors, or double-scissors. There are 7 variables (recall that the player up in the match 1-0 has no incentive to choose scissors) and since the probabilities of playing something must sum to 1 for each player, there are 5 degrees of freedom.

So I wrote out the probability that the player down 0-1 wins the match. I'll let someone else write it symbolically, but this is 1/2 times the probability that they win with paper, rock, or double-scissors (1/2 because they still have to win the final round) plus the probability that they win with scissors (instant win), all divided by the same terms plus the probability that their opponent wins. I called the terms in the numerator W (for "win") and the added terms in the denominator L (for "lose").

Next, I took some derivatives to indicate whether the probability of winning is helped or hurt by throwing more of a particular hand. Again, I'll avoid writing this explicitly, but for example, the derivative of the probability of winning with respect to the rate at which the player throws paper is proportional to 1/2 times the rate at which their opponent throws rock times L minus the rate at which their opponent throws double-scissors times W. You can extrapolate this pattern outward to obtain similar expressions for all other variables. These aren't quite derivatives (they omit the denominator squared in the quotient rule) but they must all be zero when an optimum strategy is obtained. I called these values "pressures" since they indicate whether a variable should increase or decrease.

Basically, I took my original values, then added to them some small constant times the pressures I calculated minus the average pressure. I iterated this process, hoping to home in on optimal strategies for both players. It didn't work. For every initial condition I plug in, I end up with negative probabilities or probabilities that escape to infinity. I also tried a search based on fitting a paraboloid to the probability function, then finding the minimum of this paraboloid. This method is similar to Newton's method for finding the zeroes of a function, except it's a multivariable version. That took a long time to set up and was a complete mess. So I'm stuck.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
Amaraticando wrote:
Related and inspired, but not dependent on it. Some people believe that mathematics is just a meaningless game of symbols. I don't like this though. It surely must have some metaphysical meaning.
In case somebody missed it, it's a meme. (Google it).
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
lim (x → 0+) x^(1/ln(2x)) = e
That can be fun to prove, but it's not my question. Wolfram alpha agrees with that result, however, for this:
lim (x → 0+) x^(1/ln(x))
it says: "(limit does not exist in the real line) (there are infinitely many singularities in every neighborhood about x = 0)"
Is that indeed so? And if so, what does that mean?
Player (36)
Joined: 9/11/2004
Posts: 2631
I'm not sure why Wolfram Alpha is complaining about that. x^(1/ln(x)) = e^(ln(x) * 1/ln(x)) = e wherever ln(x)/ln(x) makes sense (essentially, everywhere except x=0 and x=1). So it's pretty straightforward to prove that the limit of a constant function with point discontinuities is the value of that function. In fact, Mathematica has no problem with it.
Limit[x^(1/Log[x]), x -> 0]
e
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Editor, Expert player (2080)
Joined: 6/15/2005
Posts: 3284
Bobo the King wrote:
It always frustrates me when I have more trouble with the Riddler Express puzzle than the Riddler Classic puzzle. Yet that's where I am this week. In about 30 minutes' work in Excel, I found that the number of times needed to flip the coins to correctly identify which one has been doctored is between 130 and 140 (I have the exact number, but I'll keep this post spoiler-free).
The Riddler Classic problem is a straightforward computational question. I got 134. The Riddler Express problem is game theory, which really doesn't belong in Riddler Express unless it is a baby question. So I don't blame anyone for thinking that Riddler Express is harder than Riddler Classic this week. As a sample of how complicated game theory can get, see the solution to this Riddler problem. This one isn't nearly as hard because it has far fewer variables. But it still doesn't belong in Riddler Express.
I managed to work out the strategies. Let δ be the real solution to 4x^3-6x^2+5x-1=0. (δ~0.2733)
1-1: Each player: 1/3 rock, 1/3 paper, 1/3 double scissors → 1/2 chance of winning the match.
1-0: Player with one win: 2δ-2δ^2 (~0.3972) rock, δ (~0.2733) paper, 2δ-4δ^2+4δ^3 (~0.3295) double scissors → 1-δ (~0.7267) chance of winning the match.
Player with no wins: 2δ (~0.5466) rock, 2δ-4δ^2 (~0.2478) paper, δ-2δ^2+4δ^3 (~0.2056) scissors → δ (~0.2733) chance of winning the match.
0-0: Each player: 1/(3-4δ) (~0.5244) rock, (1-2δ)/(3-4δ) (~0.2378) paper, (1-2δ)/(3-4δ) (~0.2378) scissors → 1/2 chance of winning the match.
The strategies for 1-0 and 0-0 are all based on minimaxing the expected reward matrix, which takes too much time for me to cover right now. I might write up details on how to find these values later.
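If anyone wants to double-check those decimals, the real root of the cubic and the quoted frequencies are easy to reproduce. A minimal Python sketch (only the formulas quoted above are used, nothing new):

# Bisection for the real root of 4x^3 - 6x^2 + 5x - 1 = 0
# (the derivative 12x^2 - 12x + 5 is always positive, so the real root is unique)
f = lambda t: 4*t**3 - 6*t**2 + 5*t - 1
lo, hi = 0.0, 1.0                      # f(0) = -1 < 0 and f(1) = 2 > 0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
d = (lo + hi) / 2
print(d)                                               # ~0.2733

print(2*d - 2*d**2, d, 2*d - 4*d**2 + 4*d**3)          # 1-0, player with one win
print(2*d, 2*d - 4*d**2, d - 2*d**2 + 4*d**3)          # 1-0, player with no wins
print(1/(3 - 4*d), (1 - 2*d)/(3 - 4*d))                # 0-0: rock, paper (= scissors)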
Player (80)
Joined: 8/5/2007
Posts: 865
FractalFusion wrote:
Bobo the King wrote:
It always frustrates me when I have more trouble with the Riddler Express puzzle than the Riddler Classic puzzle. Yet that's where I am this week. In about 30 minutes' work in Excel, I found that the number of times needed to flip the coins to correctly identify which one has been doctored is between 130 and 140 (I have the exact number, but I'll keep this post spoiler-free).
The Riddler Classic problem is a straightforward computational question. I got 134. The Riddler Express problem is game theory, which really doesn't belong in Riddler Express unless it is a baby question. So I don't blame anyone for thinking that Riddler Express is harder than Riddler Classic this week. As a sample of how complicated game theory can get, see the solution to this Riddler problem. This one isn't nearly as hard because it has far fewer variables. But it still doesn't belong in Riddler Express.
I managed to work out the strategies. Let δ be the real solution to 4x^3-6x^2+5x-1=0. (δ~0.2733)
1-1: Each player: 1/3 rock, 1/3 paper, 1/3 double scissors → 1/2 chance of winning the match.
1-0: Player with one win: 2δ-2δ^2 (~0.3972) rock, δ (~0.2733) paper, 2δ-4δ^2+4δ^3 (~0.3295) double scissors → 1-δ (~0.7267) chance of winning the match.
Player with no wins: 2δ (~0.5466) rock, 2δ-4δ^2 (~0.2478) paper, δ-2δ^2+4δ^3 (~0.2056) scissors → δ (~0.2733) chance of winning the match.
0-0: Each player: 1/(3-4δ) (~0.5244) rock, (1-2δ)/(3-4δ) (~0.2378) paper, (1-2δ)/(3-4δ) (~0.2378) scissors → 1/2 chance of winning the match.
The strategies for 1-0 and 0-0 are all based on minimaxing the expected reward matrix, which takes too much time for me to cover right now. I might write up details on how to find these values later.
These values are (almost) stable in my Excel spreadsheet, although it appears that they creep away from the saddle point, even with a very low step size. Either my gradient search algorithm is inherently unstable or I've entered it incorrectly somehow.
Editor, Expert player (2080)
Joined: 6/15/2005
Posts: 3284
I'll talk about how I arrived at the values in my solution.

1-1: As Bobo the King already mentioned, choosing double scissors dominates choosing scissors when a player already has one win. So we can remove scissors. This leaves standard rock-paper-scissors (with double scissors in place of scissors). So both players play rock/paper/double scissors each with 1/3 chance, and both players have a 1/2 chance of winning the match.

1-0: We call the player with one win "P1" and the other player "P2". Let σ be the chance that P1 wins the match and δ the chance that P2 wins the match. Note that σ+δ=1, and that 1/4<δ<1/2 and 1/2<σ<3/4. I explain the core theory below. The expected reward (read: probability of winning the match) for P1 for each strategy is shown in the table below: (R=rock, P=paper, S=scissors, DS=double scissors)
          P2
          d   e   f   g
          R   P   S   DS
         ---------------
P1 a R |  σ  0.5  1   1
   b P |  1   σ   0  0.5
   c DS| 0.5  1   1   σ
(Remember that for P1, choosing double scissors dominates choosing scissors, so we don't need a row for scissors.) To simplify things for later, we switch the table to the expected reward for P2 (transpose and subtract from 1) for each strategy:
          P1
          a   b   c
          R   P   DS
         -----------
P2 d R |  δ   0  0.5
   e P | 0.5  δ   0
   f S |  0   1   0
   g DS|  0  0.5  δ
Now a,b,c are the probabilities for each P1 choice and d,e,f,g the probabilities for each P2 choice. Together, the expected reward for P2 is:
(δa+0.5c)d + (0.5a+δb)e + bf + (0.5b+δc)g.

P1's strategy: P1's goal is to minimax this value; that is, find:
min_(a,b,c) max_(d,e,f,g) (δa+0.5c)d + (0.5a+δb)e + bf + (0.5b+δc)g.
Note: I use "min_(a,b,c)" to mean: minimize over all reals a,b,c from 0 to 1 satisfying a+b+c=1. Similarly with "max_(d,e,f,g)".

P2 maxes this quantity by setting equal to 1 the variable d,e,f,g whose coefficient is the largest. So we want to minimize the maximum of δa+0.5c, 0.5a+δb, b, 0.5b+δc. To make things short, let's exclude the coefficient of g (0.5b+δc) and minimize max{δa+0.5c, 0.5a+δb, b}. You can check that the minimum occurs when δa+0.5c = 0.5a+δb = b with a+b+c=1. Solving gives a minimum of b=1/(4δ^2-6δ+5).

However, the minimax is actually P2's probability of winning the match. So the minimum should also be b=δ. Therefore δ=1/(4δ^2-6δ+5) → 4δ^3-6δ^2+5δ-1=0. There is only one real solution, and that is (in WolframAlpha's plaintext)
δ = 1/2 + 1/2 ((1/2 (sqrt(177) - 9))^(1/3)/3^(2/3) - 2 (2/(3 (sqrt(177) - 9)))^(1/3))
or about 0.2733. This gives a=2δ-2δ^2 (~0.3972), b=δ (~0.2733), c=2δ-4δ^2+4δ^3 (~0.3295).

Note I excluded g earlier, so we have to account for the extra 0.5b+δc. But from above, it is clear that the minimax is at least δ. Plugging in a,b,c from above gives 0.5b+δc = δ(0.5+c) < δ, and so it follows that the minimax is indeed δ.

P2's strategy: We can rewrite the expected reward for P2 as:
(δd+0.5e)a + (δe+f+0.5g)b + (0.5d+δg)c.
P2's goal is to maximin this value; that is, find:
max_(d,e,f,g) min_(a,b,c) (δd+0.5e)a + (δe+f+0.5g)b + (0.5d+δg)c.
But since the maximin is equal to the minimax, we know that there is some value of d,e,f,g that satisfies δd+0.5e = δe+f+0.5g = 0.5d+δg = δ. You can check that d=2δ (~0.5466), e=2δ-4δ^2 (~0.2478), f=δ-2δ^2+4δ^3 (~0.2056), g=0 does the trick.

0-0: Clearly from symmetry, both players have a 1/2 chance of winning the match. We still need to find the strategies. Call the players P1 and P2 again and consider the expected reward for P1:
          P2
          e   f   g   h
          R   P   S   DS
         ---------------
P1 a R | 0.5  δ   σ   σ
   b P |  σ  0.5  0   δ
   c S |  δ   1  0.5  δ
   d DS|  δ   σ   σ  0.5
For P1's strategy, we want to find
max_(a,b,c,d) min_(e,f,g,h) (0.5a+σb+δc+δd)e + (δa+0.5b+c+σd)f + (σa+0.5c+σd)g + (σa+δb+δc+0.5d)h.
But since we know that the maximin has to be 1/2, just check that a=1/(3-4δ) (~0.5244), b=c=(1-2δ)/(3-4δ) (~0.2378), d=0 satisfies this. By symmetry, P2 has the same strategy.
----------
I verified using http://math.ucla.edu/~tom/gamesolve (a solver for these types of problems) that the values I have here are correct. Note that the strategies above are optimal but it may be possible that they aren't unique, though my guess is that they are unique.
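For anyone who wants to reproduce the 1-0 numbers without the algebra, the same minimax can also be checked numerically as a small linear program. A sketch using scipy.optimize.linprog; δ is plugged in as the already-computed value ~0.2733, so this is a consistency check of the fixed point in δ, not an independent derivation:

from scipy.optimize import linprog

d = 0.2733   # approximate real root of 4x^3 - 6x^2 + 5x - 1 = 0 (from above)

# P2's expected-reward table at 1-0: rows = R, P, S, DS; columns = P1's R, P, DS
R = [[d,   0,   0.5],
     [0.5, d,   0.0],
     [0.0, 1,   0.0],
     [0.0, 0.5, d  ]]

# Variables: y1..y4 (P2's mixed strategy) and v (game value).
# Maximize v subject to sum_i y_i*R[i][j] >= v for every P1 pure reply j, sum(y) = 1.
c = [0, 0, 0, 0, -1]                                         # linprog minimizes, so minimize -v
A_ub = [[-R[0][j], -R[1][j], -R[2][j], -R[3][j], 1] for j in range(3)]
b_ub = [0, 0, 0]
A_eq = [[1, 1, 1, 1, 0]]
b_eq = [1]
bounds = [(0, 1)] * 4 + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:4])   # close to [0.5466, 0.2478, 0.2056, 0] if the optimum is unique
print(res.x[4])    # the game value comes back as ~delta, consistent with the write-up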
Bobo the King wrote:
These values are (almost) stable in my Excel spreadsheet, although it appears that they creep away from the saddle point, even with a very low step size. Either my gradient search algorithm is inherently unstable or I've entered it incorrectly somehow.
I'm not sure how you are trying to determine whether they are stable. In any case, you can't use derivatives because minimax is a linear optimization problem.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
The infinite series sqrt(x+sqrt(x+sqrt(x+...))) converges to (1+sqrt(4x+1))/2. What's curious about that fact is that the expression has real values even for negative values of x, as long as x is >= -0.25. (For example at x=-0.2 the expression has the value of about 0.7236.) This feels highly counter-intuitive. If you tried to approximate the value of the series by calculating finite sub-series, using the algorithm
result=0; while(true) result=sqrt(x+result);
you would end up with the square root of a negative value, which is a no-no. (The above loop works for non-negative values of x, but obviously a negative value will cause an error.) In this sense the series behaves in a very unintuitive way. It's like the "x+sqrt(...)" part kind of expects that "sqrt(...)" part to be positive (and larger than abs(x).) But since x is always negative in every single sub-expression, how can it ever get positive? I can't wrap my head around it.
Player (36)
Joined: 9/11/2004
Posts: 2631
y = sqrt(x+sqrt(x+sqrt(x+...)))
y = sqrt(x + y)
y^2 = x + y
y^2 - y - x = 0
y = (1 + sqrt(4x + 1)) / 2
Going from step 1 to step 2 assumes that the function converges. You have to be careful with this sort of infinite regression. They aren't always well behaved. In this case, because we know that sqrt(a + b) is less than sqrt(a) + sqrt(b) (for a, b > 0), we know that sqrt(y) + sqrt(x) > sqrt(y + x). But if sqrt(x) is out of domain then this inequality simply doesn't hold. And because we know that sqrt(y + x) = y it must have a value if the series is convergent. So the series must not be convergent over this domain. (Ignore this shit. Read my next post.)
You get a similar oxymoron if you substitute x = 0: 1 = sqrt(0 + sqrt(0 + ...)).
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Player (36)
Joined: 9/11/2004
Posts: 2631
@Fractal Fusion: Thanks for doing this write-up. I appreciate it. I've never seen a non-trivial game theory problem attacked before.
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
OmnipotentEntity wrote:
You get a similar oxymoron if you substitute x = 0: 1 = sqrt(0 + sqrt(0 + ...)).
Does this mean that you can only say
sqrt(x+sqrt(x+sqrt(x+sqrt(...)))) = (1 + sqrt(4x + 1)) / 2, x > 0
In other words, the equality does not hold for x <= 0? If yes, at which point of the proof does one have to assert that x>0?
Player (36)
Joined: 9/11/2004
Posts: 2631
Warp wrote:
OmnipotentEntity wrote:
You get a similar oxymoron if you substitute x = 0: 1 = sqrt(0 + sqrt(0 + ...)).
Does this mean that you can only say
sqrt(x+sqrt(x+sqrt(x+sqrt(...)))) = (1 + sqrt(4x + 1)) / 2, x > 0
In other words, the equality does not hold for x <= 0? If yes, at which point of the proof does one have to assert that x>0?
I already went over this.
y = sqrt(x+sqrt(x+sqrt(x+...)))
y = sqrt(x + y)
Going from step 1 to 2 you have to assert that the expression converges, and then I go to show (in a loose, nonrigorous way actually, I take that back, my argument makes no fucking sense at all...) that the expression cannot converge for x < 0. This isn't a rigorous argument, because I didn't show that the expression actually does converge anywhere, but assuming it does then its domain is limited to [0, inf), and although I didn't actually show that it doesn't converge for x=0, I'm thinking it doesn't, so its domain of convergence is actually (0, inf). If you want to make it rigorous, I'll leave that to you.

EDIT: Here's a second attempt at making the "x < 0 cannot converge" argument stick. I'm going to approach this from a sign standpoint. If you collect a bunch of i's in the sqrts, then they all have to cancel when forming the final value, which means arg(z) = n pi.
If x > 0 then sqrt(-x) = i sqrt(x).
sqrt(-x + sqrt(-x)) = i sqrt(x + i sqrt(x))
arg(i sqrt(x)) = pi/2
arg(x + i sqrt(x)) = arctan(sqrt(x)/x) = arctan(1/sqrt(x))
arg(i sqrt(x + i sqrt(x))) = pi/2 + arctan(1/sqrt(x))/2
arg(x + i sqrt(x + i sqrt(x))) = arctan( (pi + arctan(1/sqrt(x))) / 2x)
From here we can create a general expression:
arg(sqrt(-x + sqrt(-x ...))) = w = arctan( (pi + arctan(w)) / 2x)
Note that we are assuming that the value converges here. If it does not converge, then the original expression does not converge. So this is a safe assumption.
tan(w) = (pi + arctan(w)) / 2x
From this we can get x != 0. And because, in order for y to be real, w must be n pi, and tan(n pi) = 0, we can replace:
0 = (pi + arctan(n pi)) / 2x
pi + arctan(n pi) = 0
arctan has a range of (-pi/2, pi/2), so pi + arctan(anything) != 0. Therefore, this equation has no solutions that give y with Im{y} = 0. Therefore, over the domain (-inf, 0], sqrt(x + sqrt(x ...)) cannot have a purely real value.
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Masterjun
He/Him
Site Developer, Expert player (2047)
Joined: 10/12/2010
Posts: 1185
Location: Germany
FractalFusion wrote:
Note that the strategies above are optimal but it may be possible that they aren't unique, though my guess is that they are unique.
A quick question: Does optimal strategy in this context mean that there are no other strategies against it with an expected win rate of over 50%? Or am I misinterpreting this?
Warning: Might glitch to credits I will finish this ACE soon as possible (or will I?)
Player (36)
Joined: 9/11/2004
Posts: 2631
Masterjun wrote:
FractalFusion wrote:
Note that the strategies above are optimal but it may be possible that they aren't unique, though my guess is that they are unique.
A quick question: Does optimal strategy in this context mean that there are no other strategies against it with an expected win rate of over 50%? Or am I misinterpreting this?
A good way to think about an optimal strategy is: if I'm playing an optimal strategy against you, then my strategy is set up such that every option you choose has the same (or a worse) expected payoff. It doesn't mean that your win rate is any specific value, just that no choice you pick does better than that same, minimized win probability.
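A concrete illustration using the 1-0 numbers from FractalFusion's write-up (a Python sketch; δ ≈ 0.2733 and the frequencies are taken from that post): against the leader's optimal mix, the trailing player's rock, paper and scissors all pay exactly δ, while double scissors pays strictly less, which is why it is never used.

d = 0.2733                                          # real root of 4x^3 - 6x^2 + 5x - 1 = 0
a, b, c = 2*d - 2*d**2, d, 2*d - 4*d**2 + 4*d**3    # leader's mix: rock, paper, double scissors

# trailing player's expected match-win chance for each pure reply
print("rock           :", d*a + 0.5*c)              # = delta
print("paper          :", 0.5*a + d*b)              # = delta
print("scissors       :", b)                        # = delta
print("double scissors:", 0.5*b + d*c)              # strictly below delta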
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Editor, Expert player (2080)
Joined: 6/15/2005
Posts: 3284
Warp wrote:
OmnipotentEntity wrote:
You get a similar oxymoron if you substitute x = 0: 1 = sqrt(0 + sqrt(0 + ...)).
Does this mean that you can only say
sqrt(x+sqrt(x+sqrt(x+sqrt(...)))) = (1 + sqrt(4x + 1)) / 2, x > 0
In other words, the equality does not hold for x <= 0? If yes, at which point of the proof does one have to assert that x>0?
What does sqrt(x+sqrt(x+sqrt(x+sqrt(...)))) mean exactly? If you mean a_0=C, a_i=sqrt(x+a_(i-1)), limit as i goes to infinity, then there are values of C for which a_i converges to (1 + sqrt(4x + 1)) / 2 for any x>=-1/4.
Masterjun wrote:
FractalFusion wrote:
Note that the strategies above are optimal but it may be possible that they aren't unique, though my guess is that they are unique.
A quick question: Does optimal strategy in this context mean that there are no other strategies against it with an expected win rate of over 50%? Or am I misinterpreting this?
An optimal strategy is one that ensures the highest reward assuming that the opponent knows exactly what your strategy is.
Player (36)
Joined: 9/11/2004
Posts: 2631
FractalFusion wrote:
What does sqrt(x+sqrt(x+sqrt(x+sqrt(...)))) mean exactly? If you mean a_0=C, a_i=sqrt(x+a_(i-1)), limit as i goes to infinity, then there are values of C for which a_i converges to (1 + sqrt(4x + 1)) / 2 for any x>=-1/4.
That's fair. And I suppose in the case of x > 0 the particular value of C = 0 will converge. I wonder what the convergence criteria are for general x. If x > 0 then even if C is very large and negative, the repeated square rooting will drag the arg back toward 0. I suppose this means that for x > 0, convergence places no requirements on C. If x = 0 then even if C is very large and negative, you'll get a pull to one. However, if C = 0 then you won't converge. So for x = 0, C can be any value except 0. For x negative, it's non-trivial and I need to finish up some work. So I'll either revisit it or someone else will.
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Player (36)
Joined: 9/11/2004
Posts: 2631
Playing around with it, for x negative, it seems like any C will do, if you allow complex values in intermediate steps. In fact, if you allow complex values, any C will do for any value of x and you'll get back your (1 + sqrt(4x + 1))/2. Except of course for the special case where x = 0 and C = 0.
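Reproducing that experiment is straightforward; a Python sketch using cmath, where the starting value C = 5 - 3j and the sample x = -0.2 are arbitrary choices:

import cmath

def nested_sqrt(x, C, steps=200):
    a = complex(C)
    for _ in range(steps):
        a = cmath.sqrt(x + a)          # allow complex intermediate values
    return a

x = -0.2
print(nested_sqrt(x, C=5 - 3j))        # settles at ~0.7236 with vanishing imaginary part
print((1 + cmath.sqrt(4*x + 1)) / 2)   # (1 + sqrt(4x + 1)) / 2 for comparison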
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Player (80)
Joined: 8/5/2007
Posts: 865
Another week, another two Riddler challenges. As with last week, I found the Classic puzzle to be more straightforward than the Express one. For Classic, I obtained an optimal strategy that was better than 1 in 10. (As usual, I'm keeping this post spoiler-free.) I found eleven strategies that were distinct even under symmetry considerations. Finding those strategies was rather tedious, but testing them took a few minutes in Excel. I would like to know about the extra credit, however. I scaled up my strategy and found a probability of about 1 in 10^29. I suspect this is not optimal, but my reasoning doesn't amount to anything more than a gut feeling.

For Express, however, there are too many distinct ways of arranging these fences. I tested just three symmetrical configurations with three fences meeting at a central point. I also tested another simple strategy with two fences. Without going into too much detail, the best strategy I found used a hair under 1.633 miles of fencing. Can anyone beat this? Testing all possible fencing configurations without the assumption of symmetry would require way too much calculus for my brain to handle at the moment. Also, for practical considerations, the difference between the best and worst strategies I tested (disregarding two parallel, vertical lines, 2 miles of fencing) was about 573 feet of fencing. The difference between the two best strategies I tested was just 9 feet. Even if I'm incorrect, assuming that the best strategy is not much better than what I've already found, I doubt it would be worth the price of the additional fence to go through the trouble of calculating it.
Editor, Expert player (2080)
Joined: 6/15/2005
Posts: 3284
OmnipotentEntity wrote:
Playing around with it, for x negative, it seems like any C will do, if you allow complex values in intermediate steps. In fact, if you allow complex values, any C will do for any value of x and you'll get back your (1 + sqrt(4x + 1))/2. Except of course for the special case where x = 0 and C = 0.
It's best to think of this as a fixed-point iteration problem on the function f(y)=sqrt(y+X), that is, as I described above already: a_0=C, a_i=sqrt(a_(i-1)+X), limit as i goes to infinity. Also, to avoid confusion, I will only deal with real numbers.

Graphically, on the yz-plane we draw both the line z=y and the curve z=sqrt(y+X). Fixed-point iteration works as thus:
@ Start from some point (C,C) on the line z=y.
@ If the curve z=sqrt(y+X) lies above this point, go up to the curve, then right to the line z=y to get the next point.
@ If the curve z=sqrt(y+X) lies below this point, go down to the curve, then left to the line z=y to get the next point.
@ Repeat this over and over and see where it converges (if it does). If it does converge, it does so at a fixed point (that is, a point where f(y)=y).

Case 1: X>0
There is one fixed point at y=(1+sqrt(4X+1))/2. From the graph, every value of C>(1+sqrt(4X+1))/2 decreases until it converges to (1+sqrt(4X+1))/2 and every value of -X<=C<(1+sqrt(4X+1))/2 increases until it converges to (1+sqrt(4X+1))/2.

Case 2: -1/4 <= X <= 0
There are two fixed points, one at y=(1+sqrt(4X+1))/2 and one at y=(1-sqrt(4X+1))/2 (unless X=-1/4, in which case they are the same y=1/2). From the graph, every value of C>(1+sqrt(4X+1))/2 decreases until it converges to (1+sqrt(4X+1))/2 and every value of (1-sqrt(4X+1))/2<C<(1+sqrt(4X+1))/2 increases until it converges to (1+sqrt(4X+1))/2. If C=(1-sqrt(4X+1))/2, then it just remains (1-sqrt(4X+1))/2. Finally, if C<(1-sqrt(4X+1))/2, then it decreases until it is less than -X, in which case it is no longer defined.

Case 3: X<-1/4
There are no fixed points, and every value of C decreases until it is less than -X, in which case it is no longer defined.
----
By the way, for the case X=0, C=0 does converge, it just converges to 0 rather than 1. It is positive C that converges to 1, and negative C isn't even defined when you take the square root.
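To see the three cases numerically, one can run the real-valued iteration directly and watch where it stops being defined; a Python sketch where the sample values of X and C are arbitrary:

import math

def iterate(X, C, steps=100):
    a = C
    for _ in range(steps):
        if a + X < 0:
            return None                 # square root of a negative number: no longer defined
        a = math.sqrt(a + X)
    return a

print(iterate( 1.0, 0.0))    # Case 1: X>0 -> ~1.6180 = (1+sqrt(5))/2
print(iterate(-0.2, 0.5))    # Case 2, C above the lower fixed point -> ~0.7236
print(iterate(-0.2, 0.25))   # Case 2, C below (1-sqrt(4X+1))/2 ~0.2764 -> None (falls below -X)
print(iterate(-0.5, 2.0))    # Case 3: X<-1/4 -> None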
Bobo the King wrote:
For Classic, I obtained an optimal strategy that was better than 1 in 10. (As usual, I'm keeping this post spoiler-free.) I found eleven strategies that were distinct even under symmetry considerations. Finding those strategies was rather tedious, but testing them took a few minutes in Excel. I would like to know about the extra credit, however. I scaled up my strategy and found a probability of about 1 in 10^29. I suspect this is not optimal, but my reasoning doesn't amount to anything more than a gut feeling.
For 2n people I got 1 in C(2n,n), which for n=50 matches your guess for 100 people. Since I cannot come up with a reason why anything else is clearly better, I'll just assume that this method is optimal.
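For reference, the exact count behind that estimate is quick to compute (Python 3.8+ for math.comb):

import math
print(math.comb(100, 50))    # ~1.0089e29, i.e. about "1 in 10^29"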
Bobo the King wrote:
For Express, however, there are too many distinct ways of arranging these fences. I tested just three symmetrical configurations with three fences meeting at a central point. I also tested another simple strategy with two fences. Without going into too much detail, the best strategy I found used a hair under 1.633 miles of fencing. Can anyone beat this? Testing all possible fencing configurations without the assumption of symmetry would require way too much calculus for my brain to handle at the moment. Also, for practical considerations, the difference between the best and worst strategies I tested (disregarding two parallel, vertical lines, 2 miles of fencing) was about 573 feet of fencing. The difference between the two best strategies I tested was just 9 feet. Even if I'm incorrect, assuming that the best strategy is not much better than what I've already found, I doubt it would be worth the price of the additional fence to go through the trouble of calculating it.
I don't know the strategy you have yet, but the fences don't have to be straight lines from the common joint to the edges, right? In that case, I think the optimal curves to the edges would be circular arcs (if not a perpendicular line to the edge). This problem reminds me of some variants of the Isoperimetric Problem. I might try to work it out later.