Posts for p4wn3r
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
FractalFusion wrote:
p4wn3r wrote:
From what I could see online, there are some algorithms to enumerate solutions for some cases. I would have to check if some can be simplified to provide a solution that always works.
I'm interested in knowing what kind of algorithms you are referring to. I constructed my examples by finding an appropriate set of integer points, taking advantage of the fact that these solids are highly symmetric.
I was referring to this paper: https://arxiv.org/abs/0912.1062 The author's bibliography seems to have a lot of content regarding this problem. Sorry for not answering earlier; things have been very busy since I moved to Munich. In any case, I'll drop the invitation here: if anyone wants to have a beer at Marienplatz, just PM!
Very interesting problem. From what I could see online, there are some algorithms to enumerate solutions for some cases. I would have to check if some can be simplified to provide a solution that always works. Here's a partial result I could already obtain: this is impossible for the dodecahedron. Why? Because it's impossible to construct a regular pentagon using a 3d lattice. Proof follows. Suppose you have three consecutive points of a regular pentagon A, B, C, all three with integer coordinates in 3d space. Then we evaluate the inner product (A-B)*(C-B). This should be equal to |A-B|^2*cos(108), because we must have |A-B|=|C-B| and the angle between two edges of a regular pentagon is necessarily 108 degrees. That implies cos(108)=(A-B)*(C-B)/|A-B|^2. However, notice that the inner product (A-B)*(C-B) is an integer, because it's computed by products, sums and subtractions of integers. Similarly, |A-B|^2 = (A-B)*(A-B) is also an integer, by the same argument. That implies that cos(108) is a rational number, which is absurd, since cos(108)=(1-sqrt(5))/4 is in fact irrational. Since we reached a contradiction, our initial hypothesis that the pentagon exists must be false.
Very impressive! My solution uses the same recasting of Chaikin's algorithm in terms of the segment midpoints, but differs a bit at the end. Quadratic Bézier curves also go by the less fancy name of parabolas. Proving that each piece of the Chaikin curve is in fact a segment of a parabola follows immediately if we prove that a parabola has the following property: for any two points A and B on a parabola, trace the tangents at these points, which meet at point P. Denote by M and N the midpoints of AP and BP, respectively. Then, the midpoint C of MN lies on the parabola.

There's probably an elegant solution of this using synthetic geometry, but I didn't bother to try it. If someone here wants to, please do and post it! I did it the boring way using analytic geometry. It suffices to prove this for the parabola y = x^2, because any other parabola can be obtained by translations, rotations and scalings of this one. Let A = (a,a^2) and B = (b,b^2). The equation of the tangent at A is y = a^2 + 2a(x-a), and at B it is y = b^2 + 2b(x-b). We find P by setting a^2 + 2a(x-a) = b^2 + 2b(x-b) => 2(a-b)x = a^2 - b^2 => x = (a+b)/2 => y = a^2 + a(-a+b) = ab => P = ((a+b)/2, ab)

Now, the point C is given by C = (A + 2P + B)/4 = (2a + 2b, a^2 + 2ab + b^2)/4 = (a/2+b/2, (a/2+b/2)^2). Notice that the y coordinate is indeed the square of the x coordinate, so the claim is true.

EDIT: Actually, now I see that this is not enough. We also need to prove that MN is tangent to the parabola. That's easy. The vector taking M to N is N - M = (B + P)/2 - (A+P)/2 = (B-A)/2 = (b-a, b^2-a^2)/2. Its slope is (b^2-a^2)/(b-a) = a+b, which is exactly the slope of the tangent at point C.
Here's another one in the same vein as "guess the curve", but tougher. There's an obscure curve generation algorithm called Chaikin curve subdivision. The way it works is: you pick a set of N+1 ordered control points, and from these you get N line segments. For an individual line segment AB, find the points C and D such that AC = AB/4 and DB = AB/4. Do this for all segments. In the end, ditch all points A and B and take the points C and D as the next set of control points. Repeat this a huge number of times, and you'll have a curve. However, it's actually a well-known curve which can be generated by other methods, and that's why you don't hear a lot about Chaikin's algorithm. In any case, what is the curve?
Let us change the coordinates a bit. Suppose the amount of work you need to do is y, and the time you need to deliver it is at x=0. You start with a given amount of work y0 at a point in time x0>0, and it decreases as you work on it. With these coordinates, the differential equation for the normal way is: dy/dx = y/x. You can see by inspection that the solution is of the form y = cx, for a constant c, which you determine with boundary conditions.

Now, for the Riddler method we instead have: dy/dx = ky/x, for k=2. This is still a homogeneous ODE and can be solved in the standard way. Let v(x)=y(x)/x and apply the product rule to find: dy/dx = d(v*x)/dx = x*dv/dx + v = kv => dv/((k-1)v) = dx/x. This is, of course, only valid for k != 1. If k=1, we simply have dv/dx = 0, which implies that v is a constant. Anyway, integrate both sides: 1/(k-1)*ln(v) = ln(cx) => v = y/x = C x^(k-1) => y = C x^k. So, the continuum generalization of this is just a power law.
Some time ago, I posted here that General Relativity is a Machian theory, in the sense that matter causes spacetime to curve, and spacetime causes matter to move. At the time there were some objections to this, mostly stating that I don't know some concepts of differential geometry (disclaimer: I really don't), and a more interesting one relating to the fact that the existence of nontrivial vacuum solutions to the field equations (Schwarzschild, Kerr, gravitational waves, etc.) rules out this idea, because different matter configurations are generating different spacetimes. I would like to share a paper which confirms what I'm saying in a completely rigorous way. This paper is a mathematical proof (notice that it was published in a math journal, not a physics one) that General Relativity makes complete sense as a constrained initial value problem, in the sense that field configurations at a certain time completely determine the fields in the future. I'll summarize the results the best I can.

One good place to start is the ADM formulation of general relativity. Essentially, the equations are decomposed into a time part and a space part. The time part says how the fields should evolve, and the space part constrains how the fields should behave. For this to make sense, the time evolution should be consistent with the constraint (you cannot impose a constraint on the fields at some point in time, and then later they evolve and no longer respect it); in GR this is guaranteed by the Bianchi identities. Anyway, in the ADM formulation, to simulate the system you need to provide four arbitrary functions, the so-called lapse function and shift vector. These functions tell spacetime how to deform. Different choices will give different results, and it's not clear what GR is really predicting. Einstein was well aware of this, and his thinking about this issue goes by the name of the hole argument.
In any case, Einstein's solution is that the only meaningful thing about the coordinates is whether the points are different or not. I think this is a good solution, but at our current knowledge of physics, completely impractical. In order to make any physical prediction, all of physics would have to be cast in a generally covariant form, and we are still far from this. Einstein tried a similar idea, with the so-called unified field theory, but didn't have much success. Given this, how does the author of the 1952 paper get around this? Simple: the idea is to not allow the lapse and shift of the ADM formalism to have a general form. The group of coordinate transformations has a well understood structure as a Lie group, and so has a dimension. That means that if we can constrain the lapse and shift with a given number of equations so as to cancel the dimension of this Lie group, and prove that the initial value problem is well-posed by any means, we are done. We necessarily have a theory with meaningful physical predictions. Now, there are some technical complications when you do this; the route followed in the paper is to impose the conditions we know as the harmonic gauge. What this does is break the general covariance of the Einstein equations: they are now only Lorentz covariant (invariant with respect to boosts and rotations, but not general coordinate transformations). This introduces some technical difficulties, because now it's inconsistent with the ADM formulation. However, they can be overcome; you can still prove that it's a constrained initial value problem, where the constraint is consistent with the time evolution, and you can prove that a solution exists using standard PDE techniques. The Lorentz covariance of the resulting equations is actually a good thing, because the rest of physics is cast in a Lorentz (not generally) covariant way.
This result is important because, historically, relativists had a hard time settling the issue of whether gravitational waves were real. Although you can find wave solutions in linearized GR, it's not clear why they could not simply be a changing coordinate chart as time passes. The theorem proves that General Relativity in the harmonic gauge is a well defined initial value problem, even when all nonlinearities are taken into account, so the wave solutions describe real physical waves. By the way, none of this uses any differential geometry. All of it can be derived using knowledge of partial differential equations and Lie groups, and everything I said can be done, with some modification, for Maxwell's equations, Yang-Mills equations, and so on. In this aspect, General Relativity is really not different from these other theories. Besides that, numerical GR calculations use this formalism and correctly predicted the form of gravitational waves coming from a black hole merger. I still haven't heard any reasonable explanation from physicists who claim GR is not Machian about how this prediction makes sense for them. As to the other objection, that there are several vacuum solutions to the Einstein equations, it's not relevant for the following reason. An initial value problem is given by the differential equations plus the boundary conditions. It's perfectly fine to study solutions of a differential equation without worrying about boundary conditions, but often what happens is that you end up with topological invariants. What the enormous number of vacuum solutions means is that there are several nontrivial topologies for the spacetime manifold. That does not mean GR is not well posed as an initial value problem; all it means is that if we take two topologically distinct solutions, the time evolution of one of them can never take us to the other.
That also happens with lots of partial differential equations, and is not exclusive to general relativity.
OmnipotentEntity wrote:
It's not very satisfying though, because it took a guess and check method. If you have a way of arriving at this from first principles please post it!
Allow me to introduce complicated math for absolutely no reason. First, let me rewrite your equation as: f(n) = 1 + (f(0)+f(1)+...+f(n-1))/n for n >= 1, where f(n) denotes the expected number of moves starting from n+1, so we have f(0) = 0.

The complicated math I want to put in is the generating function. It turns out that, if we have a sequence f(n), starting at n=0, we can define a generating function F(x) = sum_n f(n)x^n, where n goes from 0 to infinity. If we ignore stuff like convergence, we can recover f(n) as the coefficient of x^n in the Taylor expansion of F(x), that is, the n-th derivative of F at 0 divided by n!.

Now, if we have the generating function for a sequence, what's the generating function for the sequence + 1? Easy, we must compute sum_n (f(n)+1)*x^n = sum_n f(n)x^n + sum_n x^n = F(x) + 1/(1-x).

Also, if we have the generating function for a sequence, what's the generating function for the partial sums of the sequence? Also easy. Notice that if we have F(x) = f(0) + f(1)x + f(2)x² + ..., we can write F(x) + xF(x) + x²F(x) + ... = f(0) + (f(0)+f(1))x + (f(0)+f(1)+f(2))x² + ... = F(x)/(1-x), so to get the generating function of the partial sums, we simply divide by 1-x.

We can also have a look at the integral of the generating function, and by integral I mean the primitive function G(x) such that G'(x) = F(x) and G(0) = 0. Notice that: G(x) = f(0)x + f(1)/2 * x² + f(2)/3 * x³ + ... = sum_n f(n)/(n+1) * x^(n+1)

Now, what happens if we have a function F(x), divide it by 1-x and integrate? Here it is: int(F(x)/(1-x)) = f(0)x + (f(0)+f(1))/2 * x^2 + (f(0)+f(1)+f(2))/3 * x^3 + ...

Notice how cool this is: if we have a generating function F(x), then the generating function of its (shifted) average is simply int(F(x)/(1-x)). Using this, if we interpret the recursion as an identity of generating functions (multiply each side by x^n and sum everything from n=1 to infinity, remembering that the recursion only holds for n >= 1 and that f(0) = 0), we obtain: F(x) = x/(1-x) + int(F(x)/(1-x)), and of course, F(0) = f(0) = 0. (The term is x/(1-x) rather than 1/(1-x) because the "+1" only contributes from n=1 onwards.) So, we essentially reduce the entire thing to an initial value problem.

If we can solve for F(x), we can extract the Taylor series and obtain f(n). Anyway, let us convert this to an ODE. First, isolate the integration: int(F(x)/(1-x)) = F(x) - x/(1-x), now differentiate: F(x)/(1-x) = F'(x) - 1/(1-x)^2 => F'(x) - F(x)/(1-x) = 1/(1-x)^2

Notice now that we can solve this using an integrating factor. You can find it using the general procedure, but it simplifies here. I'll just say you should multiply by (x-1): d/dx [ (x-1)*F(x) ] = 1/(x-1) => (x-1)*F(x) = ln(1-x) + c. Since F(0)=0, c = 0, and then: F(x) = -ln(1-x)/(1-x).

Notice that this is the generating function of the harmonic numbers. To see this, notice that the Taylor expansion of -ln(1-x) has coefficients of the form 1/n: -ln(1-x) = x + (1/2)x^2 + (1/3)x^3 + ... By the arguments above, if we divide it by (1-x), we get the partial sums of the numbers of the form 1/n, which are the harmonic numbers: -ln(1-x)/(1-x) = x + 3/2 x^2 + 11/6 x^3 + ...

Notice, though, that guessing and checking is a perfectly valid mathematical argument, and I would only suggest you use these "brute force" methods like generating functions if you have a lot of time to solve the problem. Although they are more general, they are longer and more error prone.
Very nice solutions. I did it applying the Chinese remainder theorem. The nice thing about this theorem is that it gives the precise conditions where we can lift a system of congruences to a single congruence in another modulus. As mentioned before, a number of the form n = ABCABC can be written n = k*1001 = k*7*11*13. The condition of divisibility by the divisors can be stated as: n = 1 mod 6, n = 1 mod 10, n = 1 mod 12. This is not in the form where we can apply the Chinese remainder theorem, because we need all moduli to be pairwise coprime. However, we can work around this. See that n = 1 mod 12 implies n = 1 mod 6, so we can drop the first congruence. Also, n = 1 mod 10 is equivalent to n = 1 mod 5 together with n = 1 mod 2, and the latter already follows from n = 1 mod 12, so we are left with: n = 1 mod 5, n = 1 mod 12. The Chinese remainder theorem tells us that this is equivalent to a single congruence mod 60. You can use the formula there or just notice that n = 1 mod 60 implies both. Now, if we substitute n = k*1001 and notice that 1001 = 41 mod 60, we must solve k*41 = 1 mod 60. That is, we must find the multiplicative inverse of 41 modulo 60. Again, you can use Euclid's algorithm to do this, but I simply tried all of 11, 21, 31, 41 and 51, since the multiplicative inverse must end in 1. Doing this, you'll see that 41*41 = 1681 = 28*60 + 1 = 1 mod 60. So, 41 is its own multiplicative inverse. So, we need to try all numbers k = 41 mod 60 in the range, which are 41, 101, 161, 221, 281, 341, 401, 461, 521, 581, 641, 701, 761, 821, 881, 941. To check them, I did basically the same as FractalFusion; 41041 and 101101 are the only solutions.
This week's Riddler classic is a pretty standard number theory problem. Find a six digit Carmichael number which can be written as ABCABC in base 10. As the Riddler notes, try to solve it without writing brute force code; you learn a lot of number theory doing this.

In other news, I made a lot of progress understanding the math behind the Apollonian gasket since my last post! I improved my code a lot, managed to automate a lot of things to let me try ideas faster, and I managed to figure out the properties of the transformations that generate the attractor. One very nice result I got is that there's a version of the gasket which is unbounded, have a look: See how the color gets weaker the farther we go; that's an intrinsic limitation of the rendering software, but I think it's very cool! I should write something more elaborate later, but here are the formal properties behind the attractor. You need two Möbius transformations S and T. We take S a diagonal loxodromic transformation, whose multiplier is an ugly complex number you get by solving a quadratic equation with complex coefficients. For T, we take an elliptic transformation that rotates by 120 degrees, so it satisfies T^3 = I. It also must satisfy another property. An elliptic transformation has two fixed points, and we must choose these fixed points as complex numbers a and b such that their ratio a/b equals an even uglier constant, which is found by multiplying lots of square roots. Once you have these, you construct the two functions in the attractor (notice that my previous approach used four!): F = ST and G = TS^-1. One interesting thing about these generators is that they satisfy (FG)^3 = (GF)^3 = I. You can try proving this if you are bored. Anyway, if you pick a random Möbius transformation P and conjugate F and G by it, you get another gasket.
While it's pretty elegant to generate the transformations like this, it has a drawback: it's pretty much impossible to know in advance what the gasket will look like, and you have to input two very complicated constants that, while they do have closed forms, are much easier to evaluate numerically. Recently, I've been coding a hybrid of these approaches, where I start with a configuration where I know what the gasket will look like, and then modify these constants a little bit. Obviously I don't obtain the Apollonian gasket, but another fractal that still looks cool. The problem is: this is taking a lot of work, which is why I haven't written anything in detail yet, but I should be able to finish soon. Let me know if you guys find this interesting!
Here's the solution. It's based on the concept of a zero-knowledge proof. The idea is: by knowing the secret s, there are questions which only Alice can answer reliably. So, by answering a set of questions, we can confirm that Alice knows the secret without her revealing it. In our case, suppose Alice generates a random element t of the group G. She then computes c = t s a s^-1 t^-1, and publishes c. Now, Alice can answer 2 questions: (1) Which element g in G is such that g a g^-1 = c? (2) Which element g in G is such that g b g^-1 = c? Only Alice knows the answers to questions (1) and (2). They are ts and t, respectively. If someone who's not Alice can compute the answer, then the conjugacy problem is not hard, violating our assumptions. Also, the answer to these questions is just something randomly generated, so Alice is not disclosing the secret. However, we are not done yet, because there are some weaknesses in simple protocols. Suppose we design a protocol where Alice publishes the element c, and Bob asks (1). Now, someone can impersonate Alice, because they can generate a random element u' and publish c' = u' a u'^-1, and answer (1) without knowing s. Similarly, someone can also impersonate Alice if Bob asks (2). They can generate an element u' and calculate c' = u' b u'^-1 and again answer (2) without knowing s. Can we solve this by requiring Bob to ask (1) and (2) simultaneously? Well, that does work to verify Alice, but unfortunately leaks the secret s. That's because s = t^-1 * (ts), so Bob can retrieve the secret by multiplying the inverse of the answer to question (2) with the answer to question (1). The trick is, then, to require Bob to ask (1) or (2) at random. That way, if Alice does know the secret, she can answer either question, while someone impersonating her only has a 50% chance of getting it right. By repeating this procedure many times, we can make the probability of an impostor passing the test extremely small, and our protocol works.
Here's a problem I came up with today. Suppose you have a non-abelian group G. For this kind of problem, it's much more difficult to work with a concrete group than to look at its general properties. Anyway, the properties we want are:

* For any a and b in G, it's easy to calculate their product ab, and also their inverses a^-1 and b^-1. It's also easy to generate a random element t in G.
* From the property above, it follows that, given an element a in G, you can generate a secret element s and evaluate the conjugate s a s^-1.
* We now assume that the conjugacy search problem, that is, given two elements a and b, finding an element g such that a = g b g^-1, is computationally intractable.

Now, the challenge is the following. Suppose that we have an element a of G, which is public knowledge. Suppose someone, named Alice, generates an element s of G as a private key and publishes the conjugate b = s a s^-1 as a public key. That all makes sense because recovering s from a and b is hard, from our assumptions. Now, Alice wants another person, named Bob, to confirm that she knows the secret. The easiest way to do this is, of course, to show s to Bob. Then, by verifying the identity b = s a s^-1, Bob can confirm the secret. However, that's undesirable because Alice doesn't trust Bob, and doing this she would lose her secret. So, is there a way that Alice can show to Bob that she knows the secret without Bob being able to deduce what it is?
I'll ignore the problem because I want to make a very special post here. TL;DR it's about this image: This image is an Apollonian gasket, and a very asymmetrical one. I actually used a 3-4-5 triangle with the hypotenuse on the x axis to generate it. Most importantly, it's generated in a popular fractal generation software called Apophysis. I put the .flame file for the render on GitHub. The thing is: to my knowledge no one has done that before. The reason is that Apophysis renders fractals with an IFS (iterated function system), and you actually need to come up with an IFS that will converge to the Apollonian gasket. From what I've seen in the literature, the only cases where an IFS was known were for very symmetrical configurations of the gasket, and they actually converged only to a part of it. That's not a problem in most situations, because you can always pick a simple case and transform it into a crazier configuration, but I think that's cheating. I always thought that this limitation of the methods was odd, because my intuition told me that the general case shouldn't be more difficult to solve than the simple ones. So, after working on this problem on and off for the last week, I can finally say... I DID IT!!!!! Yay, go me. I got an original result. Anyway, the GitHub repo I linked before has the code I used, and I wrote a Medium article explaining it. I hope it's useful/fun for people here. If you have reached the monthly limit on Medium, reach out to me and I'll send it to you.
Well, for what it's worth, the Riddler posted the official solution. I was a bit disappointed. He mentions that people ran code to find the optimal value, but their code just scans all possible encodings, without even using dynamic programming to optimize the recursion. He then claims without proof that you should pick numbers close to the golden ratio and look at the nearby ones if they are greater, again without even giving a bound on how many you should check. What a bummer, the proof is pretty interesting.
I'll make another post because I got to the solution by a very convoluted method, and my edits give the impression that it's more difficult than it actually is. We take a function f(x) = 1 + 1/x. Let us look at a fixed-point diagram of it: To converge to the fixed point, each iteration moves in the vertical direction and then in the horizontal direction. Doing this, you will converge to the fixed point, which is the golden ratio. Now, the problem of finding a sequence terminating on N is equivalent to: 1) Start at a rational x between 0 and 1. 2) Hop through the fixed point diagram until you land on a rational number N/m for some m. 3) Out of all m, choose the one that gives you the most hops.

That's it, basically. It turns out that if we look at intervals instead of points, this iteration is much more predictable. This image says it all: What's happening here is: when you are in the red interval (0,1) and apply f, you end up on the blue one (2,infinity). If you are on the blue one and apply f, you end up on the purple one (1,3/2). If you are on the purple one and apply f, you go to the black one (5/3,2). If you are on black and apply f, you go to the yellow one (3/2,8/5). The points that divide these intervals are always ratios of consecutive Fibonacci numbers: 1, 2, 3/2, 5/3, 8/5 and so on. Because of this, counting the number of iterations starting from the red interval is very simple; we can actually assign a number to each color: red -> 0, blue -> 1, purple -> 2, black -> 3, yellow -> 4. As we get closer and closer to the golden ratio, which is the fixed point, this number starts to blow up. Because of this, if we want to maximize this number, notice the two properties: If we denote the golden ratio by phi, and we have two numbers a and b such that a < b < phi, it's always better to pick b, because it's impossible for the number of iterations for b to be smaller.
Similarly, if we have phi < a < b, it's better to pick a, again because it's impossible for the number of iterations for a to be smaller. For any sufficiently large number N, we can always find an m such that N/(m+1) < phi < N/m. Since N/(m+1) and N/m are the ones closest to phi, from below and above, we can find the solution by only looking at these two guys, so we only need to check the Fibonacci sequences ending in tuples (m,N) and (m+1,N). I hope this explanation is better.
Sorry for not posting the solution to my problem earlier. I've been busy at work. To see what the Digimon code is doing, it's convenient to think in terms of vectors. What it does is: it picks two points P1=(x1,y1) and P2=(x2,y2) in the grid, rotates the vector (P2 - P1) 90 degrees counter-clockwise, displaces the midpoint by a random amount times this perpendicular vector, then calls itself recursively at (P1, displaced_point) and (displaced_point, P2), stopping when it has called itself too many times or the length of the segment is too small, in which case it just prints a straight line. This is a variant of random midpoint displacement, a technique commonly used to generate fractals, which are known to model coastlines well. If we look at the randomness in the Digimon algorithm, it's carefully controlled: it takes an input variable to control the intensity of the displacement, and at the minimum the algorithm prints just a straight line. Also, since it's summing three rnd() calls, the distribution function is the Irwin-Hall distribution for n=3, which is already very close to a Gaussian; here the mean is -1/2 and the variance is 1/4 (standard deviation 1/2). This means that it chooses a negative value approximately 84% of the time, so it actually displaces the midpoint more in the clockwise direction. In the end, you're likely to end up with a random fractal between a variant of the dragon curve and the Koch snowflake, which are commonly used to approximate coastlines. Quite advanced math for a children's anime!

---------

Now, for the Riddler problem. There are two key pieces of information to notice. The first is that the Fibonacci-like sequence consists of positive integers, and the second is that two consecutive terms completely determine the sequence, both towards positive infinity and negative infinity. We can combine these two facts to solve the problem.
Lemma 1: if a Fibonacci-like sequence has two consecutive positive integers, then all further numbers a_n as n goes to positive infinity are positive integers as well. We prove this by induction. Suppose we have a tuple (m,n) of consecutive numbers of the sequence. The next tuple is just (n, m+n). Notice that if m and n are positive integers, then so are n and m+n. Applying this inductively, all further elements of the sequence are positive integers.

Lemma 2: let a_n be an element of a Fibonacci-like sequence and assume a_n <= 0. Then, for every tuple (a_{m-1}, a_m) with m <= n, at least one of a_{m-1} and a_m is nonpositive. This comes from an application of Lemma 1. Assume both elements of the tuple are positive. Then, by Lemma 1, every other element of the sequence in the positive infinity direction is also positive, and this includes a_n. We reached a contradiction, so our assumption must be false.

Lemma 2 suggests an algorithm to find all valid Fibonacci-like sequences that start at a tuple (m,n) where both m and n are positive. We simply pick a tuple and go back using (m,n) -> (n-m,m). If at any point we reach 0 or a negative number, we know it's impossible to find a tuple further down satisfying the conditions of the problem, so we stop. This suggests a solution using dynamic programming, which we can do using O(n^2) memory and time, where n is the number for which we want to compute the maximal sequence. I wrote some quick and dirty C++ code for this problem:
Language: cpp

#include <stdio.h>
#include <stdlib.h>

struct rep {
    int m,n,q;
};

const int MAXN = 1000;
rep dp[MAXN][MAXN];

rep compute(int m, int n) {
    rep r = dp[m][n];
    if (r.q != -1) {
        return r;
    }
    if (m >= n) {
        dp[m][n].m = m;
        dp[m][n].n = n;
        dp[m][n].q = 0;
        return dp[m][n];
    }
    r = compute(n-m,m);
    dp[m][n].m = r.m;
    dp[m][n].n = r.n;
    dp[m][n].q = r.q + 1;
    return dp[m][n];
}

int main(int argc, char* argv[]) {
    if (argc < 2) {
        printf("Usage: %s [number]\n",argv[0]);
        return 1;
    }
    int n = strtol(argv[1], NULL, 10);
    if (n <= 0) {
        printf("number must be positive!\n");
        return 1;
    }
    if (n > MAXN - 1) {
        printf("too large!\n");
        return 1;
    }
    for (int i=0; i<=n; i++) {
        for (int j=0; j<=n; j++) {
            dp[i][j].q = -1;
        }
    }
    rep ans;
    ans.q = -1;
    for (int i=1; i<=n; i++) {
        rep r = compute(i,n);
        if (r.q > ans.q) {
            ans = r;
        }
    }
    printf("(m,n,q) = (%d,%d,%d)\n",ans.m,ans.n,ans.q);
}
Running it gives:
[felopfr@500010378997-NB ~]$ g++ fibo.cpp -o fibo
[felopfr@500010378997-NB ~]$ ./fibo 7
(m,n,q) = (2,1,3)
[felopfr@500010378997-NB ~]$ ./fibo 81
(m,n,q) = (3,2,7)
[felopfr@500010378997-NB ~]$ ./fibo 179
(m,n,q) = (11,7,6)
We can verify that these representations work:

2,1,3,4,7 (7 is the 3rd number after the initial 2,1)
3,2,5,7,12,19,31,50,81 (81 is the 7th number after the initial 3,2)
11,7,18,25,43,68,111,179 (179 is the 6th number after the initial 11,7)

I had a lot of fun solving this! Thanks, Fractal!

EDIT: I played around with this code for a while, and it's clear to me now that this thing is just maximization of a function g(m,n) where n is fixed. For all the large values of n I tested, you need to pick the value of m such that n/m is the best rational approximation to the golden ratio. I don't see how to prove this, but if true it could simplify the computation a lot.

EDIT 2: I managed to prove it! And doing so found a much simpler solution! It's much more efficient than dynamic programming, and can work for ridiculously large numbers, as long as the precision of the machine's floating point numbers can handle them.

If you look at my previous code, I've reduced the problem to the evaluation of the function:

g(m,n) = 0, if m >= n
g(m,n) = 1 + g(n-m,m), otherwise

It turns out that I can relate g(m,n) to something with number-theoretical properties. First, let's introduce the function:

f(x) = (x+1)/x

Notice that when we give f a rational number, we get another rational number: f(n/m) = (m+n)/n. Also, notice that it's related to the Fibonacci tuple iteration (m,n) -> (n,m+n). What's happening is that if we give f a rational approximation to the golden ratio, it gives us a better one.

The explanation for this is simple. f is what is commonly known as a loxodromic Möbius transformation. Loxodromic transformations have a repulsive and an attractive fixed point. It turns out that phi=(1+sqrt(5))/2 is the attractive fixed point and its conjugate (1-sqrt(5))/2 is the repulsive one; we can find these values by solving the fixed point equation f(x) = x. Now, a property of Möbius transformations is that they map circles to circles in the complex plane.
For loxodromic transformations, this can be visualized by placing a small circle around the repulsive fixed point and iterating it through the transformation. What happens is that the circle becomes larger as it moves away from the repulsive fixed point, and then starts shrinking around the attractive one. That means that, for intervals close enough to the attractive fixed point, loxodromic transformations are in fact contraction mappings. Since the golden ratio is the attractive fixed point of f(x), iterating f will give us better and better rational approximations to it.

To solve this, we don't need the full theory of Möbius transformations, because the coefficients of f are all real, and also because f(x) = 1 + 1/x is monotonic on the intervals we need (1/x is decreasing for positive x). Because of this, we can ditch all the machinery of circles in the complex plane and work with intervals on the real line, which is much easier!

Let's start. The values where g is fixed at 0 are the ones with m >= n, which means x in the interval [0,1]. Plugging these values into f, we get that [0,1] -> [2,infty]. Doing this again and again, we have:

[1,3/2] -> [5/3,2] -> [3/2, 8/5] -> [13/8, 5/3] -> ...

We see some very interesting patterns here. First, starting from the [1,3/2] interval, this infinite family completely partitions the range [1, 5/3]. A given number cannot be in the intersection of two of them (except for the ratios of consecutive Fibonacci numbers, which sit at the endpoints, but we can treat this case separately). Also, the first, third, fifth, etc. intervals all lie strictly below the golden ratio, while the second, fourth, sixth, etc. lie above it. This pattern tells us that the better a rational number approximates the golden ratio, the later its interval appears in the family, and therefore the more times you need to apply f to get to it. After all of this we find a ridiculously simple solution.
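The contraction claim is easy to check numerically. Here's a small Python sketch of my own (not part of the original argument): iterating f from a rough starting value, the error against phi shrinks at every step.

```python
import math

phi = (1 + math.sqrt(5)) / 2  # attractive fixed point of f

# Iterate f(x) = (x + 1)/x starting from x = 1; each application
# should land strictly closer to the golden ratio.
x = 1.0
errors = []
for _ in range(10):
    x = (x + 1) / x
    errors.append(abs(x - phi))
```

Starting from x = 1 the iterates are exactly the ratios of consecutive Fibonacci numbers: 2, 3/2, 5/3, 8/5, and so on.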
Since we need to treat the case where the approximation is larger and smaller than phi separately, for a given n, we find an m such that n/(m+1) < phi < n/m. It should be noted that m = floor(n/phi). Then, we simply test the Fibonacci-like sequences ending with (m,n) and (m+1,n) and pick whichever is larger. Code is given below:
Language: cpp

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

struct rep {
    int m, n, q;
};

rep g(int m, int n)
{
    rep r;
    if (m >= n) {
        r.m = m;
        r.n = n;
        r.q = 0;
        return r;
    }
    r = g(n-m, m);
    r.q++;
    return r;
}

int main(int argc, char* argv[])
{
    if (argc < 2) {
        printf("Usage: %s [number]\n", argv[0]);
        return 1;
    }
    int n = strtol(argv[1], NULL, 10);
    if (n <= 0) {
        printf("number must be positive!\n");
        return 1;
    }
    double phi = (1 + sqrt(5)) / 2;
    int m = (int)floor(n/phi);
    rep a = g(m, n);
    rep b = g(m+1, n);
    rep ans;
    if (a.q < b.q) {
        ans = b;
    } else {
        ans = a;
    }
    printf("(m,n,q) = (%d,%d,%d)\n", ans.m, ans.n, ans.q);
}
EDIT 3: To Masterjun, below: For one of your examples, the code above generates
[felopfr@500010378997-NB ~]$ ./fibo_improved 102334154
(m,n,q) = (10946,2584,20)
That gives the sequence [10946, 2584, 13530, 16114, 29644, 45758, 75402, 121160, 196562, 317722, 514284, 832006, 1346290, 2178296, 3524586, 5702882, 9227468, 14930350, 24157818, 39088168, 63245986, 102334154], which has one more element than the one you provide.
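As a sanity check, that sequence can be regenerated mechanically. A few lines of Python (the helper name is my own):

```python
def fib_like(m, n, target):
    """Extend the Fibonacci-like sequence starting (m, n) until it reaches target."""
    seq = [m, n]
    while seq[-1] < target:
        seq.append(seq[-1] + seq[-2])
    return seq

# The sequence lands exactly on 102334154, 20 steps after the initial pair.
seq = fib_like(10946, 2584, 102334154)
```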
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Here's something pretty cool I stumbled on recently. I was watching some Digimon Adventure episodes, and in one of them some code is shown on Koushiro's laptop; it's actually a pretty interesting algorithm.
100 /* func sample. coast creation */
110 float s
120 while s<1 or s>=2
130     input "ratio 1 to 2";s
140 endwhile
150 s = (s-1)/10+1
160 screen 1,2,1,1
170 s=sqr(s*s-1)
180 float x0=100, x1=412, y0=0, y1=0
190 fractal(x0,x1,y0,y1,1)
200 line(100, 50, 412, 50, 255, 65535)
210 end
220 func fractal(x0:float,x1:float,y0:float,y1:float,sp:int)
230     float l, r, x2, y2
240     l=sqr((x1-x0)*(x1-x0)+(y1-y0)*(y1-y0))
250     if l<2 or sp>=9 then {
260         line(x0,y0/3+50,x1,y1/3+50,255,65535) : return()
270     }
280     r=rnd()+rnd()+rnd()-2
290     x2=(x0+x1)/2+s*(y1-y0)*r
300     y2=(y0+y1)/2+s*(x0-x1)*r
310     sp = sp + 1
320     fractal(x0,x2,y0,y2,sp)
330     fractal(x2,x1,y2,y1,sp)
340 endfunc
As the internet is a very large place, other people had already seen and analyzed this; some guy ported it to Java here. In any case, the comment and the name of the function pretty much give away what this thing does: it draws a coastline using a fractal.

To analyze it better, it helps to identify which BASIC dialect this is. It's actually X-BASIC, a programming language for the Sharp X68000, a home computer released only in Japan during the 80s. The functions are documented on this page: http://ww3.enjoy.ne.jp/~zoomark/ip/xb/xb_frm.html

Essentially, sqr() is the square root function, and line() draws a line on the screen; its first four arguments are the x and y of the first point and the x and y of the second point, the fifth is a code that represents the color palette, and the sixth the line style (the last two are not important to understand the algorithm). Finally, rnd() returns a random floating point number, uniformly distributed in the interval [0,1).

So, the challenge today is: reverse engineer the function and explain why you expect this thing to be a good approximation for a coastline.
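For anyone who wants to experiment without an X68000 emulator, here's a rough Python port of the listing (my own translation; the drawing calls are replaced by collecting line segments, so it doesn't spoil the "why" part of the challenge):

```python
import math
import random

def make_s(ratio):
    # Mirrors lines 150 and 170 of the BASIC listing.
    s = (ratio - 1) / 10 + 1
    return math.sqrt(s * s - 1)

def fractal(x0, x1, y0, y1, sp, s, segments):
    # Mirrors the recursive func at lines 220-340.
    l = math.sqrt((x1 - x0) ** 2 + (y1 - y0) ** 2)
    if l < 2 or sp >= 9:
        segments.append(((x0, y0), (x1, y1)))
        return
    r = random.random() + random.random() + random.random() - 2
    x2 = (x0 + x1) / 2 + s * (y1 - y0) * r
    y2 = (y0 + y1) / 2 + s * (x0 - x1) * r
    fractal(x0, x2, y0, y2, sp + 1, s, segments)
    fractal(x2, x1, y2, y1, sp + 1, s, segments)

segments = []
fractal(100, 412, 0, 0, 1, make_s(1.5), segments)
```

The recursion order guarantees the segments come out left to right, starting at (100, 0) and ending at (412, 0), just like the BASIC version.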
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Your intuition is essentially perturbation theory. If we look at the formula (a*cos(pi/(2n)))^n and notice that a = 1 - d for a small d, we can expand the functions (1-d)^n -> 1 - nd and cos(pi/(2n))^n -> 1 - pi^2/(8n), provided that both nd and pi^2/(8n) remain small. Since this is actually true for the solution, we can approximate the expression as 1 - nd - pi^2/(8n).

Now, if we call x = nd the loss due to extra polarizations and y = pi^2/(8n) the loss due to cosines, notice that the product xy = d*pi^2/8 is a constant, and we are interested in minimizing their sum. From AM-GM, this happens when x = y, exactly as you claimed. However, this should not be expected to give the exact result, because of the linearizations in the formula. Setting x = y gives the same result as a = cos(pi/(2n)) for small arguments: just notice that 1 - d = 1 - pi^2/(8n^2) => n = pi/sqrt(8d). The estimate of n from this method is 11.1072, a bit different from yours, since the equation you solved is a bit different. If you expand more powers in perturbation theory, you can get a better result, at the cost of making the equation harder to solve, so it's probably better to use other numerical methods to solve the transcendental equation.

Perturbation theory does give us some nice information about the limit d -> 0 (or equivalently, a -> 1). I think I would have to be more rigorous to prove this, but by plugging in the formula for n, we get that the wave amplitude (which is different from the intensity, and does decay with a cosine) decays by 1 - sqrt(d)*pi/sqrt(2). It definitely looks like, in the limit d -> 0, the wave amplitude decreases asymptotically with a sqrt(d) dependency, and the proportionality factor is pi/sqrt(2).
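The numbers above are easy to reproduce. A quick Python check of the perturbative formulas for d = 0.01 (i.e. a = 0.99):

```python
import math

d = 0.01  # a = 1 - d = 0.99

# Perturbative optimum: n = pi / sqrt(8 d)
n_est = math.pi / math.sqrt(8 * d)

# Predicted amplitude after the optimal number of steps: 1 - pi * sqrt(d) / sqrt(2)
amp = 1 - math.pi * math.sqrt(d) / math.sqrt(2)
```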
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
The solution to the cannonball problem using elliptic curves, which is pretty awesome, consists in writing down the sum of squares formula for n and equating it to a perfect square. We obtain:

n*(n+1)*(2n+1) = 6k^2

Now, if we change the variable names, we have:

6y^2 = 2x^3 + 3x^2 + x

This looks like an elliptic curve, because it has y^2 equal to a cubic polynomial in x, and just by looking at it we know the number of integer solutions is finite. Why? Well, we can change variables to put it in Weierstrass form, in such a way that every integer solution to our curve maps to an integer point on the Weierstrass curve (and conversely, every integer point there gives a candidate integer solution to our curve). There's a theorem by Siegel that states that the number of integer points on an elliptic curve is always finite. Since we have a finite number of candidates, we must have a finite number of solutions to our equation as well.

Now, the hard part is to actually convert it to Weierstrass form. There are several ways to do it. The easiest is to write it in homogeneous form (essentially, just put z's in every monomial until it reaches degree 3) and feed it to Sage, together with a rational point, which we can derive from the solution x=1, y=1. From Sage you can find the minimal model, and if you read its output carefully, you find that you arrive at the curve y^2 = x^3 - 36x. The morphisms are:
Scheme morphism:
  From: Projective Plane Curve over Rational Field defined by 2*x^3 + 3*x^2*z - 6*y^2*z + x*z^2
  To:   Elliptic Curve defined by y^2 = x^3 + 18*x^2 + 72*x over Rational Field
  Defn: Defined on coordinates by sending (x : y : z) to
        (-x : -6*y : -1/12*z)
Generic morphism:
  From: Abelian group of points on Elliptic Curve defined by y^2 = x^3 + 18*x^2 + 72*x over Rational Field
  To:   Abelian group of points on Elliptic Curve defined by y^2 = x^3 - 36*x over Rational Field
  Via:  (u,r,s,t) = (1, -6, 0, 0)
What this means is: first Sage does (ignoring the signs) y' = 6y and z' = z/12. This scaling on z is equivalent to multiplying x and y by 12, which means y' = 72y and x' = 12x. Also, to get to the minimal model it does y' = Y and x' = X - 6. At the end of the day, the rational transformation Y = 72y and X = 12x + 6 brings us to the curve Y^2 = X^3 - 36X, and for every integer point on it we have a candidate integer solution to the original problem.

Now, if you remember, that's exactly the same elliptic curve we used for the congruent number problem from some time ago! Isn't that cool? Anyway, you can simply look it up or just use Sage again! Try putting the following code here to see the magic:
Language: python

R3.<x,y,z> = PolynomialRing(QQ,3)
cubic = 2*x^3+3*x^2*z+x*z^2-6*y^2*z
P = [1,1,1]
#E = EllipticCurve_from_cubic(cubic, [1,1,1], morphism=True)
#E
E1 = EllipticCurve(cubic, [1,1,1])
E = E1.minimal_model()
#iso = E1.isomorphism_to(E)
#iso
E.integral_points()
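As an independent check that needs no Sage, the rational map Y = 72y, X = 12x + 6 derived above can be verified for both known solutions in plain Python:

```python
# Check both integral solutions of n(n+1)(2n+1) = 6 k^2 (writing x = n, y = k)
# against the minimal model Y^2 = X^3 - 36 X via Y = 72 y, X = 12 x + 6.
solutions = [(1, 1), (24, 70)]
mapped = []
for x, y in solutions:
    assert x * (x + 1) * (2 * x + 1) == 6 * y * y
    X, Y = 12 * x + 6, 72 * y
    assert Y * Y == X ** 3 - 36 * X
    mapped.append((X, Y))
```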
Whatever method you use, you will find X = 294 and Y = 5040, which correspond to x = 24 and y = 70, the only integral solution besides x=y=1. And there you go: by relying on very general theorems about the structure of elliptic curves, you managed to solve a classic number theory problem using your computer, with no need to understand convoluted elementary proofs. Isn't modern math amazing? ----
FractalFusion wrote:
This week's Riddler Classic is an interesting geometric problem.
This one looks like a simple optimization to me. Suppose you want to do it using n steps. You want to maximize a product of cosines whose arguments sum to a given number. To do this you can apply Jensen's inequality to the function log(cos(x)). Since it's concave for 0 < x < pi/2, we find that cos(x/n)^n >= prod_i cos(x_i) whenever the x_i sum to x, with equality when each x_i = x/n. Essentially, what all this means is that to maximize the product of cosines, given that all angles sum to some fixed amount, you must choose the arguments of the cosines to divide the interval evenly. So essentially you want to find the maximum of a^n*cos(pi/(2n))^n for a = 0.99. This sequence has a maximum because it's decreasing for large n.
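A brute-force Python sketch confirms where the sequence peaks for a = 0.99:

```python
import math

a = 0.99

def intensity(n):
    # a^n * cos(pi/(2n))^n, the quantity to maximize over the number of steps n
    return (a * math.cos(math.pi / (2 * n))) ** n

# The sequence decays for large n, so scanning a finite range is enough.
best_n = max(range(1, 200), key=intensity)
```

The maximum lands at n = 11, consistent with the perturbative estimate pi/sqrt(8d) ≈ 11.1.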
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
This problem is pretty hard to solve by hand, but there's a way to solve it using freely available software and math tables. For which natural numbers n is the sum of the first n perfect squares also a perfect square?
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
This one is not really a challenge, but a dissertation, and I wanna know what people here think about it. I came across a post that asked for the "correct" value of 8 / 2 * 4.

Of course, the way we all learn it is that division and multiplication are two different operations that have the same precedence, and the way to resolve the ambiguity in the expression is to compute the operations from left to right, so the result is 16. Learning it this way makes you question why it has to be left to right. Why not right to left? And also, why not have one operation take precedence over the other? I think there's a reason why lots of people discuss this, and we should do a better job teaching it.

One clear tendency in mathematics is abstraction and minimalism. This principle says that the fewer axioms we need to assume, the better, because the more general our thought process becomes. If we understand multiplication this way, and want to keep expressions to a minimum, avoiding the use of parentheses, there's clearly a way that's more minimal, and in my opinion it's the reason for the left-to-right rule.

The better way to teach this, for me, would be to say that multiplication and division are not distinct operations; there's in fact only one "fundamental" operation, which is of course multiplication. It turns out that it has some very useful properties. For example, it is associative (for real numbers, also commutative, although that's not important for my argument here), it has a unit element, namely 1, such that a * 1 = 1 * a = a, and every nonzero element a has an inverse: you can always find an element a^(-1) such that a * a^(-1) = 1. Division is merely a consequence of these properties: when we write a / b, what we really mean is a * b^(-1).

Why is it better to think this way? Because it's clear that an expression like 8 / 2 * 4 should have a "canonical" result. When we write 8 / 2 * 4, what we really mean is 8 * (1/2) * 4.
From this point of view, there's no need to group the numbers with parentheses at all, since multiplication is associative! We only treat division and multiplication as separate operations and apply the left-to-right rule because it always gives the same result as this more fundamental explanation. Anyway, what do you guys think? Do you agree with me?
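The single-operation view can be made concrete with a tiny Python illustration using exact rationals (my own toy example):

```python
from fractions import Fraction

a, b, c = Fraction(8), Fraction(2), Fraction(4)

left_to_right = a / b * c    # the school rule: evaluates to 16
as_inverses = a * b**-1 * c  # "division" rewritten as multiplication by an inverse
regrouped = a * (b**-1 * c)  # associativity makes the grouping irrelevant
```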
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
For fun: If S is an empty set, show that inf S = +infinity, and sup S = -infinity Is there a non-empty subset S of the reals such that inf S > sup S?
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Bobo the King wrote:
I don't even "get" object-oriented programming and every foray of mine into it has left me not only confused but outright contemptuous toward programming as a profession. "This makes no sense! This whole paradigm has to be a fiction!" (I've grown a little and am beginning to sort of understand object-oriented programming at a conceptual level, but I wouldn't dare say anything I've ever programmed has been object-oriented in any meaningful sense. Even if/when I figure it out, I'll probably always think of programs as code that, you know, does stuff, not objects. I'm a verb guy, not a noun guy.)
I will just comment a little bit on this to tell you how it is to have programming as a profession, and maybe you can take from it what's best for you.

First of all, working with programming as a profession involves problems very different from those you attacked in this post. Although you can get paid to do something similar to using data structures and numeric simulations, from my experience there's a reversal of logic in these roles. Usually they already have a result they want to achieve, either because they want to sell something or because they got funding to study it, and you have to come up with a method to achieve it, and the ethics of this are very questionable.

In any case, it must be clear to you that doing something for other people is quite different from doing it for yourself. Just look at the difference between cooking for you and your friends and opening a restaurant. Because of this, most problems you will get paid for solving have the following characteristics:

1) The person asking you to solve them usually has no idea how they are solved, and doesn't really want to know (just ask yourself: if someone knew all the secrets of cooking, would they pay a restaurant to make food for them, or do it themselves?)
2) They are "dumb" problems, in the sense that it's pretty clear that they have a solution, but it takes a lot of effort to implement it.
3) The biggest difficulty is in scaling, because for any business to be profitable, it needs to have lots of customers. Because of that, the limitation is usually on IO, not on processing time, and the theory of distributed systems is much more relevant than standard algorithms for solving them.
4) It's inefficient to build something from the ground up rather than reuse something which is already working, so you need to be willing to work with technologies you do not like.
5) Almost all problems can be solved using a CRUD design.
This actually becomes fun after some time, because you start trying to figure out how most software you use works by applying this principle :P

Also, I'd like to say that it's perfectly fine to not like OOP. It has recently received some criticism in the industry, and lots of people are using functional programming concepts along with it. However, if you get used to the problems most programmers need to handle, you will find that OOP is a pretty reasonable paradigm, and even if you are not interested in those problems, you can use it to make the code that solves the problems you do find interesting accessible to nontechnical people.

I remember using a complicated algorithm exactly once to solve a business-related problem. Some criminals had managed to commit credit card fraud by creating fake accounts, and we thought they had made a mistake by using a small number of devices to create accounts for many document numbers. In the logs, we could see the device-id that created the account for a given document number, and also list all document numbers that a device-id interacted with. From this, I could construct a bipartite graph, and using depth-first search starting from a number of accounts where the fraud was confirmed, I traversed the entire component, which not only allowed us to track the devices used, but also which accounts were affected, and we could close some before they stole more money.

However, this is the exception rather than the rule, and if you're willing to consider programming as a profession, ask yourself these questions:

1) Are you willing to see yourself more as a consultant for the business who handles technical issues, rather than a programmer who's paid to write code?
2) Are you willing to understand how to sell a product so that both you, and the person paying you, are successful in the end?
3) Are you willing to spend a considerable amount of your day explaining simple things to non-technical people?
4) Are you willing to use technologies you do not like in order to deliver a product?

In any case, best of luck to you and welcome back. The code you wrote is pretty cool!
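The fraud investigation described above boils down to finding a connected component of a bipartite graph. Here's a minimal Python sketch with made-up data (the device and document IDs are hypothetical):

```python
from collections import defaultdict

# Hypothetical interaction logs: (device_id, document_number)
logs = [
    ("dev1", "doc1"), ("dev1", "doc2"),
    ("dev2", "doc2"), ("dev2", "doc3"),
    ("dev3", "doc9"),  # unrelated activity
]

# Bipartite adjacency, linked in both directions
graph = defaultdict(set)
for dev, doc in logs:
    graph[("device", dev)].add(("doc", doc))
    graph[("doc", doc)].add(("device", dev))

def component(seeds):
    """Depth-first search starting from the confirmed-fraud documents."""
    stack, seen = list(seeds), set(seeds)
    while stack:
        node = stack.pop()
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                stack.append(neighbor)
    return seen

affected = component([("doc", "doc1")])
```

Everything reachable from the seed is flagged (devices and accounts alike), while unrelated activity stays out of the component.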
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Answer: Use the substitution u = x^phi, so du = phi*x^(phi-1) dx => dx = du*x^(1-phi)/phi = du*u^(1/phi-1)/phi. Notice that phi satisfies the equation phi^2 - phi - 1 = 0. If we move stuff around, we can obtain phi - 1 = 1/phi. Plugging this in, we get dx = du*u^(phi-2)/phi. Therefore, we reduce to the following integral:

(1/phi) * integral from 0 to infinity of u^(phi-2)/(1+u)^phi du

This is just an integral representation of the Beta function: https://dlmf.nist.gov/5.12 Applying formula 5.12.3 of the given reference for a = phi-1 and b = 1, we see that the integral is just B(phi-1,1)/phi. The answer follows from simple algebraic manipulation of beta and gamma functions:

B(phi-1,1)/phi = Gamma(phi-1)*Gamma(1)/(Gamma(phi)*phi) = 1/((phi-1)*phi) = 1/(phi^2 - phi) = 1

So, we get the quite amazing result that the integral is simply 1.
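The beta/gamma manipulation can be double-checked numerically with a quick Python sketch:

```python
import math

phi = (1 + math.sqrt(5)) / 2

# B(phi-1, 1) = Gamma(phi-1) * Gamma(1) / Gamma(phi)
beta = math.gamma(phi - 1) * math.gamma(1) / math.gamma(phi)

# The integral equals B(phi-1, 1) / phi, which should come out as exactly 1
value = beta / phi
```

The identity Gamma(phi) = (phi-1)*Gamma(phi-1) collapses the quotient to 1/((phi-1)*phi) = 1/(phi^2 - phi) = 1.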
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Let phi denote the golden ratio. Evaluate:

integral from 0 to infinity of dx/(1 + x^phi)^phi
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
This one is very fun if you solve it geometrically. However, if you want to use it as a calculus exercise, go ahead! Find the minimum of f(x)=sqrt(x^2+4x+13)+sqrt(x^2-8x+41)