Riddler Classic this week:
----
You and I are playing a game. It’s a simple one: Spread out on a table in front of us, face up, are nine index cards with the numbers 1 through 9 on them. We take turns picking up cards and putting them in our hands. There is no discarding.
The game ends in one of two ways. If we run out of cards to pick up, the game is a draw. But if one player has a set of three cards in his or her hand that add up to exactly 15 before we run out of cards, that player wins. (For example, if you had 2, 4, 6 and 7, you would win with the 2, 6 and 7. However, if you had 1, 2, 3, 7 and 8, you haven’t won because no set of three cards adds up to 15.)
Let’s say you go first. With perfect play, who wins and why?
----
This is a well-known problem with a trick. See if you can find the trick. (The answer to the question isn't as important even though I know what it is.)
Easy. Arrange the cards 1 to 9 in a 3x3 magic square. There is exactly one such magic square (up to reflections and rotations). Three cards add up to 15 if and only if they form a row, column, or diagonal of the magic square, so the game in question is simply tic-tac-toe. Since tic-tac-toe is a draw with perfect play, this game is a draw as well.
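If you want to convince yourself of the correspondence, here's a quick check in Python (my own sketch, not part of the answer above): the 3-element subsets of 1..9 that sum to 15 number exactly 8, one for each row, column, and diagonal of the magic square.

from itertools import combinations

# Enumerate the 3-card combinations from 1..9 that sum to 15.
triples = [c for c in combinations(range(1, 10), 3) if sum(c) == 15]
print(len(triples))  # 8, matching the 8 winning lines of tic-tac-toe
print(triples)

# One magic square whose rows, columns and diagonals are exactly these triples:
#   2 7 6
#   9 5 1
#   4 3 8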
A few days ago I wanted to practice mental math calculation (just for fun) and found this nice site for it:
https://www.mathtrainer.org/
Each time you solve mental math problems within an acceptable amount of time, you level up. There are 100 levels in total, and each level is slightly more difficult than the previous one.
They start out incredibly easy and become extremely difficult toward the end, so it is kind of a video game with 100 levels :)
I wonder what maximum level you can get there?
In another forum, the problem of "for any two natural numbers m and n, the nth root of m is either an integer or irrational" came up.
A rather simple proof was presented. Essentially: If X = m^(1/n), then X^n is an integer (more specifically, m). This means that X cannot be a non-integer rational number, because a non-integer rational number raised to a positive integer power cannot be an integer. (Proving this last thing isn't very difficult.)
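A minimal write-up of that last step (my own sketch, not from the original thread), in LaTeX:

% If x is a non-integer rational and n is a positive integer, x^n is not an integer.
Let $x = \frac{p}{q}$ in lowest terms with $q > 1$. Then
\[
x^{n} = \frac{p^{n}}{q^{n}}, \qquad \gcd(p^{n}, q^{n}) = 1, \qquad q^{n} > 1,
\]
so $x^{n}$ is again a rational number in lowest terms with denominator greater than $1$, hence not an integer.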
To this I commented: "Now we only have to prove that sqrt(2) is not an integer, and we have proven that it's irrational."
I got a bit stumped. How do you prove that sqrt(2) isn't an integer? Sometimes I get stumped on the most trivial things. My math-fu isn't very strong.
You can do it like this: 1² = 1 < 2 < 4 = 2², so sqrt(1) < sqrt(2) < sqrt(4), i.e. 1 < sqrt(2) < 2, and there is no integer strictly between 1 and 2.
Of course, this assumes the monotonic property (strictly increasing) of square roots. In other words, x<y ⇒ sqrt(x)<sqrt(y), which can be proven easily. (Here's my attempt)
I just realized something I hadn't thought of before: The fundamental theorem of arithmetic indirectly proves that the set of all possible finite strings is countable (which in itself proves many other related things, such as the set of algebraic numbers being countable).
The fundamental theorem of arithmetic states that any positive whole number n can be expressed as a unique product of primes, which means:
n = p1^n1 * p2^n2 * p3^n3 * ...
where p1, p2, etc. are all the primes and n1, n2, etc. are nonnegative whole numbers.
This means that any (finite) string of nonnegative whole numbers (n1, n2, ...) can be uniquely mapped to a positive whole number (using the exponent ni + 1 instead of ni keeps the map injective even when the string ends in zeros). The fundamental theorem of arithmetic proves this indirectly. Thus it proves that e.g. the set of algebraic numbers is countable.
From there it follows pretty easily that there exist real numbers that cannot be described in a finite amount of information, and that these numbers are the vast majority of all numbers.
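As a concrete illustration of the encoding (my own sketch, not from the post above), in Python, using sympy's prime(i) for the i-th prime and the exponent bump mentioned above:

from sympy import prime

def encode(seq):
    # Map a finite tuple of nonnegative integers to a unique positive integer.
    result = 1
    for i, n in enumerate(seq):
        result *= prime(i + 1) ** (n + 1)  # exponent n+1 avoids trailing-zero collisions
    return result

print(encode([1, 0, 3]))     # 2^2 * 3^1 * 5^4 = 7500
print(encode([1, 0, 3, 0]))  # 7500 * 7 = 52500, distinct from the above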
Time for an actual challenge for a change.
In his last couple of videos, the youtuber blackpenredpen has tackled the question of finding a non-zero function y such that
y'⋅y'' = y'''
(In other words, the first derivative of y times the second derivative of y equals the third derivative of y.)
It seemed surprisingly non-trivial.
There's a trick to solving ODEs like this. First, call f the derivative of y. You have
f⋅f' = f''
Now, let us call the argument of the function x. It's a second order ODE, but since it does not depend explicitly on x, you can always make a change of variables that will convert it to first order.
Set u = df/dx. Apply the chain-rule and get f'' = du/dx = du/df * df/dx = u*du/df
Now plug everything:
uf = u du/df => f df = du => f² + c = 2u
Now substitute u and separate:
dx/2 = df/(f²+c)
Now you integrate. I'll assume c is positive. If c is negative, I'll just state what needs to be changed later. Make c = b². Then:
x/2 = arctan(f/b)/b - k/b
f = b*tan(bx/2 + k)
Finally, y is simply the integral of f. The integral of tan(x) is just -ln(cos(x)), so:
y = -2 * ln(cos(bx/2+k)) + a
That has three free parameters (as expected from a third order ODE).
To include the solution y = a, notice that if you take b = 0, then f=0, and y = a.
Now, if you want to check, knowing that the derivative of tan(x) is sec²(x), and that the derivative of sec(x) is sec(x)tan(x):
y' = b*tan(bx/2+k),
y'' = b²/2 * sec²(bx/2+k),
y''' = b³/2 * sec²(bx/2+k)tan(bx/2+k)
So, if you multiply the first two you do get the third one, as expected.
If c is negative, you have to do c = -b², and since the integral of 1/(x²-b²) is -arctanh(x/b)/b, you get:
f = -b tanh(bx/2+k),
and
y = -2 * ln(cosh(bx/2+k)) + a
You can again check the solution, knowing that the derivative of tanh(x) is sech²(x) and that of sech(x) is -sech(x)tanh(x). The derivatives of the hyperbolic trigonometric functions are always the analogues of the normal trigonometric ones, up to some minus signs, and I always have to look up a table to know which sign it is xD. Anyway:
y' = -b*tanh(bx/2+k),
y'' = -b²/2 * sech²(bx/2+k),
y''' = b³/2 * sech²(bx/2+k)tanh(bx/2+k)
So, again, if you multiply the first two you get the third. These, with the constant function y = a, are all possible solutions.
EDIT: Spoke too soon! The equation at the beginning:
uf = u du/df,
also has a solution if u = 0!
That means y'' = 0, which has the solution
y(x) = ax + b,
which is another possible family of solutions. In this case, the second and third derivatives vanish, and the differential equation is, thus, always satisfied.
EDIT 2:
And, I missed another. If c = 0, we have:
dx/2 = df/f² => x/2 = -1/f - k/2 => f = -2/(x+k) => y = -2*ln |x+k| + a. To verify:
y' = -2/(x+k)
y'' = 2/(x+k)²
y''' = -4/(x+k)³
Multiplying the first two you get the third. So, the entire set of solutions should be:
y = -2 * ln(cos(bx/2+k)) + a
y = -2 * ln(cosh(bx/2+k)) + a
y = -2*ln |x+k| + a
y = ax + b
It's the first time I've seen so many different solutions to an ODE!
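As a sanity check (my own sketch, not part of the derivation above), sympy can verify that each family listed above satisfies y'⋅y'' = y''':

import sympy as sp

x, a, b, k = sp.symbols('x a b k')

# The four families found above; a, b, k are arbitrary constants.
candidates = [
    -2*sp.log(sp.cos(b*x/2 + k)) + a,
    -2*sp.log(sp.cosh(b*x/2 + k)) + a,
    -2*sp.log(x + k) + a,
    a*x + b,
]

for y in candidates:
    residual = sp.diff(y, x)*sp.diff(y, x, 2) - sp.diff(y, x, 3)
    print(sp.simplify(residual))  # expected to print 0 for each family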
"Ordinary differential equations," it would seem.
This seems like a good time to ask a question that has been bugging me for a long time, but I have no clue how to approach it, since I am no good at differential equations.
I am looking for two functions, f(x) and g(x), such that
f² - g² = 1
(f')² + (g')² = 1
All I can figure is that lim f' = lim g' = √(2)/2 as x → ∞, but that's only on an intuitive level, nothing rigorous.
If it helps, f(0) = 1, g(0) = 0
The reason why the limit goes to sqrt(2)/2 is that the derivative equation requires f'(x) = cos(u) and g'(x) = sin(u) for some u, while the first equation requires f(x) = cosh(v) and g(x) = sinh(v) for some v. Because both derivatives cannot be zero at the same time, the functions must be either increasing or decreasing. Because of the first constraint, the functions need to approach each other asymptotically, so the derivatives have to approach each other as well. The only values at which sin(u) = cos(u) are +-sqrt(2)/2.
I don't believe that this differential equation has an elementary solution, but we can guess at what they might be related to.
Because f(x) and g(x) are related to cosh(v) and sinh(v) and because they are asymptotically linear, this means that v must be a logarithmic-like function of x (in the limit, but not near 0).
Because f'(x) and g'(x) are related to cos(u) and sin(u) and because they are asymptotically constant, this means that u must also be an asymptotically constant function of x that approaches pi/4.
Make the change of variables: f(x) = cosh(u(x)), g(x) = sinh(u(x))
This way, the first equation is automatically satisfied. Now, for the second:
(sinh² u + cosh² u)(u')² = 1
Now, if you use hyperbolic identities, you find that the first factor is just cosh(2u)
Taking the square root and separating:
sqrt(cosh 2u)du = dx
So, if you could integrate sqrt(cosh 2u), you could invert this equation and write the answer. The problem is that the function is not elementary, so you'd need to use special functions. If you just want the asymptotic behavior at infinity, it's simpler. Remember that:
cosh 2u = (exp(2u) + exp(-2u))/2
Now, if u is very large, the exp(2u) term dominates, so we can approximate the equation:
exp(u)du = sqrt(2)dx => exp(u) = sqrt(2)x
Notice that I dropped the integration constant because it's negligible in the limit where both u and x go to infinity.
Now, we remember our expressions for f and g:
f = cosh(u(x)), g = sinh(u(x))
But notice that, as u->infty, we have cosh(u) ~ sinh(u) ~ exp(u)/2, so
f ~ g ~ exp(u)/2 = x*sqrt(2)/2
So, we see that, for large x, the asymptotic expansion of f and g is just x*sqrt(2)/2. If you differentiate, it's clear that we should have f' ~ g' ~ sqrt(2)/2
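A quick numerical check of this (my own sketch, not from the post): integrate u'(x) = 1/sqrt(cosh(2u)) with u(0) = 0, which corresponds to f = cosh(u), g = sinh(u) with f(0) = 1 and g(0) = 0, and watch f' and g' approach sqrt(2)/2.

import numpy as np
from scipy.integrate import solve_ivp

# u'(x) = 1/sqrt(cosh(2u)) is the separated form of sqrt(cosh 2u) du = dx.
sol = solve_ivp(lambda x, u: 1/np.sqrt(np.cosh(2*u)), (0, 100), [0.0],
                dense_output=True, rtol=1e-10, atol=1e-12)

for x in (1, 10, 100):
    u = sol.sol(x)[0]
    up = 1/np.sqrt(np.cosh(2*u))            # u'(x)
    fp, gp = np.sinh(u)*up, np.cosh(u)*up   # f'(x) and g'(x)
    print(x, fp, gp)                        # both tend toward sqrt(2)/2 ~ 0.7071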
This particular function actually has a pretty nice representation in special functions.
integral(sqrt(cosh(2u)) du) = integral(sqrt(1 - 2 sin²(iu)) du) = -i E(iu | 2) + C, where E is the elliptic E function (the incomplete elliptic integral of the second kind).
So if we represent E(x | 2) as E2(x), then (where E2^(-1) is the inverse):
u = -i E2^(-1)(i(x + C))
So f(x) = cosh(-i E2^(-1)(i(x + C))) and g(x) = sinh(-i E2^(-1)(i(x + C)))
If f(0) = 1, then (because arccosh(1) = 2k pi i where k is an integer)
2k pi i = -i E2^(-1)(iC)
-2k pi = E2^(-1)(iC)
C = -i E2(-2k pi)
g(0) = sinh(-i E2^(-1)(iC))
0 = E2^(-1)(-i E2(-2k pi))
E2(x) is non-zero for x != 0, so k = 0. So C = 0.
So now we have an explicit, albeit difficult-to-use, expression for f and g, namely:
f(x) = cosh(-i E2^(-1)(ix))
g(x) = sinh(-i E2^(-1)(ix))
Plotted:
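As a hedged numeric cross-check of this representation (my own sketch, not from the post): -i*E(iu | 2) should be a real-valued antiderivative of sqrt(cosh(2u)) for real u, which can be tested with mpmath:

from mpmath import mp, ellipe, diff, cosh, sqrt, mpc

mp.dps = 30
i = mpc(0, 1)

# Claimed antiderivative of sqrt(cosh(2u)) along the real u axis.
F = lambda u: (-i * ellipe(i*u, 2)).real

for u in (0.5, 1.0, 2.0):
    print(diff(F, u), sqrt(cosh(2*u)))  # the two values should agree at each u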
And here I was, after reading pwner's post, trying to use the tangent and secant functions for g and f, and I ended up with an even more convoluted integral.
Thanks for the help, guys.
Just FYI, the question that inspired this was: what two functions will trace an object moving along a hyperbolic path at a constant rate, much like how the trig functions trace a steady circle?
Mostly idle curiosity, born around 14-15 years ago when I first learned about the hyperbolic functions and was disappointed that they couldn't be used to trace a hyperbolic motion in the same way the trigs can for a circular path. It wasn't until a few weeks ago that I finally was able to put what I wanted in the form of an equation. Yeah, I'm a little slow.
Actually, since f*f' is just (f²/2)', you can directly write:
(f²/2)' = f''
f²/2 + C = f'
f² + c = 2f'
and so on. The case where u=0 just means that f is constant; that's the case when f² + c = 0 (that gives f = sqrt(-c)).
I saw the blackpenredpen video, and one of the comments describes a missing solution.
It turns out that -arctanh(x/b)/b is not the only function (up to adding a constant) whose derivative is 1/(x²-b²). -arccoth(x/b)/b is another such function. This is because the integral of 1/(x²-b²) is -arctanh(x/b)/b only when -b<x<b. When |x|>b, the integral is -arccoth(x/b)/b. This should give an additional solution of y = -2 * ln|sinh(bx/2+k)| + a.
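A quick hedged check (mine, not from the post) that this extra family also satisfies y'⋅y'' = y''':

import sympy as sp

x, a, b, k = sp.symbols('x a b k')
y = -2*sp.log(sp.sinh(b*x/2 + k)) + a
residual = sp.diff(y, x)*sp.diff(y, x, 2) - sp.diff(y, x, 3)
print(sp.simplify(residual))  # expected to print 0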
(Technically the ln functions should all have absolute value signs, though it doesn't matter for ln(cos) (since -cos is just a shifted cos) and ln(cosh) (since cosh>=1).)
Edit: Also, we're assuming that y is continuous and three times differentiable on its domain. Otherwise, y can just be various pieces of the above solutions.
To be honest, I've never been a fan of using absolute values in logs. It's standard practice, so I do it, but I think it gives a wrong idea of what's happening.
When you have an ODE like xy' = 1, for example, what's really happening is that at x=0 you have a singular point, and you actually have two possible solutions: ln(x) for x>0 and ln(-x) for x<0. This does not violate the uniqueness theorem for ODEs, so there's no problem with that.
This is the same issue as the missing solution you pointed out, thanks for bringing it up. The function being integrated, 1/(x²-b²), is singular at x = +-b, so you should expect different solutions getting separated at x=b and x=-b.
When people say that the solution of xy' = 1 is y = ln |x|, x != 0, they actually give the impression that you can solve an ODE and find a function that's defined on a set of points that's not an interval, which is something that makes no sense in light of all the theorems. So, yeah, although I use absolute values all the time, I'd be happier if people wrote the functions with their correct intervals :)
The short answer is, you can't extend it to complex numbers.
The long answer is, integration assumes some properties about the topology of the space you are working on, and the topology of R is totally different from that of R² and that of the complex numbers C.
For example, in R², a space can be path connected and not simply connected, while in R that's impossible (all path-connected spaces in R are intervals, which are always simply connected). There are other differences, too: a function of one real variable that has a derivative is automatically differentiable, while a function with domain in R² can have derivatives in all directions and still not be differentiable.
In the space of complex numbers, a function having a derivative (with respect to a complex variable) is a notion even stronger than differentiability. For example, if a complex function is differentiable on a simply connected domain, any integral around a closed loop inside it gives exactly zero!
So, as a rule you cannot simply assume that what you found for an ODE will extend to a complex-valued function. In the case we were discussing, involving logarithms, it gets even uglier because the complex logarithm has branch cuts, and it's not simple to do analysis there.