10 out of 10, p4wn3r! You even got the gist of how I derived it!
I started with the identity
max(a,b) = (a + b + |a-b|)/2.
I then used the property
max(a,b,c,d) = max(max(max(a,b),c),d) = max(max(a,b),max(c,d)).
I then iterated the identity to expand the two maxes on the right. Finally, I made a few simplifications to collect like terms and obscure the nature of the problem.
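For reference, here is that expansion spelled out (my reconstruction of the intermediate algebra, not a quote from the original derivation):

\max(a,b,c,d) = \frac{\max(a,b) + \max(c,d) + |\max(a,b) - \max(c,d)|}{2}
              = \frac{(a+b+|a-b|) + (c+d+|c-d|) + \big|\,(a+b+|a-b|) - (c+d+|c-d|)\,\big|}{4}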
I'd say the key to solving it is noticing that both a and b only appear as
a+b+|a-b|
so you can collect those terms as 2a (if a >= b), 2b (if b >= a), or simply 2u (as p4wn3r did) without loss of generality. I solved it by testing all six orderings (u>c>d, u>d>c, c>u>d, c>d>u, d>u>c, and d>c>u), but p4wn3r's method is much more elegant.
Edit: Thanks to max(a,b) appearing on both sides of the equation, a and b only appear in a+b+|a-b|. It probably would have made the problem much harder to put a dent in if I had instead worked from the identity
max(max(max(a,b),c),d) = max(max(a,c),max(b,d))
or something similar. Oh well.
Evaluate
sqrt(1 + 2*sqrt(1 + 3*sqrt(1 + 4*sqrt(1 + ...))))
Suggestion: It's not a coincidence that this is posted after Bobo the King's problem. Try starting from n+1 = sqrt((n+1)^2) and see what you'll get.
EDIT: It's actually simple:
We have n+1 = sqrt((n+1)^2) = sqrt(1 + n(n+2)).
And, similarly, n+2 = sqrt(1 + (n+1)(n+3)) and n+3 = sqrt(1 + (n+2)(n+4)).
Now, just keep nesting: n+1 = sqrt(1 + n*sqrt(1 + (n+1)*sqrt(1 + (n+2)(n+4)))).
Continuing this process indefinitely: n+1 = sqrt(1 + n*sqrt(1 + (n+1)*sqrt(1 + (n+2)*sqrt(1 + ...)))).
For n=2, we have: 3 = sqrt(1 + 2*sqrt(1 + 3*sqrt(1 + 4*sqrt(1 + ...)))).
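If you want to convince yourself numerically first, here's a minimal sketch (the function name, truncation depths, and seed value are my own choices):

import math

# Evaluate the radical sqrt(1 + 2*sqrt(1 + 3*sqrt(...))) from the inside out,
# cutting it off at a finite depth.
def truncated_radical(depth):
    value = 1.0  # arbitrary seed for the innermost, truncated level
    for n in range(depth, 1, -1):  # n = depth, depth-1, ..., 2
        value = math.sqrt(1 + n * value)
    return value

for depth in (5, 10, 20, 40):
    print(depth, truncated_radical(depth))  # approaches 3 as depth grows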
If someone gave an alleged counter-example to the Collatz conjecture, how could one check whether it's indeed a valid counter-example? (After all, you can't just run an algorithm on it forever. Even if the sequence seems to grow without bound, maybe it just takes a really, really long time to reach a value which causes it to eventually collapse to 1.)
Things like this are proved mostly with invariants. I find it easier to understand if an example is given. Take the triple (3,4,12) and follow an algorithm: at each iteration, pick two numbers a and b from the triple and replace them with 0.6a - 0.8b and 0.8a + 0.6b.
You're faced with the problem: will this eventually get to (4,6,12)? The answer is no, and we can see this by summing the squares of the numbers in the triple after an iteration:
(a,b,c) -> (0.6a - 0.8b , 0.8a + 0.6b, c)
(0.6a - 0.8b)^2 + (0.8a + 0.6b)^2 + c^2 = 0.36a^2 - 0.96ab + 0.64b^2 + 0.64a^2 + 0.96ab + 0.36b^2 + c^2 = a^2 + b^2 + c^2
Notice that, no matter which pair you apply it to, the sum of the squares doesn't change: it's an invariant of the process. The square sum of (3,4,12) is 169, while that of (4,6,12) is 196. Thus, we'll never reach (4,6,12), since that would require the invariant to change.
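Here's a minimal sketch of that invariant check in Python (function and variable names are mine, not from the post):

import random

# Apply the rotation step to randomly chosen pairs and watch the sum of
# squares stay at 169, while (4, 6, 12) would need a square sum of 196.
def step(triple, i, j):
    a, b = triple[i], triple[j]
    out = list(triple)
    out[i], out[j] = 0.6 * a - 0.8 * b, 0.8 * a + 0.6 * b
    return tuple(out)

t = (3.0, 4.0, 12.0)
for _ in range(10):
    i, j = random.sample(range(3), 2)
    t = step(t, i, j)
    print(t, sum(x * x for x in t))  # second value stays 169.0 (up to rounding)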
The problem is that, AFAIK, nobody has found a strong enough invariant to disprove the Collatz conjecture; that's one reason it's believed to be true. Also, one way to attempt a computational disproof is to find a number whose sequence reaches itself again (without going to 1, of course); the existence of such a cycle would disprove the conjecture.
I'm not sure I understand correctly. Unless I'm mistaken, you seem to be talking about methods that could in some cases be used to prove or disprove the counter-example as valid, rather than a surefire way of telling whether it is.
As far as I understand, a theoretical counter-example to the Collatz conjecture doesn't need to cause a loop (one which doesn't contain 1), but could instead cause the sequence to grow forever. If the counter-example caused such a loop, then it could indeed be verified programmatically (at least in theory, given enough RAM and time). However, if the alleged counter-example causes the sequence to just grow and grow, never reaching a stable loop (or collapsing to 1), how would you verify that it's indeed a counter-example to the conjecture?
And indeed that's the case.
Of course.
When you asked that, I understood that you were asking how one could come to the conclusion that the number 1 would never be reached while doing only a finite number of computations, but you seem to be looking for an algorithm that determines whether it terminates or not.
I don't know if there is one; it may be that one doesn't exist (Hilbert's tenth problem is the most famous case where it was proved that no algorithm can solve a certain class of number-theoretic problems).
Not necessarily an algorithm, but any mathematical method to assess whether the counter-example is indeed valid (and thus proves the conjecture false). In other words, how would a mathematician check the validity of the proposed counter-example?
It is unknown whether verifying a number's stopping time is decidable at all. Since we don't know, we cannot currently exhibit an algorithm that verifies any given number in a finite amount of time; if we had one, we'd know the problem is decidable.
Let me repeat that: there is no known algorithm (which includes applications of "mathematical methods") that can verify any number you want. If you find one, submit a paper; you'll have proven the Collatz conjecture to be decidable.
When someone gives you a number, you can try several approaches to find an answer:
* you can run it through a computer and hope that it'll find an answer within a reasonable timeframe (see the sketch after this list). If the recursion yields 1 or a cycle, you'll have your answer. If it grows without bounds or you run out of computing time, you haven't gained anything.
* you can try searching for invariants like p4wn3r suggested. This could prove that the number grows without bounds. For example, find a property of n_0; use that property to prove that for a certain i>0, n_i > n_0 and n_i has the same property as n_0. Finding a useful property is non-trivial, though, and you can't brute-force an infinite number of possible properties either.
* you can try a million other things, though there's currently no guarantee that any of these things will yield an answer.
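A minimal sketch of that first, brute-force approach (the step limit and all names are my own choices, not anything from the thread):

# Iterate the Collatz map, detect a repeated value (a cycle), and give up
# after max_steps without drawing any conclusion.
def collatz_verdict(n, max_steps=10**6):
    seen = set()
    for _ in range(max_steps):
        if n == 1:
            return "reaches 1"
        if n in seen:
            return "cycle found: a genuine counter-example"
        seen.add(n)
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return "undecided: ran out of steps (proves nothing)"

print(collatz_verdict(27))  # reaches 1 after 111 steps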
The practical solution would be: if someone gives you an alleged counter-example, you reply: "prove it!" :p
@Warp:
Using heuristic arguments for constructing the proof :P
Invariants are the best method you have for tackling problems like that (in my previous example, if you found out that the square sum always doubled, that would be enough to say it diverges), but the arguments are usually ad hoc and will work only for a very limited class of numbers (I think that's the reason there are so many classes of primes...).
I see it could be done by taking the number, finding an invariant of the sequence it generates (it doesn't have to hold for the entire sequence; if it fails for only a finite number of terms, it still works), and proving that this invariant forces the sequence to diverge.
EDIT: And there's this cute problem: http://en.wikipedia.org/wiki/Lychrel_number :D
This is wrong; x/0 is undefined.
lim_{y->0+} x/y = infinity (for x > 0), but that cannot be simplified to your statement.
/edit: fixed the limit to spoil p4wn3r's fun with nitpicking
| *0
multiplying both sides by zero will not produce an equivalent equation. Whatever you derive from the new equation will not allow you to make statements about the initial value of x.
x*0/0=infinity*0
You're assuming that 0/0 = 1 and infinity*0 = 0. That's wrong twice, neither is a defined value.
x=0
again, I doubt that. Not a single line of your reasoning was correct. You can approach these problems by using proper limits, but as soon as you blindly drop the limits you get wrong results.
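To spell out why 0/0 cannot be assigned a value (my illustration, not part of the original reply): for any constant a,

\lim_{x \to 0} \frac{a x}{x} = a,

even though numerator and denominator both tend to 0. A single value for 0/0 would therefore have to equal every real number at once.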
TASeditor is somewhat right.
Here's the proof that 7 equals 9 (and thus, by the rules of classical logic, every number is equal to 0 AND non-equal to zero)
1, 2, 3.1, 95, 98, me, xp, vista, 7
but I can place zero into x infinity times.
0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0....=(x is not equal to 0)
I think you are confusing "inf*0" with "lim_{x->inf} x*0", which are not the same thing (the former is undefined, the latter is zero).
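Spelled out (my addition):

\lim_{x \to \infty} x \cdot 0 = \lim_{x \to \infty} 0 = 0,

whereas "inf*0" treats infinity as a number that can be multiplied, which it is not in real arithmetic.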
How about this instead: -1 = (-1)^3 = (-1)^(6/2) = ((-1)^6)^(1/2) = 1^(1/2) = 1.
(The task here is, of course, to tell which '=' is incorrect and why.)
How about this instead: -1 = (-1)^3 = (-1)^(6/2) = ((-1)^6)^(1/2) = 1^(1/2) = 1.
(The task here is, of course, to tell which '=' is incorrect and why.)
x^(ab) = (x^a)^b is only valid for integer exponentiation (a, b integers, x nonzero), for rational exponentiation with odd denominators (a, b reduced fractions with odd denominators, x nonzero), or for real exponentiation with a positive base (x > 0; a, b real). Since (-1)^(6/2) = ((-1)^6)^(1/2) satisfies none of these conditions, it is not valid.
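One way to see where the positive-base case comes from (a sketch I'm adding, not part of the original answer): for x > 0,

x^{ab} = e^{ab \ln x} = \left(e^{a \ln x}\right)^b = (x^a)^b,

and every step is safe because \ln x is a well-defined real number and e^y > 0. For x = -1 there is no real \ln x, so the chain breaks down.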
p4wn3r wrote:
Similar to Warp's, find the error:
There's an error? It looks perfectly correct to me! Now if the original question made any sense...
This very subject has been discussed to some extent in this thread, but let me nevertheless pose a conjecture. (In fact, I came up with this when I was in high school, but back then I didn't have enough mathematical knowledge to postulate it as precisely and unambiguously as here.)
Let's assume we have two functions f(x) and g(x), and a constant c, such that:
1) Both f(x) and g(x) are smooth functions (around x=c).
2) f(c) = 0 and g(c) = 0.
3) For some n, the nth derivative of f(x) is non-zero at c. (I think this is a more formal way of stating that f(x) is not simply the function "0". Is there a better way?)
Conjecture: lim_{x->c} f(x)^g(x) = 1
Can this conjecture be proven as true (or a counter-example given)? (In case there is a counter-example, is there any way to modify one of the prerequisites, or add a new one, that would make it true?)
3) For some n, the nth derivative of f(x) is non-zero at c. (I think this is a more formal way of stating that f(x) is not simply the function "0". Is there a better way?)
Yes, just say "f is not the null function on any neighborhood of c".
It's not required that some derivative be non-zero for the function to be non-constant. Take f(x) = e^(-1/x^2) for x != 0 and f(0) = 0.
Every derivative of f at x=0 is zero, yet f is not constant on any neighborhood of x=0. Of course, having a non-zero derivative guarantees that the function is not constant around that point, but the converse is not true.
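A quick sanity check of this example with sympy (my addition):

# The first few derivatives of exp(-1/x^2) all tend to 0 at x = 0, even
# though the function is not identically zero near 0.
from sympy import symbols, exp, diff, limit

x = symbols('x')
f = exp(-1 / x**2)
for k in range(4):
    print(k, limit(diff(f, x, k), x, 0))  # prints 0 for every k
print(f.subs(x, 1))  # exp(-1) != 0, so f is not the null function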
That said, I know the result holds if f and g are both analytic and f differs from the null function on a neighborhood of c. I also know that the condition requiring f and g to be analytic can be weakened, but probably not as far as just f^(n)(c) != 0, or they wouldn't even mention this requirement on f.
However, all examples I know of smooth non-analytic functions with a non-zero derivative involve Fourier series or other long constructions like that, which I find way too boring to verify whether they hold here. Perhaps you can find something in "Counterexamples in Analysis".
A proof for analytic functions f, g, where f is locally positive except at c:
Since f is not locally zero, f has finite order at c (the smallest n for which f^(n)(c) is not 0). If g does not have finite order at c, then g is locally zero, so lim(c) f^g = lim(c) f^0 = 1. So assume g has finite order at c.
f^g = e^(g ln f), so consider g ln f. This is an indeterminate form, since lim(c) g = 0 and lim(c) ln f = -infinity (the latter holds since lim(c) f = 0 and f is locally positive). Write g ln f as ln f / (1/g) and use l'Hôpital's rule:
(ln f)' / (1/g)' = -f' g^2 / (f g')
The orders of f and g (at c) are at least 1, since f(c)=g(c)=0. Let the order of f be m and the order of g be n. Then ord(f')=m-1, ord(g')=n-1, and ord(g^2)=2n.
Then the order of -f' g^2 / (f g') at c is:
(m-1)+2n-m-(n-1)=n>0
which means lim(c) -f' g^2 / (f g') = 0. So lim(c) f^g=lim(c) e^(g ln f)=e^0=1.
The non-analytic case is a lot different though.
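A numeric spot-check of the analytic case (the example functions f(x) = x^2 and g(x) = x are my own choice, both vanishing at c = 0 with f locally positive):

# The proof above predicts lim_{x->0+} f(x)^g(x) = 1.
for x in (0.1, 0.01, 0.001, 1e-6):
    print(x, (x * x) ** x)  # tends to 1 as x -> 0+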
I finally got around to doing this problem:
Warp wrote:
Let's assume we have two functions f(x) and g(x), and a constant c, such that:
1) Both f(x) and g(x) are smooth functions (around x=c).
2) f(c) = 0 and g(c) = 0.
3) For some n, the nth derivative of f(x) is non-zero at c. (I think this is a more formal way of stating that f(x) is not simply the function "0". Is there a better way?)
Conjecture: lim_{x->c} f(x)^g(x) = 1
Here f(x) = e/(e-1) - sum_{n=0}^inf cos(nx)/e^n. We can verify that f(0)=0, because when x=0 all the cosines turn to 1 and the series becomes a geometric progression summing to e/(e-1). For x in (-2pi, 2pi) excluding 0, at least one of the cosines is less than 1, so the series sums to less than e/(e-1); thus f(x) > 0 on that interval, and the exponentiation is defined there.
Using the fact that |cos(nx)| is at most 1, and taking M_n = e^(-n), we can apply the Weierstrass M-test to the series of functions in f: since the series with terms M_n converges, the series of functions in f converges uniformly and can thus be differentiated term by term.
We have f'(x) = sum_{n=1}^inf n sin(nx)/e^n.
The absolute convergence of the series of functions isn't true just for the geometric series of 1/e^n; it also holds for n/e^n, n^2/e^n and, in general, n^k/e^n. There are many ways to prove this; the one that comes to my mind right away is to use the inequality e^n >= n^(k+2)/(k+2)!, which gives n^k/e^n <= (k+2)!/n^2, and then use a comparison test with the absolutely convergent series 1/n^2.
Because of this, the Weierstrass M-test guarantees uniform convergence for the series of every derivative of f, so each derivative exists and is again given by term-by-term differentiation. This is enough to say that f is smooth, and also to satisfy the remaining condition of the conjecture: f''(0) = sum_{n=1}^inf n^2/e^n != 0.
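A quick numeric check of these claims, assuming the series reconstructed above (all names here are mine):

import math

# Partial sums of f(x) = e/(e-1) - sum_{n>=0} cos(n*x)/e^n and of
# f''(0) = sum_{n>=1} n^2/e^n.
def f(x, terms=200):
    e = math.e
    return e / (e - 1) - sum(math.cos(n * x) / e**n for n in range(terms))

print(f(0.0))            # ~0, as claimed
print(f(1.0) > 0)        # True: f is positive away from 0 on (-2pi, 2pi)
print(sum(n * n / math.e**n for n in range(1, 200)))  # f''(0) > 0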
For g, it's fairly obvious that g(0)=0 and that g is smooth.
Now, for the limit, we use an idea similar to FractalFusion's: f^g = exp(g ln f), and find the limit of the exponent using l'Hôpital.
This time, it's significantly more annoying because we have to differentiate the numerator and the denominator four times to get away from the indeterminate form, eventually:
Just substitute x=0 and:
(the minus sign shouldn't be in the last member, oh well)
This time, it's significantly more annoying because we have to differentiate the numerator and the denominator four times to get away from the indeterminate form
Wait, isn't l'Hôpital's rule valid only if the resulting limit exists? In other words, if applying the rule once results in another indeterminate form, the rule cannot be applied. (At least that's what I understand from the Wikipedia article.)
Or is the rule nevertheless valid if, by applying it several times, you end up with a determinate limit? (At least the Wikipedia article doesn't say so.)
If the calculation is nevertheless valid, is the infinite sum essential for the conjecture to become false? If yes, is there a simple/sensible way of expressing that f(x) (and possibly g(x)) must have only a finite number of terms?